paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
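The schema above describes one record per paper, with six parallel `review_*` lists that must stay aligned. A minimal sketch of a record plus a consistency check (field names come from the schema; the example values are abbreviated from the first record below — this is an illustration, not an official loader):

```python
# Sketch: validate that the parallel review_* lists of a record are
# mutually consistent. Field names follow the schema above.

PARALLEL_FIELDS = [
    "review_ids", "review_writers", "review_contents",
    "review_ratings", "review_confidences", "review_reply_tos",
]

def validate_record(rec):
    """Return the shared list length, or raise if the lists disagree."""
    lengths = {f: len(rec[f]) for f in PARALLEL_FIELDS}
    if len(set(lengths.values())) != 1:
        raise ValueError(f"mismatched parallel lists: {lengths}")
    return lengths[PARALLEL_FIELDS[0]]

record = {
    "paper_id": "iclr_2020_SylUiREKvB",
    "paper_title": "Variational Hyper RNN for Sequence Modeling",
    "paper_acceptance": "reject",
    "label": "val",
    "review_ids": ["rylIcVsaYH", "SylW2nrRtS"],
    "review_writers": ["official_reviewer", "official_reviewer"],
    "review_contents": ["This paper proposes ...", "..."],
    # A rating/confidence of -1 marks posts (author replies, public
    # comments) that carry no score.
    "review_ratings": [6, 6],
    "review_confidences": [3, 4],
    "review_reply_tos": ["iclr_2020_SylUiREKvB", "iclr_2020_SylUiREKvB"],
}
```

The `review_reply_tos` field threads each post to either the paper id (a top-level review) or another post's id (a reply).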
iclr_2020_SylUiREKvB
Variational Hyper RNN for Sequence Modeling
In this work, we propose a novel probabilistic sequence model that excels at capturing high variability in time series data, both across sequences and within an individual sequence. Our method uses temporal latent variables to capture information about the underlying data pattern and dynamically decodes the latent information into modifications of weights of the base decoder and recurrent model. The efficacy of the proposed method is demonstrated on a range of synthetic and real-world sequential data that exhibit large scale variations, regime shifts, and complex dynamics.
reject
The paper proposes a neural network architecture that uses a hypernetwork (RNN or feedforward) to generate weights for a network (variational RNN) that models sequential data. An empirical comparison of a large number of configurations on synthetic and real-world data shows the promise of this method. The authors have been very responsive during the discussion period, and generated many new results to address some reviewer concerns. Apart from one reviewer, the others did not engage in further discussion in response to the authors updating their paper. The paper provides a tweak to the hypernetwork idea for modeling sequential data. There are many strong submissions at ICLR this year on RNNs, and the submission in its current state unfortunately does not pass the threshold.
val
[ "rylIcVsaYH", "Hkxn7dMhsr", "SJgwukGhiB", "BJx5jWQ2jS", "S1gtkwM2ir", "B1eYrNz2sS", "BklLq7Mnsr", "SylsiWDSoS", "HkxATlPrsr", "Skl0TyPSjH", "SylW2nrRtS", "Bkl7IVhBqS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes the variational hyper RNN (VHRNN), which extends the previous variational RNN (VRNN) by learning the parameters of the RNN using a hyper RNN. VHRNN is tested and compared with VRNN on synthetic and real datasets. The authors report superior performance and parameter efficiency over VRNN.\n\nThe perform...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_SylUiREKvB", "HkxATlPrsr", "Skl0TyPSjH", "iclr_2020_SylUiREKvB", "HkxATlPrsr", "HkxATlPrsr", "HkxATlPrsr", "rylIcVsaYH", "SylW2nrRtS", "Bkl7IVhBqS", "iclr_2020_SylUiREKvB", "iclr_2020_SylUiREKvB" ]
iclr_2020_r1gPoCEKvH
SINGLE PATH ONE-SHOT NEURAL ARCHITECTURE SEARCH WITH UNIFORM SAMPLING
We revisit the one-shot Neural Architecture Search (NAS) paradigm and analyze its advantages over existing NAS approaches. The existing one-shot method (Bender et al., 2018), however, is hard to train and not yet effective on large-scale datasets like ImageNet. This work proposes a Single Path One-Shot model to address the challenges in training. Our central idea is to construct a simplified supernet, where all architectures are single paths, so that the weight co-adaptation problem is alleviated. Training is performed by uniform path sampling. All architectures (and their weights) are trained fully and equally. Comprehensive experiments verify that our approach is flexible and effective. It is easy to train and fast to search. It effortlessly supports complex search spaces (e.g., building blocks, channels, mixed-precision quantization) and different search constraints (e.g., FLOPs, latency). It is thus convenient to use for various needs. It achieves state-of-the-art performance on the large dataset ImageNet.
reject
This paper introduces a simple NAS method based on sampling single paths of the one-shot model from a uniform distribution. In addition to the private discussion with reviewers, I read the paper in detail. During the discussion, first, the reviewer who gave a weak reject upgraded his/her score to a weak accept since all reviewers appreciated the importance of neural architecture search and that the authors' approach is plausible. Then, however, it surfaced that the main claim of novelty in the paper, namely the uniform sampling of paths with weight-sharing, is not novel: Li & Talwalkar already introduced a uniform random sampling of paths with weight-sharing in the one-shot model in their paper "Random Search and Reproducibility in NAS" (https://arxiv.org/abs/1902.07638), which has been on arXiv since February 2019 and has been published at UAI 2019. This was their method "RandomNAS with weight sharing". The authors actually cite that paper but do not mention RandomNAS with weight sharing. This may be because their paper has also been on arXiv since March 2019 (6 weeks after the one above), and was therefore likely parallel work. Nevertheless, now, 9 months later, the situation has changed, and the authors should at least point out in their paper that they were not the first to introduce RandomNAS with weight sharing during the search, but that they rather study the benefits of that previously-introduced method. The only real novelty in terms of NAS methods that the authors provide is to use a genetic algorithm to select the architecture with the best one-shot model performance, rather than random search. This is a relatively minor contribution, discussed literally in a single paragraph in the paper (with missing details about the crossover operator used; please fill these in). Also, this step is very cheap, so one could potentially just run random search longer.
Finally, the comparison presented may be unfair: evolution uses a population size of 50, and Figure 2 plots iterations. It is unclear whether each iteration for random search also evaluated 50 samples; if not, then evolution got 50x more samples than random search. The authors should fix this in a new version of the paper. The paper also appears to make some wrong claims in Section 2. For example, the authors write that gradient-based NAS methods like DARTS inherit the one-shot weights and fine-tune the discretized architectures, but all methods I know of actually retrain from scratch rather than fine-tuning. Also, equation (3) is not what DARTS does; that does a bi-level optimization. In Section 3, the authors say that their single-path strategy corresponds to a dropout rate of 1. I do not think that this is correct, since a dropout rate of 1 drops every connection (and does not leave one remaining). All of these issues should be rectified. The paper reports good results on ImageNet. Unfortunately, these may well be due to using a better training pipeline than other works, rather than due to a better NAS method (no code is available, so there is no way to verify this). On the other hand, the application to mixed-precision quantization is novel and interesting. AnonReviewer2 asked about the correlation of the one-shot performance and the final evaluation performance, and this question was not answered properly by the authors. This question is relevant, because this correlation has been shown to be very low in several works (e.g., Sciuto et al: "Evaluating the search phase of Neural Architecture Search" (https://arxiv.org/abs/1902.08142), on arXiv since February 2019 and a parallel ICLR submission). In those cases, the proposed approach would definitely not work. The high scores the reviewers gave were based on the understanding that uniform sampling in the one-shot model was a novel contribution of this paper. 
Adjusting for that, the real score is much lower and right at the acceptance threshold. After a discussion with the PCs, due to limited capacity, the recommendation is to reject the current version. I encourage the authors to address the issues identified by the reviewers and in this meta-review and to submit to a future venue.
train
[ "S1e5ibyY5B", "ByxMJPKGoH", "HJeBo6RWir", "rJeD9SSWiH", "SJxjZISZor", "B1gVLBr-ir", "HylsyHrbiH", "Sklsio4str", "Sygdboybqr", "ryeEHOUi9S", "SklJ0HvNtS", "HJgdIsNNYH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "*UPDATE* I have read the other reviews and authors' responses. All the reviewers agree that improving single-shot NAS is an important problem, and that sampling single-paths can be a plausible approach for it that avoids weight coupling. Consequently, I have updated my rating to weak accept. I think the paper can ...
[ 6, -1, -1, -1, -1, -1, -1, 8, 6, -1, -1, -1 ]
[ 1, -1, -1, -1, -1, -1, -1, 5, 3, -1, -1, -1 ]
[ "iclr_2020_r1gPoCEKvH", "HJeBo6RWir", "HylsyHrbiH", "Sklsio4str", "ryeEHOUi9S", "Sygdboybqr", "S1e5ibyY5B", "iclr_2020_r1gPoCEKvH", "iclr_2020_r1gPoCEKvH", "SklJ0HvNtS", "HJgdIsNNYH", "iclr_2020_r1gPoCEKvH" ]
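The core training idea of the Single Path One-Shot paper above — uniformly sampling one single path through the supernet per step — can be sketched in a few lines. This is a hypothetical minimal illustration (the layer and choice counts are invented), not the authors' implementation:

```python
import random

def sample_single_path(num_layers, num_choices_per_layer, rng):
    """Pick one candidate op per layer, uniformly at random.

    Only the sampled path is activated for the current training step,
    so every architecture's weights are trained equally often in
    expectation, alleviating weight co-adaptation across paths.
    """
    return [rng.randrange(num_choices_per_layer) for _ in range(num_layers)]

rng = random.Random(0)
# One supernet training step would forward/backward only this path:
path = sample_single_path(num_layers=20, num_choices_per_layer=4, rng=rng)
```

After supernet training, an architecture search (random or evolutionary) ranks candidate paths by their one-shot validation accuracy, reusing the shared weights.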
iclr_2020_HJewiCVFPB
Gradient Surgery for Multi-Task Learning
While deep learning and deep reinforcement learning systems have demonstrated impressive results in domains such as image classification, game playing, and robotic control, data efficiency remains a major challenge, particularly as these algorithms learn individual tasks from scratch. Multi-task learning has emerged as a promising approach for sharing structure across multiple tasks to enable more efficient learning. However, the multi-task setting presents a number of optimization challenges, making it difficult to realize large efficiency gains compared to learning tasks independently. The reasons why multi-task learning is so challenging compared to single task learning are not fully understood. Motivated by the insight that gradient interference causes optimization challenges, we develop a simple and general approach for avoiding interference between gradients from different tasks, by altering the gradients through a technique we refer to as “gradient surgery”. We propose a form of gradient surgery that projects the gradient of a task onto the normal plane of the gradient of any other task that has a conflicting gradient. On a series of challenging multi-task supervised and multi-task reinforcement learning problems, we find that this approach leads to substantial gains in efficiency and performance. Further, it can be effectively combined with previously-proposed multi-task architectures for enhanced performance in a model-agnostic way.
reject
This paper presents a method for improving optimization in multi-task learning settings by minimizing the interference between gradients belonging to different tasks. While the idea is simple and well-motivated, the reviewers felt that the problem is still not studied adequately. The proofs are useful, but there is still a gap when it comes to practicality. The rebuttal clarified some of the concerns, but there remains a feeling that (a) the main assumptions of the method need to be demonstrated in a more convincing way, e.g., by boosting the experiments as suggested with other MTL methods, and (b) the paper needs to be placed better in the current literature, minimizing the gap between the proofs/underlying assumptions and practical usefulness.
val
[ "rJgFulCl9r", "B1gKvIcjsS", "S1eKbadciB", "S1g2WbGLor", "SJecI-MUjr", "HkeUjwlZsS", "B1ljM4ehtB", "Byl3R3e3tr", "HJgFtwo55B", "BJlmAo4d9H" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper proposes as solution to manage the case where gradients are conflicting in gradient-based Multi-Task Learning (MTL), pointing to different directions. They propose a simple “gradient surgery” technique that alters the gradients by projecting a conflicting gradient on the normal vector of the other one, ...
[ 3, -1, -1, -1, -1, -1, 6, 3, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, 3, 3, -1, -1 ]
[ "iclr_2020_HJewiCVFPB", "B1ljM4ehtB", "Byl3R3e3tr", "rJgFulCl9r", "B1ljM4ehtB", "Byl3R3e3tr", "iclr_2020_HJewiCVFPB", "iclr_2020_HJewiCVFPB", "BJlmAo4d9H", "iclr_2020_HJewiCVFPB" ]
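The projection step described in the gradient-surgery abstract above — dropping the component of one task's gradient along a conflicting task's gradient — can be sketched directly from the stated formula. A hedged pure-Python illustration, not the authors' code:

```python
# "Gradient surgery" sketch: when two task gradients conflict (negative
# dot product), project one onto the normal plane of the other, i.e.
#     g_i <- g_i - (g_i . g_j / ||g_j||^2) * g_j

def project_conflicting(g_i, g_j):
    """Project g_i onto the normal plane of g_j if they conflict.

    Returns g_i unchanged when g_i . g_j >= 0 (no conflict).
    """
    dot = sum(a * b for a, b in zip(g_i, g_j))
    if dot < 0:
        scale = dot / sum(b * b for b in g_j)
        g_i = [a - scale * b for a, b in zip(g_i, g_j)]
    return g_i

# Conflicting pair: the projected gradient becomes orthogonal to g_j.
g_proj = project_conflicting([1.0, 1.0], [-1.0, 0.0])
```

In a multi-task training loop this projection would be applied pairwise to the per-task gradients before summing them into the shared-parameter update.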
iclr_2020_Hkl_sAVtwr
Compressed Sensing with Deep Image Prior and Learned Regularization
We propose a novel method for compressed sensing recovery using untrained deep generative models. Our method is based on the recently proposed Deep Image Prior (DIP), wherein the convolutional weights of the network are optimized to match the observed measurements. We show that this approach can be applied to solve any differentiable linear inverse problem, outperforming previous unlearned methods. Unlike various learned approaches based on generative models, our method does not require pre-training over large datasets. We further introduce a novel learned regularization technique, which incorporates prior information on the network weights. This reduces reconstruction error, especially for noisy measurements. Finally, we prove that, using the DIP optimization approach, moderately overparameterized single-layer networks can perfectly fit any signal despite the nonconvex nature of the fitting problem. This theoretical result provides justification for early stopping.
reject
This paper proposes a compressed sensing (CS) method that employs the Deep Image Prior (DIP) algorithm to recover signals and images from noisy measurements using untrained deep generative models. A novel learned regularization technique is also introduced. Experimental results show that the proposed methods outperform existing work. A theoretical analysis of early stopping is also given. All reviewers agree that it is novel to combine the deep learning method with compressed sensing. The paper is well written and overall good. However, the reviewers also raised many concerns about the method and the experiments, but the authors gave no rebuttal and almost no revisions were made to the paper. I would suggest the authors consider the reviewers' concerns seriously and resubmit the paper to another conference or journal.
test
[ "r1eyn6-Pcr", "H1xHdzo0Fr", "rJehnPMZ9r", "HkecZhUFcr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "== After reading authors' response ==\n\nThe authors didn't really offer a response. Their new PDF just mentions that their proof for the case $A=I$ can be extended to any orthogonal matrix $A$ (since if $G$ is a iid Gaussian matrix with unit variance, so is $AG$ for any orthogonal matrix $A$). Hence, I don't cha...
[ 3, 6, 6, 6 ]
[ 4, 4, 4, 4 ]
[ "iclr_2020_Hkl_sAVtwr", "iclr_2020_Hkl_sAVtwr", "iclr_2020_Hkl_sAVtwr", "iclr_2020_Hkl_sAVtwr" ]
iclr_2020_rJgFjREtwr
Distribution-Guided Local Explanation for Black-Box Classifiers
Existing local explanation methods provide an explanation for each decision of black-box classifiers, in the form of relevance scores of features according to their contributions. To obtain satisfying explainability, many methods introduce ad hoc constraints into the classification loss to regularize these relevance scores. However, the large information gap between the classification loss and these constraints increases the difficulty of tuning hyper-parameters. To bridge this gap, in this paper we present a simple but effective mask predictor. Specifically, we model the above constraints with a distribution controller, and integrate it with a neural network to directly guide the distribution of relevance scores. The benefit of this strategy is to facilitate the setting of involved hyper-parameters, and enable discriminative scores over supporting features. The experimental results demonstrate that our method outperforms others in terms of faithfulness and explainability. Meanwhile, it also provides effective saliency maps for explaining each decision.
reject
This paper proposed a method to estimate the instance-wise saliency map for image classification, with the purpose of improving the faithfulness of the explainer. Based on the U-Net, two modifications are proposed in this work. While Reviewer #3 is overall positive about this work, both Reviewer #1 and Reviewer #2 rated weak reject and raised a number of concerns. The major concerns are that the modifications either already exist or suffer from potential issues. Reviewer #2 considered that the contributions are not enough for ICLR, and that the performance improvement is marginal. The authors provided detailed responses to the reviewers' concerns, which helped make the paper stronger but did not change the ratings. Given the concerns raised by the reviewers, the ACs agree that this paper cannot be accepted in its current state.
train
[ "Syxpq7lnjr", "ryxQMfxnjH", "HJlaHllnsB", "S1gPLml2oH", "BJlqm4lhiB", "B1g0hzlnjr", "HJgtwEXBYS", "SJxdo8dRYB", "HJeWjl3VcB" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed and insightful review. We have updated the paper and address the specific questions below.\n===========================\nQ1. The contributions are not enough for this venue. Both the introduced method and the metrics are slight modifications of what already exists. The exact contributio...
[ -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "SJxdo8dRYB", "HJeWjl3VcB", "iclr_2020_rJgFjREtwr", "SJxdo8dRYB", "HJgtwEXBYS", "HJeWjl3VcB", "iclr_2020_rJgFjREtwr", "iclr_2020_rJgFjREtwr", "iclr_2020_rJgFjREtwr" ]
iclr_2020_H1lFsREYPS
ASGen: Answer-containing Sentence Generation to Pre-Train Question Generator for Scale-up Data in Question Answering
Numerous machine reading comprehension (MRC) datasets often involve manual annotation, requiring enormous human effort, and hence the size of the dataset remains significantly smaller than the size of the data available for unsupervised learning. Recently, researchers proposed a model for generating synthetic question-and-answer data from large corpora such as Wikipedia. This model is utilized to generate synthetic data for training an MRC model before fine-tuning it using the original MRC dataset. This technique shows better performance than other general pre-training techniques such as language modeling, because the characteristics of the generated data are similar to those of the downstream MRC data. However, it is difficult to have high-quality synthetic data comparable to human-annotated MRC datasets. To address this issue, we propose Answer-containing Sentence Generation (ASGen), a novel pre-training method for generating synthetic data involving two advanced techniques, (1) dynamically determining K answers and (2) pre-training the question generator on the answer-containing sentence generation task. We evaluate the question generation capability of our method by comparing the BLEU score with existing methods and test our method by fine-tuning the MRC model on the downstream MRC data after training on synthetic data. Experimental results show that our approach outperforms existing generation methods and increases the performance of the state-of-the-art MRC models across a range of MRC datasets such as SQuAD-v1.1, SQuAD-v2.0, KorQuAD and QUASAR-T without any architectural modifications to the original MRC model.
reject
Thanks for an interesting discussion. The paper introduces a sound question generation technique for QA. Reviewers are moderately positive, with low confidence. Some issues remain unresolved, though: While the UniLM comparison is currently not apples-to-apples, for example, nothing prevents the authors from using their method to pretrain UniLM. Currently, QA results are low-ish, and it is hard to accept a paper based solely on BLEU scores (questionable metric) for question generation (the task is but a means to an end). Moreover, the authors do not really discuss how their method relates to previous work (see Review 2 and the related work cited there; there's more, e.g., [0]). I also find it a little problematic that the paper completely ignores all work prior to 2017: The NLP community started organizing workshops on question generation in 2010. [1]
train
[ "H1g6clU3sH", "HJxKaJInor", "SygvvArhjS", "SyxuXwNFYS", "SJe7V4KBor", "H1e2tHPNjS", "HJxfRSwVsB", "ryeHYUw4iS", "SkggJDPNiH", "Hylf1HD4jr", "S1xH2fEQcB", "B1xGhx1FcB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "public" ]
[ "Thank you for your constructive feedback and efforts.\n\nWe have uploaded a new revision with the following changes based on your feedback -\n\n1) Added the effect of our proposed pre-training with existing question generation methods Zhao et. al. 2018 and UniLM (Dong et al. 2019) achieving an increase in BLEU-4 s...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, -1 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 1, -1 ]
[ "SJe7V4KBor", "Hylf1HD4jr", "iclr_2020_H1lFsREYPS", "iclr_2020_H1lFsREYPS", "H1e2tHPNjS", "SyxuXwNFYS", "SyxuXwNFYS", "SyxuXwNFYS", "B1xGhx1FcB", "S1xH2fEQcB", "iclr_2020_H1lFsREYPS", "iclr_2020_H1lFsREYPS" ]
iclr_2020_ByxKo04tvr
Multigrid Neural Memory
We introduce a novel architecture that integrates a large addressable memory space into the core functionality of a deep neural network. Our design distributes both memory addressing operations and storage capacity over many network layers. Distinct from strategies that connect neural networks to external memory banks, our approach co-locates memory with computation throughout the network structure. Mirroring recent architectural innovations in convolutional networks, we organize memory into a multiresolution hierarchy, whose internal connectivity enables learning of dynamic information routing strategies and data-dependent read/write operations. This multigrid spatial layout permits parameter-efficient scaling of memory size, allowing us to experiment with memories substantially larger than those in prior work. We demonstrate this capability on synthetic exploration and mapping tasks, where the network is able to self-organize and retain long-term memory for trajectories of thousands of time steps. On tasks decoupled from any notion of spatial geometry, such as sorting or associative recall, our design functions as a truly generic memory and yields results competitive with those of the recently proposed Differentiable Neural Computer.
reject
This paper investigates convolutional LSTMs with a multigrid structure. The idea in itself offers very little innovation, and the experimental results are not entirely convincing.
train
[ "H1g4E9joor", "H1xcoKjsor", "SklBMFijoB", "HJgkHOooiH", "HyeTtPsjoS", "B1lPj6V3tr", "SJgP4SU0YB", "rJlrkMmpYH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\"comparison with more recent memory-augmented models [1,2]\"\n\nDNC is the only state-of-the-art memory architecture that has an official code release. Other methods mentioned by reviewers do not provide source code, which drastically increases the difficulty of comparison.\n\nWe were interested to compare aga...
[ -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "B1lPj6V3tr", "rJlrkMmpYH", "rJlrkMmpYH", "SJgP4SU0YB", "iclr_2020_ByxKo04tvr", "iclr_2020_ByxKo04tvr", "iclr_2020_ByxKo04tvr", "iclr_2020_ByxKo04tvr" ]
iclr_2020_rJgqjREtvS
CRNet: Image Super-Resolution Using A Convolutional Sparse Coding Inspired Network
Convolutional Sparse Coding (CSC) has been attracting more and more attention in recent years, for making full use of global image correlation to improve performance on various computer vision applications. However, very few studies focus on solving the CSC based image Super-Resolution (SR) problem. As a consequence, there has been no significant progress in this area for some time. In this paper, we exploit the natural connection between CSC and Convolutional Neural Networks (CNN) to address CSC based image SR. Specifically, the Convolutional Iterative Soft Thresholding Algorithm (CISTA) is introduced to solve the CSC problem, and it can be implemented using CNN architectures. Then we develop a novel CSC based SR framework analogous to the traditional SC based SR methods. Two models inspired by this framework are proposed for pre-/post-upsampling SR, respectively. Compared with recent state-of-the-art SR methods, both of our proposed models show superior performance in terms of both quantitative and qualitative measurements.
reject
All three reviewers agreed that the paper should not be accepted. No rebuttal was offered, thus the paper is rejected.
train
[ "BJxXb346Fr", "ryxYnhNCFB", "ryxv0ry1qr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work exploits the natural connection between CSC and Convolutional Neural Networks (CNN) to address CSC based image SR. Specifically, Convolutional Iterative Soft Thresholding Algorithm (CISTA) is introduced to solve CSC problem and state-of-the-art performance is achieved on popular benchmarks.\n\n[Strengths...
[ 3, 1, 1 ]
[ 5, 4, 5 ]
[ "iclr_2020_rJgqjREtvS", "iclr_2020_rJgqjREtvS", "iclr_2020_rJgqjREtvS" ]
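The iterative soft thresholding referenced in the CRNet abstract above is built on the elementwise shrinkage operator soft(x, λ) = sign(x) · max(|x| − λ, 0), which is what promotes sparsity in the CSC feature maps. A minimal sketch of that operator (pure Python, illustrative threshold — not the paper's network):

```python
def soft_threshold(x, lam):
    """Elementwise shrinkage: sign(v) * max(|v| - lam, 0) for each v."""
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1 if v < 0 else 0)
            for v in x]

# Values with magnitude <= lam are zeroed; the rest shrink toward zero.
codes = soft_threshold([-2.0, -0.5, 0.0, 0.3, 1.5], lam=1.0)
```

In ISTA-style algorithms this shrinkage is applied after each (convolutional) gradient step on the reconstruction term, which is why the iteration unrolls naturally into a CNN with learned filters.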
iclr_2020_Syxss0EYPS
Agent as Scientist: Learning to Verify Hypotheses
In this paper, we formulate hypothesis verification as a reinforcement learning problem. Specifically, we aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which can help predict whether the hypothesis is true or false. Our first observation is that agents trained end-to-end with the reward fail to learn to solve this problem. In order to train the agents, we exploit the underlying structure in the majority of hypotheses -- they can be formulated as triplets (pre-condition, action sequence, post-condition). Once the agents have been pretrained to verify hypotheses with this structure, they can be fine-tuned to verify more general hypotheses. Our work takes a step towards a ``scientist agent'' that develops an understanding of the world by generating and testing hypotheses about its environment.
reject
The authors propose an agent that can act in an RL environment to verify hypotheses about it, using hypotheses formulated as triplets of pre-condition, action sequence, and post-condition variables. Training then proceeds in multiple stages, including a pretraining phase using a reward function that encourages the agent to learn the hypothesis triplets. Strengths: reviewers generally agreed it is an important problem and an interesting approach. Weaknesses: there were some points of convergence among reviewer comments: lack of connection to existing literature (i.e., to causal reasoning and POMDPs), and concerns about the robustness of the results (which only reported the max over seeds). Two reviewers also found the use of natural language to unnecessarily complicate the setup. Overall, clarity seemed to be an issue. Other comments concerned lack of comparisons, analyses, and suggestions for alternate methods of rewarding the agent (to improve understandability). The authors deserve credit for their responsiveness to reviewer comments and for the considerable amount of additional work done in the rebuttal period. However, these efforts ultimately didn't satisfy the reviewers enough to change their scores. Although I find that the additional experiments and revisions have significantly strengthened the paper, I don't believe it's currently ready for publication at ICLR. I urge the authors to focus on clearly presenting and integrating these new results in a future submission, which I look forward to.
train
[ "BkgYVAox5B", "r1xcgUU0YH", "B1l3DYTjsr", "HkgIb_6ssS", "HkgdawpsiB", "BkgE98TiiH", "HkeUjrpojr", "B1xsnehaKH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper looks into the problem of training agents that can interact with their environments to verify hypotheses about it. It first formulates the problem as a MDP, where the agent takes actions to explore the environment and has two special actions (Answer_True, and Answer_False) to indicate that the agent has ...
[ 3, 3, -1, -1, -1, -1, -1, 1 ]
[ 1, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_Syxss0EYPS", "iclr_2020_Syxss0EYPS", "r1xcgUU0YH", "HkgdawpsiB", "B1xsnehaKH", "BkgYVAox5B", "iclr_2020_Syxss0EYPS", "iclr_2020_Syxss0EYPS" ]
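The (pre-condition, action sequence, post-condition) structure that the paper above exploits for pretraining can be sketched as a small data type. The field contents here are hypothetical; the actual environment and hypothesis language are the paper's own:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Hypothesis:
    """A hypothesis the agent can act to verify."""
    pre_condition: str           # must hold before acting
    action_sequence: List[str]   # what the agent does
    post_condition: str          # predicted to hold afterwards

# Hypothetical example: the agent acts out the sequence and then checks
# whether the post-condition holds to answer True/False.
h = Hypothesis(
    pre_condition="the key is on the floor",
    action_sequence=["pick up key", "walk to door", "use key"],
    post_condition="the door is open",
)
```

Representing hypotheses this way makes the pretraining reward straightforward: verify the pre-condition, execute the action sequence, then compare the observed outcome against the post-condition.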
iclr_2020_Syejj0NYvr
Adversarial Interpolation Training: A Simple Approach for Improving Model Robustness
We propose a simple approach for adversarial training. The proposed approach utilizes an adversarial interpolation scheme for generating adversarial images and accompanying adversarial labels, which are then used in place of the original data for model training. The proposed approach is intuitive to understand, simple to implement and achieves state-of-the-art performance. We evaluate the proposed approach on a number of datasets including CIFAR10, CIFAR100 and SVHN. Extensive empirical results compared with several state-of-the-art methods against different attacks verify the effectiveness of the proposed approach.
reject
Reviewers agree that the proposed method is interesting and achieves impressive results. Clarifications were needed in terms of motivating and situating the work. The rebuttal helped, but unfortunately not enough to push the paper above the threshold. We encourage the authors to further improve the presentation of their method and take into account the comments in future revisions.
train
[ "HJgD51k7iS", "r1e432RzoS", "H1gtzKuior", "HJxv7evfYr", "SylrVLzjsr", "rkeBhuBMjS", "HkxpPLeXjr", "HygaLWkmiS", "HJgUgw0GiS", "HJllIEPfiH", "SklX1iUfoH", "S1esl_BpcH", "SylNE5kstr", "BJenNC1TcH", "rylY0mw35S", "rkl4FB3H_S", "SJlQi-Vmdr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "public", "public", "author", "author", "public", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "A1: Thanks for the suggestion. The number of attack iterations is set as $L\\!\\!=\\!\\!1$ in our training. We apologize for not making this point clear and have improved it in our revised paper (Section 4). \nOne-step adversary is used for training in our model as shown in the code in Section A.1 (and now in Sec...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1, -1, -1 ]
[ "SylNE5kstr", "S1esl_BpcH", "SylrVLzjsr", "iclr_2020_Syejj0NYvr", "HJgUgw0GiS", "iclr_2020_Syejj0NYvr", "SklX1iUfoH", "iclr_2020_Syejj0NYvr", "HJxv7evfYr", "SklX1iUfoH", "rkeBhuBMjS", "iclr_2020_Syejj0NYvr", "iclr_2020_Syejj0NYvr", "rylY0mw35S", "iclr_2020_Syejj0NYvr", "SJlQi-Vmdr", ...
iclr_2020_rkehoAVtvS
Adversarial Partial Multi-label Learning
Partial multi-label learning (PML), which tackles the problem of learning multi-label prediction models from instances with overcomplete noisy annotations, has recently started gaining attention from the research community. In this paper, we propose a novel adversarial learning model, PML-GAN, under a generalized encoder-decoder framework for partial multi-label learning. The PML-GAN model uses a disambiguation network to identify noisy labels and uses a multi-label prediction network to map the training instances to the disambiguated label vectors, while deploying a generative adversarial network as an inverse mapping from label vectors to data samples in the input feature space. The learning of the overall model corresponds to a minimax adversarial game, which enhances the correspondence of input features with the output labels. Extensive experiments are conducted on multiple datasets, and the proposed model demonstrates state-of-the-art performance for partial multi-label learning.
reject
The paper considers a problem of clearly practical importance: multi-label classification where the ground truth label sets are noisy, specifically they are known (or at least assumed) to be a superset of the true ground truth labels. Learning a classifier in this setting requires simultaneous identification of irrelevant labels. The proposed solution is a 4-part neural architecture, wherein a multi-label classifier is composed with a disambiguation or "cleanup" network, which is used as conditioning input to a conditional GAN which learns an inverse mapping, trained via an adversarial loss and also a least squares reconstruction loss ("generation loss"). Reviews were split 2 to 1 in favour of rejection, and the discussion phase did not resolve this split, as two reviewers did not revisit their assessments. R2 and R3 were concerned about the overall novelty and degeneracy of the inverse mapping problem. R1 increased their score after the rebuttal phase as they felt their concerns were addressed in comments (regarding issues surrounding the related work, the possibility of trivial solutions, and intuition for why the adversarial objective helps), but these were not addressed in the text as no updates were made. I agree with the authors that PML is an important problem (one that receives perhaps less attention than it should from our community), and their empirical validation seems to support that their method outperforms (marginally, in many cases) methods from the literature. While the ablation study offers preliminary evidence that the inverse mapping is responsible for some of the gains, there are a lot of moving parts here and the authors haven't done a great job of motivating why this should help, or investigating why it in fact does. Based on the scores and my own reading of the paper, I'd recommend rejection at this time.
My own comments for the authors: I'd urge efforts to clarify the motivation for learning the inverse mapping, in particular adversarially (rather than just with the generation loss) in the text of the paper as you have in your rebuttals, and to improve the notation (the use of both D-tilde and D is confusing, and the omega notation seems unnecessary). I'm also not entirely clear whether the generator is stochastic or not, as the notation doesn't mention a randomly sampled latent variable (the traditional "z" here is a conditioning vector). Either way, the answer should be made more explicit.
train
[ "S1gWhqG2Yr", "HkeQUY2aYS", "S1xtGfhRFS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a new method for partial multi-label learning, based on the idea of generative adversarial networks. The partial multi-label learning is the problem that one instance is associated with several ground truth labels simultaneously, but we are given a superset of the ground truth labels for traini...
[ 8, 3, 3 ]
[ 5, 4, 4 ]
[ "iclr_2020_rkehoAVtvS", "iclr_2020_rkehoAVtvS", "iclr_2020_rkehoAVtvS" ]
iclr_2020_SJg2j0VFPB
Knowledge Graph Embedding: A Probabilistic Perspective and Generalization Bounds
We study theoretical properties of embedding methods for knowledge graph completion under the missing completely at random assumption. We prove generalization error bounds for this setting. Even though the missing completely at random setting may seem naive, it is actually how knowledge graph embedding methods are typically benchmarked in the literature. Our results provide, to a certain extent, an explanation for why knowledge graph embedding methods work (as much as classical learning theory results provide explanations for classical learning from i.i.d. data).
reject
The paper provides a generalization error bound, which extends the results from PU learning, for the problem of knowledge graph completion. The authors assume a missing at random setting, and provide bounds on the triples (two nodes and an edge) that could be mistakes. Then the paper provides a maximum likelihood interpretation, as well as relations to existing knowledge graph completion methods. The problem setting is interesting, and the writing is clear. The discussion was extensive, with reviewers and authors following the spirit of ICLR and having a constructive discussion which resulted in improvements to the paper. However, there still seem to be some improvements to be made in terms of clarity of presentation, as well as precision of the theoretical arguments. Unfortunately, there are many strong submissions, and the paper as it currently stands does not satisfy the quality threshold of ICLR.
train
[ "r1xu3P7hjH", "S1gOMVG2ir", "Hke1YGg3oH", "S1g4KR9cjS", "HJedhpcqir", "H1x6C-95sH", "HJxDPTcKjH", "HylfIAtYsr", "BJgjnpFtjS", "rkxQwpFKiS", "HJxgWpFtsS", "ryeKpGJiFH", "H1emzi3atr", "Skgitzsg9B" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Sorry for spamming you if this point is obvious. We would still like to make sure that there is no confusion about the following point:\n\n> \"...and there are no fear of false positives...\"\n\nThat is only true for the naive method (that is no longer in the revised paper and that only served for the hypothetical...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 5 ]
[ "H1x6C-95sH", "Hke1YGg3oH", "H1x6C-95sH", "BJgjnpFtjS", "H1x6C-95sH", "BJgjnpFtjS", "BJgjnpFtjS", "Skgitzsg9B", "H1emzi3atr", "H1emzi3atr", "ryeKpGJiFH", "iclr_2020_SJg2j0VFPB", "iclr_2020_SJg2j0VFPB", "iclr_2020_SJg2j0VFPB" ]
iclr_2020_Hkl6i0EFPH
Scalable Differentially Private Data Generation via Private Aggregation of Teacher Ensembles
We present a novel approach named G-PATE for training a differentially private data generator. The generator can be used to produce synthetic datasets with strong privacy guarantees while preserving high data utility. Our approach leverages generative adversarial nets to generate data and exploits the PATE (Private Aggregation of Teacher Ensembles) framework to protect data privacy. Compared to existing methods, our approach significantly improves the use of the privacy budget. This is possible since we only need to ensure differential privacy for the generator, which is the part of the model that actually needs to be published for private data generation. To achieve this, we connect a student generator with an ensemble of teacher discriminators and propose a private gradient aggregation mechanism to ensure differential privacy on all the information that flows from the teacher discriminators to the student generator. Theoretically, we prove that our algorithm ensures differential privacy for the generator. Empirically, we provide thorough experiments to demonstrate the superiority of our method over prior work on both image and non-image datasets.
reject
This paper addresses the problem of differentially private data generation. The paper presents a novel approach called G-PATE which builds on the existing PATE framework. The main contribution is in using a student generator with an ensemble of teacher discriminators and in proposing a new private gradient aggregation mechanism which ensures differential privacy in the information flow from discriminator to generator. Although the idea is interesting, there are significant concerns raised by the reviewers about the experiments and analysis done in the paper which seem to be valid and have not yet been addressed in the final revision. I believe that upon making significant changes to the paper, this could be a good contribution. Thus, as of now, I am recommending a Rejection.
val
[ "HJxeRcd5jr", "rygDyc_cjS", "SyeP5L_cir", "Ske9_YdqsB", "r1exqds19H", "ByxX-UTWcH", "HyxAyKi7cH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their valuable feedback. We have made the following changes in our revision:\n- We moved the definitions of differential privacy and Renyi differential privacy from the Appendix to Section 3.\n- We moved algorithm 2 from the Appendix to Section 4. \n- We added additional experiment resul...
[ -1, -1, -1, -1, 3, 1, 8 ]
[ -1, -1, -1, -1, 4, 4, 1 ]
[ "iclr_2020_Hkl6i0EFPH", "HyxAyKi7cH", "ByxX-UTWcH", "r1exqds19H", "iclr_2020_Hkl6i0EFPH", "iclr_2020_Hkl6i0EFPH", "iclr_2020_Hkl6i0EFPH" ]
iclr_2020_H1g6s0NtwS
Learning Neural Surrogate Model for Warm-Starting Bayesian Optimization
Bayesian optimization is an effective tool to optimize black-box functions and is popular for hyper-parameter tuning in machine learning. Traditional Bayesian optimization methods are based on Gaussian processes (GPs), relying on a GP-based surrogate model for sampling points of the function of interest. In this work, we consider transferring knowledge from related problems to a target problem by learning an initial surrogate model for warm-starting Bayesian optimization. We propose a neural network-based surrogate model to estimate the function mean value in the GP. Then we design a novel weighted Reptile algorithm with a sampling strategy to learn an initial surrogate model from the meta train set. The initial surrogate model is learned so that it can adapt well to new tasks. Extensive experiments show that this warm-starting technique enables us to find a better minimizer or better hyper-parameters than traditional GP and previous warm-starting methods.
reject
This paper is concerned with warm-starting Bayesian optimization (i.e. starting with a better surrogate model) through transfer learning among related problems. While the key motivation for warm-starting BO is certainly important (although not novel), there are important shortcomings in the way the method is developed and demonstrated. Firstly, the reviewers questioned design decisions, such as why combine NNs and GPs in this particular way or why the posterior variance of the hybrid model is not calculated. Moreover, there are issues with the experimental methodology that do not allow extraction of confident conclusions (e.g. repeating the experiments for different initial points is highly desirable). Finally, there are presentation issues. The authors replied only to some of these concerns, but ultimately the shortcomings seem to persist and hint towards a paper that needs more work.
train
[ "H1exBwhaKS", "H1lJK0-nsr", "r1lS3UznsB", "H1e2ZefhiS", "BylBDJDFYB", "SJe6YTeoKH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "POST-REBUTTAL FEEDBACK\n\nThanks for your response. \n\nThe justifications provided in the response have not convinced me to improve my score. They are at times hard to understand: For example, the authors have claimed that while their design choice is not reasonable, it is less unreasonable than the other.\n\nSUM...
[ 3, -1, -1, -1, 3, 1 ]
[ 5, -1, -1, -1, 5, 4 ]
[ "iclr_2020_H1g6s0NtwS", "SJe6YTeoKH", "H1exBwhaKS", "BylBDJDFYB", "iclr_2020_H1g6s0NtwS", "iclr_2020_H1g6s0NtwS" ]
iclr_2020_Byx0iAEYPH
Fully Polynomial-Time Randomized Approximation Schemes for Global Optimization of High-Dimensional Folded Concave Penalized Generalized Linear Models
Global solutions to high-dimensional sparse estimation problems with a folded concave penalty (FCP) have been shown to be statistically desirable but are strongly NP-hard to compute, which implies the non-existence of a pseudo-polynomial time global optimization scheme in the worst case. This paper shows that, with high probability, a global solution to the formulation for an FCP-based high-dimensional generalized linear model coincides with a stationary point characterized by the significant subspace second order necessary conditions (S3ONC). Since the desired S3ONC solution admits a fully polynomial-time approximation scheme (FPTAS), we thus have shown the existence of a fully polynomial-time randomized approximation scheme (FPRAS) for a strongly NP-hard problem. We further demonstrate two versions of the FPRAS for generating the desired S3ONC solutions. One follows the paradigm of an interior point trust region algorithm and the other is the well-studied local linear approximation (LLA). Our analysis thus provides new techniques for global optimization of certain NP-hard problems and new insights on the effectiveness of LLA.
reject
Thanks for your detailed feedback to the reviewers, which clarified many points for us. However, the novelty of this paper is rather marginal, and given the high competition at ICLR 2020, this paper is unfortunately below the bar. We hope that the reviewers' comments are useful for improving the paper for potential future publication.
train
[ "SygBM7unKH", "B1xgSyH2sB", "H1lBv_Ensr", "HyxuvaVnir", "rklfq4Jf5S", "HJlcKC0P5S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nSummary: The paper studies the problem of global optimization of high-dimensional sparse estimators regularized by PCP (folded concave penalty). The main result is showing that under certain conditions, with high probability, the desired global solution is an oracle stationary point satisfying the so-called S^3...
[ 3, -1, -1, -1, 3, 8 ]
[ 5, -1, -1, -1, 3, 4 ]
[ "iclr_2020_Byx0iAEYPH", "rklfq4Jf5S", "HJlcKC0P5S", "SygBM7unKH", "iclr_2020_Byx0iAEYPH", "iclr_2020_Byx0iAEYPH" ]
iclr_2020_SJx0oAEYwH
Cover Filtration and Stable Paths in the Mapper
The contributions of this paper are two-fold. We define a new filtration called the cover filtration built from a single cover based on a generalized Steinhaus distance, which is a generalization of Jaccard distance. We then develop a language and theory for stable paths within this filtration, inspired by ideas of persistent homology. This framework can be used to develop several new learning representations in applications where an obvious metric may not be defined but a cover is readily available. We demonstrate the utility of our framework as applied to recommendation systems and explainable machine learning. We demonstrate a new perspective for modeling recommendation system data sets that does not require manufacturing a bespoke metric. As a direct application, we find that the stable paths identified by our framework in a movies data set represent a sequence of movies constituting a gentle transition and ordering from one genre to another. For explainable machine learning, we apply the Mapper for model induction, providing explanations in the form of paths between subpopulations. Our framework provides an alternative way of building a filtration from a single mapper that is then used to explore stable paths. As a direct illustration, we build a mapper from a supervised machine learning model trained on the FashionMNIST data set. We show that the stable paths in the cover filtration provide improved explanations of relationships between subpopulations of images.
reject
The paper proposes a filtration based on the covers of data sets and demonstrates its effectiveness in recommendation systems and explainable machine learning. The paper is theory focused, and the discussion was mainly centered around one very detailed and thorough review. The main concerns raised in the reviews and reiterated at the end of the rebuttal cycle were lack of clarity, relatively incremental contribution, and limited experimental evaluation. Due to my limited knowledge of this particular field, I base my recommendation mostly on R1's assessment and recommend rejecting this submission.
train
[ "HJxtGnD3jH", "ryeddrwhor", "SJldYkNnoS", "HklZVJ42ir", "S1efh072jB", "HJeJ8RXhjS", "H1xbfnm3sS", "SylLQHX9tH", "Skl0rFWi9B" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Here are some more details for the paragraph next to Figure 10 (in Page 18, part of Appendix) in the paper:\n\n\n We initially train a model using a very standard and naive approach. First, we reduce the dimensionality of the images to 100 using Principal Components Analysis and use this 100 dimensional space t...
[ -1, -1, -1, -1, -1, -1, -1, 1, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "ryeddrwhor", "SJldYkNnoS", "HklZVJ42ir", "S1efh072jB", "HJeJ8RXhjS", "SylLQHX9tH", "Skl0rFWi9B", "iclr_2020_SJx0oAEYwH", "iclr_2020_SJx0oAEYwH" ]
iclr_2020_rklJ2CEYPH
Point Process Flows
Event sequences can be modeled by temporal point processes (TPPs) to capture their asynchronous and probabilistic nature. We propose an intensity-free framework that directly models the point process as a non-parametric distribution by utilizing normalizing flows. This approach is capable of capturing highly complex temporal distributions and does not rely on restrictive parametric forms. Comparisons with state-of-the-art baseline models on both synthetic and challenging real-life datasets show that the proposed framework is effective at modeling the stochasticity of discrete event sequences.
reject
The paper proposed to use normalizing flows to model point processes. However, the reviewers find that the paper is incremental. There have been several works applying deep generative models to temporal data, and the proposed method is a simple combination of well-established existing works without problem-specific adaptation.
train
[ "B1lrbUAiir", "Skx4OdaooH", "rJlfQrRijB", "H1g4BxCojH", "SygmggRjiS", "HJlmMtTosH", "H1gYWzbatr", "rJlg7oH2Kr", "SyeFu0fRqr", "rkezlZ-x5S", "SylcvuwPtr" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "\nThere are a few works on intensity-free modeling of point process distributions. To the best of our knowledge, this is the first work that treats modeling point process distribution as density estimation using normalizing flow technique while being intensity-free and also being able to evaluate likelihood. We ag...
[ -1, -1, -1, -1, -1, -1, 6, 3, 3, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 3, -1, -1 ]
[ "rJlfQrRijB", "SyeFu0fRqr", "rJlg7oH2Kr", "SygmggRjiS", "H1gYWzbatr", "Skx4OdaooH", "iclr_2020_rklJ2CEYPH", "iclr_2020_rklJ2CEYPH", "iclr_2020_rklJ2CEYPH", "SylcvuwPtr", "iclr_2020_rklJ2CEYPH" ]
iclr_2020_r1xMnCNYvB
JAX MD: End-to-End Differentiable, Hardware Accelerated, Molecular Dynamics in Pure Python
A large fraction of computational science involves simulating the dynamics of particles that interact via pairwise or many-body interactions. These simulations, called Molecular Dynamics (MD), span a vast range of subjects from physics and materials science to biochemistry and drug discovery. Most MD software involves significant use of handwritten derivatives and code reuse across C++, FORTRAN, and CUDA. This is reminiscent of the state of machine learning before automatic differentiation became popular. In this work we bring the substantial advances in software that have taken place in machine learning to MD with JAX, M.D. (JAX MD). JAX MD is an end-to-end differentiable MD package written entirely in Python that can be just-in-time compiled to CPU, GPU, or TPU. JAX MD allows researchers to iterate extremely quickly and lets researchers easily incorporate machine learning models into their workflows. Finally, since all of the simulation code is written in Python, researchers can have unprecedented flexibility in setting up experiments without having to edit any low-level C++ or CUDA code. In addition to making existing workloads easier, JAX MD allows researchers to take derivatives through whole simulations as well as seamlessly incorporate neural networks into simulations. This paper explores the architecture of JAX MD and its capabilities through several vignettes. Code is available at github.com/jaxmd/jax-md along with an interactive Colab notebook.
reject
The paper is about a software library that allows for relatively easy simulation of molecular dynamics. The library is based on JAX and draws heavily from its benefits. To be honest, this is a difficult paper to evaluate for everyone involved in this discussion. The reason for this is that it is an unconventional paper (software) whose target application is centered on molecular dynamics. While the package seems to be useful for this purpose (and some ML-related purposes), the paper does not expose which of the benefits come from JAX and which ones the authors added in JAX MD. It looks like most of the benefits are built-in benefits of JAX. Furthermore, I am missing a detailed analysis of computation speed (the authors do mention this in the discussion below and in a sentence in the paper, but this is insufficient). Currently, it seems that the package is relatively slow compared to existing alternatives. Here are some recommendations: 1. It would be good if the authors focused more on ML-related problems in the paper, because this would also make sure that the package is not considered a specialized package that overfits to molecular dynamics. 2. Please work out the contribution/delta of JAX MD compared to JAX. 3. Provide a thorough analysis of the computation speed. 4. Make a better case for why JAX MD should be the go-to method for practitioners. Overall, I recommend rejection of this paper. A potential re-submission venue could be JMLR, which has an explicit software track.
train
[ "SkxoFBTijr", "r1loEHaiiH", "rkeCer6iiB", "Hylmq1Ik9H", "SklDy3jycr", "Hkga6iGecB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your careful review of our work and useful suggestions!\n\n> Description of the elements of the design of JAX which are useful here are presented, and appear distinct from other \n> AD libraries like Tensorflow or PyTorch, although the authors stop short of explicitly stating which functionality \n> ...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 1, 1, 1 ]
[ "Hylmq1Ik9H", "SklDy3jycr", "Hkga6iGecB", "iclr_2020_r1xMnCNYvB", "iclr_2020_r1xMnCNYvB", "iclr_2020_r1xMnCNYvB" ]
iclr_2020_BkxX30EFPS
Perceptual Generative Autoencoders
Modern generative models are usually designed to match target distributions directly in the data space, where the intrinsic dimensionality of data can be much lower than the ambient dimensionality. We argue that this discrepancy may contribute to the difficulties in training generative models. We therefore propose to map both the generated and target distributions to the latent space using the encoder of a standard autoencoder, and train the generator (or decoder) to match the target distribution in the latent space. The resulting method, perceptual generative autoencoder (PGA), is then incorporated with a maximum likelihood or variational autoencoder (VAE) objective to train the generative model. With maximum likelihood, PGAs generalize the idea of reversible generative models to unrestricted neural network architectures and arbitrary latent dimensionalities. When combined with VAEs, PGAs can generate sharper samples than vanilla VAEs. Compared to other autoencoder-based generative models using simple priors, PGAs achieve state-of-the-art FID scores on CIFAR-10 and CelebA.
reject
The authors present a new training procedure for generative models where the target and generated distributions are first mapped to a latent space and the divergence between them is minimised in this latent space. The authors achieve state of the art results on two datasets. All reviewers agreed that the idea was very interesting and had a lot of potential. Unfortunately, in the initial version of the paper the main section (section 3) was not very clear, with confusing notation and statements. I thank the authors for taking this feedback positively and significantly revising the writeup. However, even after revising the writeup some of the ideas are still not clear. In particular, during discussions between the AC and reviewers it was pointed out that the training procedure is still not convincing. It was not clear whether the heuristic combination of the deterministic PGA parts of the objective (3) with the likelihood/VAE based terms (9) and (12,13), was conceptually very sound. Unfortunately, most of the initial discussions with the authors revolved around clarity and once we crossed the "clarity" barrier there wasn't enough time to discuss the other technical details of the paper. As a result, even though the paper seems interesting, the initial lack of clarity went against the paper. In summary, based on the reviewer comments, I recommend that the paper not be accepted.
train
[ "H1xr8ihjiS", "H1lbwuw8sH", "rJgR_Pw8oS", "H1gnlPvUoH", "SklgvIDLsr", "HkxXiM0zsB", "B1gdUc7CKS", "H1e5UOxLqB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers,\n\nThank you again for the valuable comments. We have revised the manuscript to improve the clarity and readability of Sec. 3.\n\n- Improved the writing of Sec. 3.1 to avoid confusion.\n\n- Improved the clarity of notation and figures (e.g., $f_{\\phi}$, $g_{\\theta}$, Fig. 1). The encoder and deco...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "iclr_2020_BkxX30EFPS", "iclr_2020_BkxX30EFPS", "B1gdUc7CKS", "H1e5UOxLqB", "HkxXiM0zsB", "iclr_2020_BkxX30EFPS", "iclr_2020_BkxX30EFPS", "iclr_2020_BkxX30EFPS" ]
iclr_2020_Bye8hREtvB
Natural Image Manipulation for Autoregressive Models Using Fisher Scores
Deep autoregressive models are one of the most powerful models that exist today which achieve state-of-the-art bits per dim. However, they lie at a strict disadvantage when it comes to controlled sample generation compared to latent variable models. Latent variable models such as VAEs and normalizing flows allow meaningful semantic manipulations in latent space, which autoregressive models do not have. In this paper, we propose using Fisher scores as a method to extract embeddings from an autoregressive model to use for interpolation and show that our method provides more meaningful sample manipulation compared to alternate embeddings such as network activations.
reject
The paper proposes learning a latent embedding for image manipulation for PixelCNN by using Fisher scores projected to a low-dimensional space. The reviewers have several concerns about this paper: * Novelty * Random projection doesn’t learn useful representations * Weak evaluations Since two expert reviewers are negative about this paper, I cannot recommend acceptance at this stage.
train
[ "BJleU5H-qB", "H1gGc4otjS", "ryekfzLKjB", "BkeWh3hmsS", "SJejdn2msH", "H1gJnKcl9B", "rygwa-Oi5r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Motivated by the observation that powerful deep autoregressive models such as PixelCNNs lack the ability to produce semantically meaningful latent embeddings and generate visually appealing interpolated images by latent representation manipulations, this paper proposes using Fisher scores projected to a reasonably...
[ 3, -1, -1, -1, -1, 8, 1 ]
[ 4, -1, -1, -1, -1, 1, 5 ]
[ "iclr_2020_Bye8hREtvB", "BkeWh3hmsS", "iclr_2020_Bye8hREtvB", "BJleU5H-qB", "rygwa-Oi5r", "iclr_2020_Bye8hREtvB", "iclr_2020_Bye8hREtvB" ]
iclr_2020_H1gL3RVtwr
CURSOR-BASED ADAPTIVE QUANTIZATION FOR DEEP NEURAL NETWORK
Deep neural network (DNN) has rapidly found many applications in different scenarios. However, its large computational cost and memory consumption are barriers to computation-constrained applications. DNN model quantization is a widely used method to reduce the DNN storage and computation burden by decreasing the bit width. In this paper, we propose a novel cursor based adaptive quantization method using differentiable architecture search (DAS). The multiple bits’ quantization mechanism is formulated as a DAS process with a continuous cursor that represents the possible quantization bit. The cursor-based DAS adaptively searches for the desired quantization bit for each layer. The DAS process can be solved via an alternative approximate optimization process, which is designed for the mixed quantization scheme of a DNN model. We further devise a new loss function in the search process to simultaneously optimize accuracy and parameter size of the model. In the quantization step, based on a new strategy, the closest two integers to the cursor are adopted as the bits to quantize the DNN together to reduce the quantization noise and avoid the local convergence problem. Comprehensive experiments on benchmark datasets show that our cursor based adaptive quantization approach achieves the new state-of-the-art for multiple bits’ quantization and can efficiently obtain a lower-size model with comparable or even better classification accuracy.
reject
This paper presents a method to compress DNNs by quantization. The core idea is to use NAS techniques to adaptively set quantization bits at each layer. The proposed method is shown to achieve good results on the standard benchmarks. Through our final discussion, one reviewer agreed to raise the score from ‘Reject’ to ‘Weak Reject’, but remained on the negative side. Another reviewer was not satisfied with the authors’ rebuttal, particularly regarding the appropriateness of the training strategy and evaluation. Moreover, as reviewers pointed out, there were many unclear passages and explanations in the original manuscript. Although we admit that the authors made a great effort to address the comments, the revision seems too major and needs to go through another complete round of peer review. As there was no strong opinion to push this paper, I’d like to recommend rejection.
train
[ "H1e4UrfvqB", "S1xU_gJhor", "Skg24q52sH", "rylQbA0isB", "HyghQL1njB", "Syx6-A1ZqH", "BJlKxH2N5r" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors developed a novel quantization technique that yields layer-wise different mixed-precision quantization. To do so, they alternatively update the pre-trained weights and the quantizer, which they call cursor. The following two features distinguish this paper: using two precision values around the cursor'...
[ 3, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 1, 5 ]
[ "iclr_2020_H1gL3RVtwr", "BJlKxH2N5r", "iclr_2020_H1gL3RVtwr", "H1e4UrfvqB", "Syx6-A1ZqH", "iclr_2020_H1gL3RVtwr", "iclr_2020_H1gL3RVtwr" ]
iclr_2020_S1lvn0NtwH
Mutual Exclusivity as a Challenge for Deep Neural Networks
Strong inductive biases allow children to learn in fast and adaptable ways. Children use the mutual exclusivity (ME) bias to help disambiguate how words map to referents, assuming that if an object has one label then it does not need another. In this paper, we investigate whether or not standard neural architectures have a ME bias, demonstrating that they lack this learning assumption. Moreover, we show that their inductive biases are poorly matched to lifelong learning formulations of classification and translation. We demonstrate that there is a compelling case for designing neural networks that reason by mutual exclusivity, which remains an open challenge.
reject
This paper presents an understudied bias known to exist in the learning patterns of children, but not present in trained NN models. This bias is the mutual exclusivity bias: if the child already knows the word for an object, they can recognize that the object is likely not the referent when a new word is introduced. That is, the names of objects are mutually exclusive. The authors and reviewers had a healthy discussion. In particular, Reviewer 3 would have liked to have seen a new algorithm or model proposed, as well as an analysis of when ME would help or hurt. I hope these ideas can be incorporated into a future submission of this paper.
train
[ "BklyWIdlcS", "S1xAWnjRFH", "SkedBT5ntB", "Hygl1Z-hoB", "BklvRdsjsr", "H1ew95gooS", "rklpjoQ9oH", "HkeodF7qir", "rklsz575oB", "rklKFaQ9jr", "B1xHDTm9iS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper targets at studying the mutual exclusive bias which existed in children learning, to help understand whether there exists similar bias in deep networks. \n\nIn general, the whole paper tries to tell a very interesting, and good story. The paper is very well organized and written. However, I have the fol...
[ 6, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_S1lvn0NtwH", "iclr_2020_S1lvn0NtwH", "iclr_2020_S1lvn0NtwH", "BklvRdsjsr", "H1ew95gooS", "B1xHDTm9iS", "S1xAWnjRFH", "SkedBT5ntB", "BklyWIdlcS", "B1xHDTm9iS", "iclr_2020_S1lvn0NtwH" ]
iclr_2020_SkxOhANKDr
Generative Cleaning Networks with Quantized Nonlinear Transform for Deep Neural Network Defense
Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under white-box attacks. In this paper, we develop a new generative cleaning network with quantized nonlinear transform for effective defense of deep neural networks. The generative cleaning network, equipped with a trainable quantized nonlinear transform block, is able to destroy the sophisticated noise pattern of adversarial attacks and recover the original image content. The generative cleaning network and attack detector network are jointly trained using adversarial learning to minimize both perceptual loss and adversarial loss. Our extensive experimental results demonstrate that our approach outperforms the state-of-the-art methods by large margins in both white-box and black-box attacks. For example, it improves the classification accuracy for white-box attacks upon the second best method by more than 40\% on the SVHN dataset and more than 20\% on the challenging CIFAR-10 dataset.
reject
This paper presents a method to defend neural networks from adversarial attack. The proposed generative cleaning network has a trainable quantization module which is claimed to be able to eliminate adversarial noise and recover the original image. After intensive interaction with the authors and discussion, one expert reviewer (R3) acknowledged that the experimental procedure basically makes sense and increased the score to Weak Reject. Yet, R3 is still not satisfied with some details, such as the number of BPDA iterations, and more importantly, concludes that the meaningful numbers reported in the paper show only small gains, making the claim of the paper less convincing. As the authors seem to have little interest in providing theoretical analysis and support, this issue is critical for the decision, and there was no objection from other reviewers. After carefully reading the paper myself, I decided to support this opinion and therefore recommend rejection.
train
[ "B1erP1osoS", "rJlzPUN9jS", "BylE_J6vsr", "ryea03dwoB", "S1eBQkXmjr", "B1xYZtPzoB", "HJgwJxIzsB", "ryeM4izzsB", "rklNHwMGsS", "rJgRrNMfoB", "BJgs00VTKS", "SkxWHoIRYH", "SygPDfaXcr" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nWe really appreciate your valuable comment! \n\nBPDA is an attack method. In the original paper of BPDA, they provided results on MINST, CIFAR-10, and ImageNet. Here is the reason that we only had BPDA defense results on the CIFAR-10. (1) We found that only one defense paper had results on MINST, so we did not p...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 1, 4 ]
[ "rJlzPUN9jS", "BylE_J6vsr", "ryea03dwoB", "S1eBQkXmjr", "HJgwJxIzsB", "HJgwJxIzsB", "ryeM4izzsB", "BJgs00VTKS", "SkxWHoIRYH", "SygPDfaXcr", "iclr_2020_SkxOhANKDr", "iclr_2020_SkxOhANKDr", "iclr_2020_SkxOhANKDr" ]
iclr_2020_HylthC4twr
Frequency Analysis for Graph Convolution Network
In this work, we develop quantitative results on the learnability of a two-layer Graph Convolutional Network (GCN). Instead of analyzing GCNs under some class of functions, our approach provides a quantitative gap between a two-layer GCN and a two-layer MLP model. Our analysis is based on the graph signal processing (GSP) approach, which can provide much more useful insights than the message-passing computational model. Interestingly, based on our analysis, we have been able to empirically demonstrate a few cases in which GCNs and other state-of-the-art models cannot learn even when the true vertex features are extremely low-dimensional. To demonstrate our theoretical findings and propose a solution to the aforementioned adversarial cases, we build a proof-of-concept graph neural network model with stacked filters named Graph Filters Neural Network (gfNN).
reject
This paper studies two-layer graph convolutional networks and two-layer multi-layer perceptrons and develops quantitative results on their behavior in signal processing settings. The paper received 3 reviews by experts working in this area. R1 recommends Weak Accept, indicating that the paper provides some useful insight (e.g. into when graph neural networks are or are not appropriate for particular problems) and poses some specific technical questions. In follow-up discussions after the author response, R1 and authors agree that there are some overclaims in the paper but that these could be addressed with some toning down of claims and additional discussion. R2 recommends Weak Accept but raises several concerns about the technical contribution of the paper, indicating that some of the conclusions were already known or are unsurprising. R2 concludes "I vote for weak accept, but I am fine if it is rejected." R3 recommends Reject, also questioning the significance of the technical contribution and whether some of the conclusions are well-supported by experiments, as well as some minor concerns about clarity of writing. In their thoughtful responses, authors acknowledge these concerns. Given the split decision, the AC also read the paper. While it is clear it has significant merit, the concerns about significance of the contribution and support for conclusions (as acknowledged by authors) are important, and the AC feels a revision of the paper and another round of peer review is really needed to flesh these issues out.
val
[ "HJl80xU5jr", "Syl_Dy8cjS", "rklf4em5oH", "HJg8vTjKor", "SJlsqBjJjS", "BJxKT2hvjS", "Hkx7hw9JoB", "SJxiGab2tS", "B1xVuu6aKr", "rklob41CKr" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update: We add a footnote to the paragraph and extra information of the three datasets in Table 2 (appendix).\n\nIndeed, we will update our manuscript with this point. Thank you again for pointing this out!", "I thank the authors for considering my comment seriously and for providing additional information for c...
[ -1, -1, -1, -1, -1, -1, -1, 6, 1, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "Syl_Dy8cjS", "rklf4em5oH", "HJg8vTjKor", "BJxKT2hvjS", "B1xVuu6aKr", "rklob41CKr", "SJxiGab2tS", "iclr_2020_HylthC4twr", "iclr_2020_HylthC4twr", "iclr_2020_HylthC4twr" ]
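The low-pass filtering view of GCNs that underlies the GSP analysis in the record above can be illustrated with a minimal sketch: repeatedly multiplying vertex features by the symmetrically normalized adjacency (with self-loops), in the style of stacked-filter models such as gfNN. The function name and defaults below are our own illustration, not the paper's code.

```python
import numpy as np

def graph_filter_features(adj, X, k=2):
    """Low-pass graph filtering: apply the symmetrically normalized
    adjacency (with self-loops) to the feature matrix X, k times."""
    A = adj + np.eye(adj.shape[0])          # add self-loops
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_hat = D_inv_sqrt @ A @ D_inv_sqrt     # normalized adjacency
    out = X.copy()
    for _ in range(k):
        out = A_hat @ out                   # each pass damps high frequencies
    return out
```

On a regular graph the constant signal passes through unchanged, while high-frequency components shrink with each pass; this smoothing is exactly the effect a GSP-style analysis of GCNs quantifies.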
iclr_2020_Skeq30NFPr
Stochastic Mirror Descent on Overparameterized Nonlinear Models
Most modern learning problems are highly overparameterized, meaning that the model has many more parameters than the number of training data points, and as a result, the training loss may have infinitely many global minima (in fact, a manifold of parameter vectors that perfectly interpolates the training data). Therefore, it is important to understand which interpolating solutions we converge to, how they depend on the initialization point and the learning algorithm, and whether they lead to different generalization performances. In this paper, we study these questions for the family of stochastic mirror descent (SMD) algorithms, of which the popular stochastic gradient descent (SGD) is a special case. Recently it has been shown that, for overparameterized linear models, SMD converges to the global minimum that is closest (in terms of the Bregman divergence of the mirror used) to the initialization point, a phenomenon referred to as implicit regularization. Our contributions in this paper are both theoretical and experimental. On the theory side, we show that in the overparameterized nonlinear setting, if the initialization is close enough to the manifold of global optima, SMD with sufficiently small step size converges to a global minimum that is approximately the closest global minimum in Bregman divergence, thus attaining approximate implicit regularization. For highly overparameterized models, this closeness comes for free: the manifold of global optima is so high dimensional that with high probability an arbitrarily chosen initialization will be close to the manifold. On the experimental side, our extensive experiments on the MNIST and CIFAR-10 datasets, using various initializations, various mirror descents, and various Bregman divergences, consistently confirm that this phenomenon indeed happens in deep learning: SMD converges to the closest global optimum to the initialization point in the Bregman divergence of the mirror used.
Our experiments further indicate that there is a clear difference in the generalization performance of the solutions obtained from different SMD algorithms. Experimenting on the CIFAR-10 dataset with different regularizers, l1 to encourage sparsity, l2 (yielding SGD) to encourage small Euclidean norm, and l10 to discourage large components in the parameter vector, consistently and definitively shows that, for small initialization vectors, l10-SMD has better generalization performance than SGD, which in turn has better generalization performance than l1-SMD. This surprising, and perhaps counter-intuitive, result strongly suggests the importance of a comprehensive study of the role of regularization, and the choice of the best regularizer, to improve the generalization performance of deep networks.
reject
This paper takes results related to the convergence and implicit regularization of stochastic mirror descent, as previously applied within overparameterized linear models, and extends them to the nonlinear case. Among other things, conditions are derived for guaranteeing convergence to a global minimizer that is (nearly) closest to the initialization with respect to a divergence that depends upon the mirror potential. Overall the paper is well-written and likely at least somewhat accessible even for non-experts in this field. That being said, two reviewers voted to reject while one chose accept; however, during the rebuttal period the accept reviewer expressed a somewhat borderline sentiment. As for the reviewers that voted to reject, a common criticism was the perceived similarity with reference (Azizan and Hassibi, 2019), as well as unsettled concerns about the reasonableness of the assumptions involved (e.g., Assumption 1). With respect to the former, among other similarities the proof technique from both papers relies heavily on Lemma 6. It was then felt that this undercut the novelty somewhat. Beyond this though, even the accept reviewer raised an unsettled issue regarding the ease of finding an initialization point close to the manifold that nonetheless satisfies the conditions of Assumption 1. In other words, as networks become more complex such that points are closer to the manifold of optimal solutions, further non-convexity could be introduced such that the non-negativity of the stated divergence becomes more difficult to achieve. While the author response to this point is reasonable, it feels a bit like thoughtful speculation forged in the crunch time of a short rebuttal period, and possibly subject to change upon further reflection. 
In this regard a less time-constrained revision could be beneficial (including updates to address the other points mentioned above), and I am confident that this work can be positively received at another venue in the near future.
train
[ "S1gJmHhziH", "ryx4tvdTKB", "B1eM6yo3jH", "Hylm788hsH", "H1lqJc6oor", "Hkg6jFpioS", "HJeBf9TsjH", "SJxY1oCnKS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the performance of the mirror gradient method when applied to the overparameterized network. The authors claim that the SMD method could find the regularized global minimize for different potential functions, in terms of minimal Bregman distance. Further experiments are carried out to back up th...
[ 3, 3, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_Skeq30NFPr", "iclr_2020_Skeq30NFPr", "Hylm788hsH", "Hkg6jFpioS", "ryx4tvdTKB", "SJxY1oCnKS", "S1gJmHhziH", "iclr_2020_Skeq30NFPr" ]
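The SMD family discussed in the record above admits a compact implementation: with the q-norm potential, each step applies a stochastic gradient in the mirror (dual) space z = ∇ψ(w) and maps back through the inverse mirror map. The sketch below is our own illustration on an overparameterized linear model, not the paper's code; q=2 recovers plain SGD.

```python
import numpy as np

def smd_qnorm(X, y, q=3.0, lr=0.005, epochs=500, seed=0):
    """Stochastic mirror descent with potential psi(w) = ||w||_q^q / q.
    Mirror map: z = sign(w) |w|^(q-1); inverse: w = sign(z) |z|^(1/(q-1))."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)            # small initialization
    z = np.sign(w) * np.abs(w) ** (q - 1.0)
    for _ in range(epochs):
        for i in rng.permutation(n):
            err = X[i] @ w - y[i]                 # gradient of the instantaneous loss
            z -= lr * err * X[i]                  # step in the mirror space
            w = np.sign(z) * np.abs(z) ** (1.0 / (q - 1.0))
    return w

# Toy overparameterized problem: 5 data points, 20 parameters.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 20))
y = X @ rng.normal(size=20)
w_sgd = smd_qnorm(X, y, q=2.0)   # plain SGD as the special case
```

On interpolating problems each choice of q drives the iterates to a different interpolating solution (approximately the one closest to the initialization in the Bregman divergence of ψ), which is the implicit-regularization phenomenon the record debates.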
iclr_2020_Sygn20VtwH
Metagross: Meta Gated Recursive Controller Units for Sequence Modeling
This paper proposes Metagross (Meta Gated Recursive Controller), a new neural sequence modeling unit. Our proposed unit is characterized by recursive parameterization of its gating functions, i.e., gating mechanisms of Metagross are controlled by instances of itself, which are repeatedly called in a recursive fashion. This can be interpreted as a form of meta-gating and recursively parameterizing a recurrent model. We postulate that our proposed inductive bias provides modeling benefits pertaining to learning with inherently hierarchically-structured sequence data (e.g., language, logical or music tasks). To this end, we conduct extensive experiments on recursive logic tasks (sorting, tree traversal, logical inference), sequential pixel-by-pixel classification, semantic parsing, code generation, machine translation and polyphonic music modeling, demonstrating the widespread utility of the proposed approach, i.e., achieving state-of-the-art (or close) performance on all tasks.
reject
This paper proposes a recurrent architecture based on a recursive gating mechanism. The reviewers leaned towards rejection on the basis of questions regarding novelty, analysis, and the experimental setting. Surprisingly, the authors chose not to engage in discussion, as all reviewers seemed pretty open to having their minds changed. If none of the reviewers will champion the paper, and the authors cannot be bothered to champion their own work, I see no reason to recommend acceptance.
train
[ "BJlXlGyJ9H", "S1gofkQqYS", "S1xtQSg0Fr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update: As no rebuttal has been posted I stand by my assessment.\n\nSummary\nThis papers proposes a recursive parameterization of gates in a recurrent model. Instead of directly conditioning gates on the input and previous hidden representation, the proposed model recursively calls itself to parameterize the gate....
[ 3, 3, 3 ]
[ 3, 5, 3 ]
[ "iclr_2020_Sygn20VtwH", "iclr_2020_Sygn20VtwH", "iclr_2020_Sygn20VtwH" ]
iclr_2020_r1eh30NFwB
Variational Autoencoders with Normalizing Flow Decoders
Recently proposed normalizing flow models such as Glow (Kingma & Dhariwal, 2018) have been shown to be able to generate high-quality, high-dimensional images with relatively fast sampling speed. Due to the inherently restrictive design of their architecture, however, such models must be excessively deep in order to achieve effective training. In this paper we propose to combine the Glow model with an underlying variational autoencoder in order to counteract this issue. We demonstrate that our proposed model is competitive with Glow in terms of image quality while requiring far less time for training. Additionally, our model achieves a state-of-the-art FID score on CIFAR-10 for a likelihood-based model.
reject
The paper received mixed reviews: WR (R1,R3) and WA (R2). AC has carefully read reviews and rebuttal and examined the paper. Unfortunately, the AC sides with R1 & R3, who are more experienced in this field than R2, and feels that paper does not quite meet the acceptance threshold. The authors should incorporate the comments of the reviewers and resubmit to another venue.
train
[ "rJgx7oKM5B", "HJex9qu3tH", "HyxZH1Bnjr", "SJxeFpEnjH", "Sylf32V2ir", "H1eIM2VhoH", "B1e3Kw7atH", "SJx03SXs9S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "public" ]
[ "This paper proposes adding additional flow layers on the decoder of VAEs. The authors make two claims\n1. The proposed model achieves better image quality than a standalone Glow.\n2. The proposed model is faster to train than Glows.\nThe intuition is a VAE can learn a distribution close enough to be target distrib...
[ 3, 3, -1, -1, -1, -1, 6, -1 ]
[ 3, 5, -1, -1, -1, -1, 4, -1 ]
[ "iclr_2020_r1eh30NFwB", "iclr_2020_r1eh30NFwB", "SJx03SXs9S", "HJex9qu3tH", "B1e3Kw7atH", "rJgx7oKM5B", "iclr_2020_r1eh30NFwB", "iclr_2020_r1eh30NFwB" ]
iclr_2020_ryga2CNKDH
Evaluating Lossy Compression Rates of Deep Generative Models
Deep generative models have achieved remarkable progress in recent years. Despite this progress, quantitative evaluation and comparison of generative models remains one of the important challenges. One of the most popular metrics for evaluating generative models is the log-likelihood. While the direct computation of log-likelihood can be intractable, it has been recently shown that the log-likelihood of some of the most interesting generative models such as variational autoencoders (VAE) or generative adversarial networks (GAN) can be efficiently estimated using annealed importance sampling (AIS). In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate distortion curves to evaluate and compare deep generative models. We show that we can approximate the entire rate distortion curve using a single run of AIS for roughly the same computational cost as a single log-likelihood estimate. We evaluate lossy compression rates of different deep generative models such as VAEs, GANs (and their variants) and adversarial autoencoders (AAE) on MNIST and CIFAR10, and arrive at a number of insights not obtainable from log-likelihoods alone.
reject
The paper proposed a method to evaluate latent-variable-based generative models by estimating the compression in the latents (rate) and the distortion in the resulting reconstructions. While reviewers have clearly appreciated the theoretical novelty in using AIS to get an upper bound on the rate, there are concerns about the missing empirical comparison with other related metrics (precision-recall) and the limited practical applicability of the method due to its large computational cost. Authors should consider comparing with the PR metric and discuss directions that could make the method as practically relevant as other related metrics.
test
[ "Bkeatl8PcB", "H1xXH6qnsS", "Bkl0g_q3iH", "BkxW5Bc2sH", "Skx4izqhsH", "ryldo2qvYr", "BygiiXj6KS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a method for evaluating latent-variable generative models in terms of the rate-distortion curve that compares the number of bits needed to encode the representation with how well you can reconstruct an input under some distortion measure. To estimate this curve, the author’s use AIS and show ho...
[ 3, -1, -1, -1, -1, 8, 3 ]
[ 5, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_ryga2CNKDH", "ryldo2qvYr", "BygiiXj6KS", "Bkeatl8PcB", "iclr_2020_ryga2CNKDH", "iclr_2020_ryga2CNKDH", "iclr_2020_ryga2CNKDH" ]
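The AIS machinery that the rate-distortion estimator in the record above builds on can be sketched in a few lines for a 1-D target: anneal from a standard normal prior to an unnormalized target along a geometric path, accumulating importance weights whose mean is an unbiased estimate of the normalizing constant. This toy sketch is our own, not the paper's code; it estimates log Z rather than a full rate-distortion curve.

```python
import numpy as np

def ais_log_z(log_f, n_chains=500, n_temps=100, prop_std=0.5, seed=0):
    """Annealed importance sampling estimate of log Z = log int f(x) dx,
    annealing from a standard normal prior to the unnormalized target f
    along p_b proportional to p0^(1-b) * f^b, with one Metropolis-Hastings
    step per temperature."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_temps)
    log_p0 = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)  # normalized prior
    x = rng.normal(size=n_chains)
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # weight update: ratio of consecutive unnormalized intermediates
        log_w += (b - b_prev) * (log_f(x) - log_p0(x))
        # MH step targeting the current intermediate distribution
        prop = x + prop_std * rng.normal(size=n_chains)
        log_acc = ((1.0 - b) * (log_p0(prop) - log_p0(x))
                   + b * (log_f(prop) - log_f(x)))
        accept = np.log(rng.uniform(size=n_chains)) < log_acc
        x = np.where(accept, prop, x)
    # log of the mean importance weight (numerically stable log-mean-exp)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))
```

For the unnormalized Gaussian target f(x) = exp(-(x-3)^2/2), the true value is log sqrt(2*pi) ≈ 0.919, which the estimator recovers to within sampling error.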
iclr_2020_B1gR3ANFPS
Non-linear System Identification from Partial Observations via Iterative Smoothing and Learning
System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs. It is a key step for model-based control, estimator design, and output prediction. This work presents an algorithm for non-linear offline system identification from partial observations, i.e. situations in which the system's full state is not directly observable. The algorithm presented, called SISL, iteratively infers the system's full state through non-linear optimization and then updates the model parameters. We test our algorithm on a simulated system of coupled Lorenz attractors, showing its ability to identify high-dimensional systems that prove intractable for particle-based approaches. We also use SISL to identify the dynamics of an aerobatic helicopter. By augmenting the state with unobserved fluid states, we learn a model that predicts the acceleration of the helicopter better than state-of-the-art approaches.
reject
The paper is about nonlinear system identification in an EM-style learning framework. The idea is to use nonlinear programming for the E step (finding a MAP estimate) and then refine the model parameters. In flavor, this approach is similar to the work by Roweis and Ghahramani. However, this paper does not offer any new insights whatsoever and the (very short) methods section arrives at proposing to compute the maximum a posteriori estimate (eq. 5). While the motivation for this given in the paper is a bit hard to understand, it is of course a very well-known and useful estimator. Besides the maximum likelihood estimator, this is one of the most commonly used point estimators; see any textbook on statistical signal processing. There has been quite a bit of work in the signal processing community over the last 10 years, and a good overview can be found here: https://web.stanford.edu/~boyd/papers/pdf/rt_cvx_sig_proc.pdf. This should give evidence that this is indeed a standard way of solving the problem and it does work really well. Given that we have such fast and good optimizers these days, it is common to solve Kalman filtering/smoothing problems via this optimization problem. The paper does not contain any analysis at all. The experiments do of course show that the method works (when there is low noise). Again, we know very well that the MAP estimate is a decent estimator for unimodal problems. The MAP estimator can also be made to work well for noisy situations. As for the comments that sequential Monte Carlo methods do not work in higher dimensions, that is indeed true. However, there are now algorithms that work in much higher dimensions than those considered by the authors of this paper, e.g. https://ieeexplore.ieee.org/document/8752074, which also contains an up-to-date survey on the topic. Furthermore, when it comes to particle smoothing there are also much more efficient smoothers than 10 years ago.
The area of particle smoothing has also evolved rapidly over the past years. Summary: The paper makes use of the well-known MAP estimator for learning nonlinear dynamical systems (states and parameters). This is by now a standard technique in signal processing. There are several throw-away comments on SMC that are not valid and that are not grounded in the intense research of that field over the past decade.
train
[ "BJehPT0uKB", "HJxdbmTYoS", "H1x2_HhroB", "HJlV2DnBsH", "HJgcNDhroB", "BkgPh4nroB", "HylMVQnrjr", "rygPOM5kqH", "B1eBS0jm5B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n\n#######################\nRebuttal Response:\nThanks for these clarifications and updating the paper. I adapted my score to weak accept. To increase my score to accept, I would like to see evaluations of control experiments rather than just comparing the MSE. \n\n#######################\nReview: \n\nSummary: \n...
[ 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_B1gR3ANFPS", "iclr_2020_B1gR3ANFPS", "BJehPT0uKB", "BJehPT0uKB", "BJehPT0uKB", "rygPOM5kqH", "B1eBS0jm5B", "iclr_2020_B1gR3ANFPS", "iclr_2020_B1gR3ANFPS" ]
iclr_2020_S1gR2ANFvB
Model Comparison of Beer data classification using an electronic nose
Olfaction has been and still is an area that is challenging to the research community. As with other senses of the body, there has been a push to replicate the sense of smell, in the form of an electronic nose, to aid in identifying odorous compounds. At IBM, our team (Cogniscent) has designed a modular sensor board platform based on the artificial olfaction concept, which we call EVA (Electronic Volatile Analyzer). EVA is an IoT electronic nose device that aims to reproduce olfaction in living beings by integrating an array of partially specific and uniquely selective smell recognition sensors which are directly exposed to the target chemical analyte or the environment. We are exploring a new technique called temperature-controlled oscillation, which gives us a virtual array of sensors to represent our signals/fingerprints. In our study, we run experiments on identifying different types of beer using EVA. In order to carry out this classification task successfully, the entire process is important, from the preparation of samples and a consistent data collection protocol all the way to providing the data as input to a machine learning model. In this paper, we discuss the process of sniffing volatile organic compounds from liquid beer samples and successfully classifying different brands of beer as a pilot test. We evaluated different machine learning models in order to get the best classification accuracy for our beer samples. The best classification accuracy is achieved by a multi-layer perceptron (MLP) artificial neural network (ANN) model: classification of three different brands of beer after splitting one week of data into training and testing sets yielded an accuracy of 97.334%, while using separate weeks of data for the training and testing sets yielded an accuracy of 67.812%; this is because drift plays a role in the overall classification process.
Using a random forest, the classification accuracy achieved by the model is 0.923, and a decision tree achieved 0.911.
reject
The paper has received all negative scores. Furthermore, one of the reviewers identified an anonymity violation. This is a reject.
train
[ "Bkxb4isLFS", "HJgZHiyqYS", "BylpBu5BcS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work designs an electronic nose to classify beers and gets extremely high accuracy through neural network methods. \nHowever, it seems that ICLR is not a proper conference for such a work. \nThe main contribution of this work may be the electronic devices rather than the machine learning methods.\nThe author ...
[ 1, 1, 1 ]
[ 1, 5, 1 ]
[ "iclr_2020_S1gR2ANFvB", "iclr_2020_S1gR2ANFvB", "iclr_2020_S1gR2ANFvB" ]
iclr_2020_BklC2RNKDS
Scalable Neural Learning for Verifiable Consistency with Temporal Specifications
Formal verification of machine learning models has attracted attention recently, and significant progress has been made on proving simple properties like robustness to small perturbations of the input features. In this context, it has also been observed that folding the verification procedure into training makes it easier to train verifiably robust models. In this paper, we extend the applicability of verified training to (1) recurrent neural network architectures and (2) complex specifications that go beyond simple adversarial robustness, particularly specifications that capture temporal properties like requiring that a robot periodically visits a charging station or that a language model always produces sentences of bounded length. Experiments show that while models trained using standard training often violate desired specifications, our verified training method produces models that both perform well (in terms of test error or reward) and can be shown to be provably consistent with specifications.
reject
This submission proposes a deep network training method that verifies desired temporal properties of the resulting model. Strengths: the proposed approach is valid and has some interesting components. Weaknesses: the novelty is limited, and the experimental validation could be improved. Opinion on this paper was mixed, but the more confident reviewers believed that the novelty is insufficient for acceptance.
train
[ "S1lT9sNZqr", "HJlf4MBiiS", "SkgNmtmjsr", "HkghtdJiiB", "SyexW71osB", "SJljG_TKoH", "HyeROP6toS", "ryxnVICFsB", "ryepDdTFor", "H1gcV_6Ysr", "rJlni8-AYS", "Skxg_LZW5r", "rylcDWV_5S" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper extends bound propagation based robust training method to complicated settings where temporal specifications are given. Previous works mainly focus on using bound propagation for robust classification only. The authors first extend bound propagation to more complex networks with gates and softmax, and d...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 1 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 4 ]
[ "iclr_2020_BklC2RNKDS", "S1lT9sNZqr", "SyexW71osB", "ryepDdTFor", "HyeROP6toS", "S1lT9sNZqr", "rylcDWV_5S", "iclr_2020_BklC2RNKDS", "rJlni8-AYS", "Skxg_LZW5r", "iclr_2020_BklC2RNKDS", "iclr_2020_BklC2RNKDS", "iclr_2020_BklC2RNKDS" ]
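The bound-propagation machinery referenced in the reviews of the record above can be sketched generically: push an axis-aligned input box through each layer so that the output box is guaranteed to contain every reachable output. The two helpers below implement standard interval bound propagation for an affine layer and ReLU; names and shapes are illustrative, not the paper's code.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Interval bound propagation through x -> W @ x + b: the output box
    is centered at W @ mu + b with radius |W| @ r, for input center mu
    and radius r."""
    mu, r = (l + u) / 2.0, (u - l) / 2.0
    out_mu = W @ mu + b
    out_r = np.abs(W) @ r
    return out_mu - out_r, out_mu + out_r

def ibp_relu(l, u):
    """ReLU is elementwise monotone, so bounds pass straight through."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)
```

Soundness means every input in the box maps to an output inside the propagated box; verified training then adds a loss term that tightens these bounds so that the specification holds over the whole box, which is the ingredient this paper extends to recurrent architectures and temporal specifications.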
iclr_2020_B1gkpR4FDB
Statistical Adaptive Stochastic Optimization
We investigate statistical methods for automatically scheduling the learning rate (step size) in stochastic optimization. First, we consider a broad family of stochastic optimization methods with constant hyperparameters (including the learning rate and various forms of momentum) and derive a general necessary condition for the resulting dynamics to be stationary. Based on this condition, we develop a simple online statistical test to detect (non-)stationarity and use it to automatically drop the learning rate by a constant factor whenever stationarity is detected. Unlike in prior work, our stationarity condition and our statistical test applies to different algorithms without modification. Finally, we propose a smoothed stochastic line-search method that can be used to warm up the optimization process before the statistical test can be applied effectively. This removes the expensive trial and error for setting a good initial learning rate. The combined method is highly autonomous and it attains state-of-the-art training and testing performance in our experiments on several deep learning tasks.
reject
The paper proposes an approach to automatically tune the learning rate by using a statistical test that detects the stationarity of the learning dynamics. It also proposes a robust line search algorithm to reduce the need to tune the initial learning rate. The statistical test uses a test function which is taken to be a quadratic function in the paper for simplicity, although any choice of test function is valid. Although the method itself is interesting, the empirical benefits over SGD/ADAM seem to be minor.
val
[ "Syeng6KvsS", "r1gSNkwoiB", "SJlnmoD2Yr", "rkgjgoSiiB", "r1eIFr3OjS", "HygZW0FDoS", "HygN5CtvjS", "BkxTCIPAuH", "ByxuWGXCKS" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank Reviewer #1 for the feedback. However, the review comments show major misunderstandings of the method and the goal of the paper, which we elaborate below.\n\nFirst, we do *not* make any quadratic approximation of the objective function F(x). On the contrary, the advantage of our method is that it applies ...
[ -1, -1, 6, -1, -1, -1, -1, 8, 3 ]
[ -1, -1, 3, -1, -1, -1, -1, 1, 3 ]
[ "ByxuWGXCKS", "Syeng6KvsS", "iclr_2020_B1gkpR4FDB", "HygZW0FDoS", "iclr_2020_B1gkpR4FDB", "SJlnmoD2Yr", "BkxTCIPAuH", "iclr_2020_B1gkpR4FDB", "iclr_2020_B1gkpR4FDB" ]
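A minimal caricature of the statistical learning-rate scheduling described in the record above: monitor a scalar test function along the trajectory, test whether its recent increments are consistent with zero drift, and drop the learning rate once stationarity can no longer be rejected. The paper derives a specific stationarity condition and test; the one-sample t-test below is our simplified stand-in, and all names and defaults are illustrative.

```python
import numpy as np
from scipy import stats

class StatTestScheduler:
    """Simplified sketch: track recent values of a scalar test function
    (here, the minibatch loss); if a t-test on the window of increments
    cannot reject zero drift, declare the dynamics stationary and cut
    the learning rate by a constant factor."""
    def __init__(self, lr=0.1, drop=0.1, window=50, pval=0.2):
        self.lr, self.drop, self.window, self.pval = lr, drop, window, pval
        self.hist = []

    def step(self, loss):
        self.hist.append(loss)
        if len(self.hist) >= self.window:
            diffs = np.diff(self.hist[-self.window:])
            _, p = stats.ttest_1samp(diffs, 0.0)
            if p > self.pval:        # cannot reject zero drift -> stationary
                self.lr *= self.drop
                self.hist.clear()    # restart the window at the new rate
        return self.lr
```

While the loss is still decreasing, the drift test rejects stationarity and the learning rate is left alone; once the loss plateaus into pure noise, the rate is cut by the `drop` factor and the window restarts.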
iclr_2020_r1gx60NKPS
JAUNE: Justified And Unified Neural language Evaluation
We review the limitations of BLEU and ROUGE -- the most popular metrics used to assess reference summaries against hypothesis summaries -- introduce JAUNE, a set of criteria for how a good metric should behave, and propose concrete ways to use recent Transformer-based language models to assess reference summaries against hypothesis summaries.
reject
The authors tackle the question of automatic metrics for assessing document similarity and propose the use of Transformer-based language models as a critic providing scores to samples. As a note, ideas like these have also been adopted in Computer Vision with the use of the Inception score as a proxy for the quality of generated images. The authors ask great questions in the paper and they clearly tackle a very important problem, that of automatic measures for assessing text quality. While their first indications are not negative, this paper lacks the rigor and depth of experiments of a conference paper that would convince the research community to abandon BLEU and ROUGE in lieu of some other metric. It's perhaps a good workshop paper or a short paper at a *CL conference. Specifically, we would need more tasks where BLEU/ROUGE is the standard measure, showing that the proposed measure correlates better with humans, i.e., cases where word overlap is in theory a good proxy for similarity given a reference sentence (e.g., logical entailment is not such a prototypical task). MT is a first step towards that, but summarization is also necessary, I would say. Other questions of interest relate to the type of LM (does it only need to be RoBERTa?) and the quality of the LM (what if I badly tune my LM?). On a more personal note: we all know that BLEU is not a good metric (especially for document-level judgements), and every now and then there have been proposals to replace BLEU that do correlate better (e.g., http://ccc.inaoep.mx/~villasen/bib/Regression%20for%20machine%20translation%20evaluation.pdf). However, BLEU is still here due to its simplicity. Please keep pushing this research, and I'm looking forward to seeing more experimental evidence.
train
[ "rJxVtblnjB", "BkxHgCJ3oB", "SyglBxlnjr", "BylXkCeatH", "HyeKi-nlcH", "H1xKiqomqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer #3,\n\nThank you for your comments. We have evaluated the proposed metric according to the scorecard showing it performs better on the proposed scorecard than BLEU/ROUGE.\n\nWe further show how this scorecard performance translates into a similar performance on WMT sentence pairs.\n\nFinally, the sco...
[ -1, -1, -1, 1, 1, 1 ]
[ -1, -1, -1, 3, 5, 3 ]
[ "HyeKi-nlcH", "H1xKiqomqH", "BylXkCeatH", "iclr_2020_r1gx60NKPS", "iclr_2020_r1gx60NKPS", "iclr_2020_r1gx60NKPS" ]
iclr_2020_rylxpA4YwH
On the Evaluation of Conditional GANs
Conditional Generative Adversarial Networks (cGANs) are finding increasingly widespread use in many application domains. Despite outstanding progress, quantitative evaluation of such models often involves multiple distinct metrics to assess different desirable properties, such as image quality, conditional consistency, and intra-conditioning diversity. In this setting, model benchmarking becomes a challenge, as each metric may indicate a different "best" model. In this paper, we propose the Frechet Joint Distance (FJD), which is defined as the Frechet distance between joint distributions of images and conditioning, allowing it to implicitly capture the aforementioned properties in a single metric. We conduct proof-of-concept experiments on a controllable synthetic dataset, which consistently highlight the benefits of FJD when compared to currently established metrics. Moreover, we use the newly introduced metric to compare existing cGAN-based models for a variety of conditioning modalities (e.g. class labels, object masks, bounding boxes, images, and text captions). We show that FJD can be used as a promising single metric for model benchmarking.
reject
The paper presents an extension of FID for conditional generation settings. While it's an important problem to address, the reviewers were concerned about the novelty and advantage of the proposed method over the existing methods. The evaluation is reported on toy datasets, and the significance is limited.
train
[ "H1lQRO9ior", "H1lQQ99jiB", "SklqZccijS", "r1eSDK5isH", "B1xTzKcsjB", "BJxo_VsaOH", "HJgKpMenKH", "rJxA3B9S5r" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all reviewers for their critical reviews and insights. A common concern among reviewers was that FID and FJD appeared to give similar model rankings and therefore the advantage of FJD over FID was unclear. We hope to address this common concern here. \n\nR1: “Basically, FID can also give a g...
[ -1, -1, -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2020_rylxpA4YwH", "SklqZccijS", "BJxo_VsaOH", "rJxA3B9S5r", "HJgKpMenKH", "iclr_2020_rylxpA4YwH", "iclr_2020_rylxpA4YwH", "iclr_2020_rylxpA4YwH" ]
iclr_2020_rkxWpCNKvS
Improved Image Augmentation for Convolutional Neural Networks by Copyout and CopyPairing
Image augmentation is a widely used technique to improve the performance of convolutional neural networks (CNNs). In common image shifting, cropping, flipping, shearing and rotating are used for augmentation. But there are more advanced techniques like Cutout and SamplePairing. In this work we present two improvements of the state-of-the-art Cutout and SamplePairing techniques. Our new method called Copyout takes a square patch of another random training image and copies it onto a random location of each image used for training. The second technique we discovered is called CopyPairing. It combines Copyout and SamplePairing for further augmentation and even better performance. We apply different experiments with these augmentation techniques on the CIFAR-10 dataset to evaluate and compare them under different configurations. In our experiments we show that Copyout reduces the test error rate by 8.18% compared with Cutout and 4.27% compared with SamplePairing. CopyPairing reduces the test error rate by 11.97% compared with Cutout and 8.21% compared with SamplePairing. Copyout and CopyPairing implementations are available at https://github.com/anonym/anonym.
reject
The reviewers raised issues with the novelty and the quality of the exposition. I recommend rejection.
train
[ "SylmDA_-FS", "ryg1OCnTtr", "SJgZAoMH9S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I think this paper is not enough to accept in ICLR because\n- Lack of novelty.\n - CutMix [1] is very similar to Copyout.\n - To verify the novelty, a more sophisticated description and experimental supports should be required.\n- Insufficient experiments for supporting the effectiveness of the proposed method.\...
[ 1, 1, 1 ]
[ 4, 3, 3 ]
[ "iclr_2020_rkxWpCNKvS", "iclr_2020_rkxWpCNKvS", "iclr_2020_rkxWpCNKvS" ]
iclr_2020_Hye-p0VFPB
Efficient Systolic Array Based on Decomposable MAC for Quantized Deep Neural Networks
Deep Neural Networks (DNNs) have achieved high accuracy in various machine learning applications in recent years. As the recognition accuracy of deep learning applications increases, reducing the complexity of these neural networks and performing the DNN computation on embedded systems or mobile devices become an emerging and crucial challenge. Quantization has been presented to reduce the utilization of computational resources by compressing the input data and weights from floating-point numbers to integers with shorter bit-width. For practical power reduction, it is necessary to operate these DNNs with quantized parameters on appropriate hardware. Therefore, systolic arrays are adopted to be the major computation units for matrix multiplication in DNN accelerators. To obtain a better tradeoff between the precision/accuracy and power consumption, using parameters with various bit-widths among different layers within a DNN is an advanced quantization method. In this paper, we propose a novel decomposition strategy to construct a low-power decomposable multiplier-accumulator (MAC) for the energy efficiency of quantized DNNs. In the experiments, when 65% multiplication operations of VGG-16 are operated in shorter bit-width with at most 1% accuracy loss on the CIFAR-10 dataset, our decomposable MAC has 50% energy reduction compared with a non-decomposable MAC.
reject
This paper presents an energy-efficient architecture for quantized deep neural networks based on decomposable multiplication using MACs. Although the proposed approach is shown to be somewhat effective, two reviewers pointed out that a very similar idea was already proposed in the previous work, BitBlade [1]. As the authors did not submit a rebuttal to defend this critical point, I'd like to recommend rejection. I recommend that the authors discuss and clarify the difference from [1] in a future version of the paper. [1] Sungju Ryu, Hyungjun Kim, Wooseok Yi, Jae-Joon Kim. BitBlade: Area and Energy-Efficient Precision-Scalable Neural Network Accelerator with Bitwise Summation. DAC'2019
test
[ "HJxryQQy5r", "SyeLKScX5H", "Skg4roPYcr", "HkgSF3sRKr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "This paper proposes to shorten the shift-addition operations in the straightforward configurable MACs (Sharma et al., 2018), to an addition-shift style. The authors claim that the new design is able to lower the energy consumption in the matrix multiplication. In the experimental analysis, the authors demonstrate ...
[ 1, 3, 3, -1 ]
[ 3, 4, 1, -1 ]
[ "iclr_2020_Hye-p0VFPB", "iclr_2020_Hye-p0VFPB", "iclr_2020_Hye-p0VFPB", "iclr_2020_Hye-p0VFPB" ]
iclr_2020_H1x-pANtDB
A closer look at network resolution for efficient network design
There is growing interest in designing lightweight neural networks for mobile and embedded vision applications. Previous works typically reduce computations from the structure level. For example, group convolution based methods reduce computations by factorizing a vanilla convolution into depth-wise and point-wise convolutions. Pruning based methods prune redundant connections in the network structure. In this paper, we explore the importance of network input for achieving optimal accuracy-efficiency trade-off. Reducing input scale is a simple yet effective way to reduce computational cost. It does not require careful network module design, specific hardware optimization and network retraining after pruning. Moreover, different input scales contain different representations to learn. We propose a framework to mutually learn from different input resolutions and network widths. With the shared knowledge, our framework is able to find better width-resolution balance and capture multi-scale representations. It achieves consistently better ImageNet top-1 accuracy over US-Net under different computation constraints, and outperforms the best compound scale model of EfficientNet by 1.5%. The superiority of our framework is also validated on COCO object detection and instance segmentation as well as transfer learning.
reject
Main content: new training regime for multi-resolution slimmable networks. Discussion: Reviewer 4 believes the main contribution of mutual learning from width and resolution is a bit weak. Reviewer 1: incremental work, details/baselines missing in the experimental section. Reviewer 2 (least detailed): well-written with good results. Recommendation: I agree with reviewers 1 and 4 that the experimental section could be improved. Leaning to reject.
train
[ "Bylv6b42iS", "r1gxzZNhjH", "r1geFxmoiB", "HkljUvZior", "S1evBrJ9sr", "Sygn73jKiB", "ryxDPoitor", "S1x7mOvMoS", "rkgDydwMjH", "BJg3XvPMoH", "SygTdtzCKH", "ryg1bSmy5H", "Byee1Y_MqS", "S1gKEglM5H" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "\n(2). The objective and motivation of our framework and multi-resolution augmentation is different. The “multi-resolution data augmentation” aims to improve the accuracy of a specific network. However, the multi-resolution learning in our framework aims to train an adaptive network that can execute in a spectrum ...
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 6, 3, -1, -1 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5, 4, -1, -1 ]
[ "r1gxzZNhjH", "r1geFxmoiB", "HkljUvZior", "S1evBrJ9sr", "iclr_2020_H1x-pANtDB", "iclr_2020_H1x-pANtDB", "rkgDydwMjH", "SygTdtzCKH", "BJg3XvPMoH", "ryg1bSmy5H", "iclr_2020_H1x-pANtDB", "iclr_2020_H1x-pANtDB", "S1gKEglM5H", "iclr_2020_H1x-pANtDB" ]
iclr_2020_HkeQ6ANYDB
Blending Diverse Physical Priors with Neural Networks
Rethinking physics in the era of deep learning is an increasingly important topic. This topic is special because, in addition to data, one can leverage a vast library of physical prior models (e.g. kinematics, fluid flow, etc) to perform more robust inference. The nascent sub-field of physics-based learning (PBL) studies this problem of blending neural networks with physical priors. While previous PBL algorithms have been applied successfully to specific tasks, it is hard to generalize existing PBL methods to a wide range of physics-based problems. Such generalization would require an architecture that can adapt to variations in the correctness of the physics, or in the quality of training data. No such architecture exists. In this paper, we aim to generalize PBL, by making a first attempt to bring neural architecture search (NAS) to the realm of PBL. We introduce a new method known as physics-based neural architecture search (PhysicsNAS) that is a top-performer across a diverse range of quality in the physical model and the dataset.
reject
This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work, and I urge the authors to continue to develop refinements and extensions.
train
[ "Skg7O9Ynjr", "r1li0KL2iS", "B1gzpez3oB", "SJxhxgf2jS", "rJePcj-3oB", "rklWIc-3sB", "H1lojMTjKH", "HkxVmBAy9S", "rklrUUyI5B" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. \n\nComment 2: \nA2) Thank you for your understanding. We are interested in more practical tasks such as physics-based imaging problems. We will extend PhysicsNAS to these tasks in future work. \n\nComment 4: \nA4) The evaluation is conducted on tossing task with 32 samples under low ...
[ -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 1, 1, 1 ]
[ "r1li0KL2iS", "B1gzpez3oB", "H1lojMTjKH", "HkxVmBAy9S", "rklrUUyI5B", "iclr_2020_HkeQ6ANYDB", "iclr_2020_HkeQ6ANYDB", "iclr_2020_HkeQ6ANYDB", "iclr_2020_HkeQ6ANYDB" ]
iclr_2020_S1gV6AVKwB
Cross Domain Imitation Learning
We study the question of how to imitate tasks across domains with discrepancies such as embodiment and viewpoint mismatch. Many prior works require paired, aligned demonstrations and an additional RL procedure for the task. However, paired, aligned demonstrations are seldom obtainable and RL procedures are expensive. In this work, we formalize the Cross Domain Imitation Learning (CDIL) problem, which encompasses imitation learning in the presence of viewpoint and embodiment mismatch. Informally, CDIL is the process of learning how to perform a task optimally, given demonstrations of the task in a distinct domain. We propose a two step approach to CDIL: alignment followed by adaptation. In the alignment step we execute a novel unsupervised MDP alignment algorithm, Generative Adversarial MDP Alignment (GAMA), to learn state and action correspondences from unpaired, unaligned demonstrations. In the adaptation step we leverage the correspondences to zero-shot imitate tasks across domains. To describe when CDIL is feasible via alignment and adaptation, we introduce a theory of MDP alignability. We experimentally evaluate GAMA against baselines in both embodiment and viewpoint mismatch scenarios where aligned demonstrations don’t exist and show the effectiveness of our approach.
reject
The authors propose a novel approach for imitation learning in settings where demonstrations are unaligned with the task (e.g., differ in terms of state and action space). The proposed approach consists of alignment and adaptation steps and theoretical insights are provided on whether given MDPs can be aligned. Reviewers were positive about the ideas presented in the paper, and several requests for clarification were well addressed by the authors during the rebuttal phase. Key evaluation issues remained unresolved. In particular, it was unclear to what degree performance differences were purely caused by issues in alignment, and reviewers did not see sufficient evidence to support claims about performance on the full cross domain learning setting.
train
[ "HJeDkFvGiH", "BklC9xwfir", "rJer0YDfoH", "rJgDBowGjH", "ryed3qvMoH", "H1g_go22tH", "HJgFzgcaYS", "Hyx8KLvE5S", "HkglyI0oDH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "\nThank you for your constructive feedback. Below we respond to your questions. We've uploaded a revised draft that addresses all of your suggestions for improvement. \n\n\nQ. In the discussion after Def.4, given an alignment task set D_{x,y}, how do we know whether a common reduction exists? \n\nA: Applying Theor...
[ -1, -1, -1, -1, -1, 3, 6, 8, -1 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, -1 ]
[ "HJgFzgcaYS", "Hyx8KLvE5S", "H1g_go22tH", "H1g_go22tH", "H1g_go22tH", "iclr_2020_S1gV6AVKwB", "iclr_2020_S1gV6AVKwB", "iclr_2020_S1gV6AVKwB", "iclr_2020_S1gV6AVKwB" ]
iclr_2020_r1gIa0NtDH
MelNet: A Generative Model for Audio in the Frequency Domain
Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales which time-domain models have yet to achieve. We demonstrate that our model captures longer-range dependencies than time-domain models such as WaveNet across a diverse set of unconditional generation tasks, including single-speaker speech generation, multi-speaker speech generation, and music generation.
reject
The paper proposed an autoregressive model with a multiscale generative representation of spectrograms to better model the long-term dependencies in audio signals. The techniques developed in the paper are novel and interesting. The main concern is the validation of the method. The paper presented some human listening studies to compare long-term structure on unconditional samples, which, as also mentioned by reviewers, are not particularly useful. Including justification of the usefulness of the learned representation for any downstream task would make the work much more solid.
train
[ "S1gKu8yCFB", "Syx1z6__jH", "HJxDu3u_jr", "H1ejPouOoS", "rylkOqJptr", "Skxk8C7CFH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors present a new generative model for audio in the frequency domain to capture better the global structure of the signal. For this, they use an autoregressive procedure combined with a multiscale generative model for two-dimensional time-frequency visual representation (STFT spectrogram). T...
[ 6, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, 5, 5 ]
[ "iclr_2020_r1gIa0NtDH", "Skxk8C7CFH", "S1gKu8yCFB", "rylkOqJptr", "iclr_2020_r1gIa0NtDH", "iclr_2020_r1gIa0NtDH" ]
iclr_2020_BJlITC4KDB
Multi-Sample Dropout for Accelerated Training and Better Generalization
Dropout is a simple but efficient regularization technique for achieving better generalization of deep neural networks (DNNs); hence it is widely used in tasks based on DNNs. During training, dropout randomly discards a portion of the neurons to avoid overfitting. This paper presents an enhanced dropout technique, which we call multi-sample dropout, for both accelerating training and improving generalization over the original dropout. The original dropout creates a randomly selected subset (called a dropout sample) from the input in each training iteration while the multi-sample dropout creates multiple dropout samples. The loss is calculated for each sample, and then the sample losses are averaged to obtain the final loss. This technique can be easily implemented without implementing a new operator by duplicating a part of the network after the dropout layer while sharing the weights among the duplicated fully connected layers. Experimental results showed that multi-sample dropout significantly accelerates training by reducing the number of iterations until convergence for image classification tasks using the ImageNet, CIFAR-10, CIFAR-100, and SVHN datasets. Multi-sample dropout does not significantly increase computation cost per iteration for deep convolutional networks because most of the computation time is consumed in the convolution layers before the dropout layer, which are not duplicated. Experiments also showed that networks trained using multi-sample dropout achieved lower error rates and losses for both the training set and validation set.
reject
This paper proposes a multi-sample variant of dropout, claiming that it accelerates training and improves generalization. CIFAR10/100, ImageNet and SVHN results are presented, along with a few ablations. Reviewers were in agreement that the novelty of the contribution appears to be very limited, the evidence for the claims is not strong, and that the applicability of the method for achieving efficiency gains is limited to architectures that only apply dropout very late in processing, precluding applicability to models that employ dropout throughout. Importantly, comparisons to Fast Dropout (Wang 2013) seem highly relevant and are missing. While the reviewers acknowledged some of the criticisms, virtually no arguments were offered to rebut them and no updates were made to address them. I therefore recommend rejection.
train
[ "rJxuXvkhtH", "BkgstL95sS", "B1eMiNFqir", "Syephzt5sB", "Sygq8TiaKB", "Bkee3UlGcB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper propose an ensemble of dropout: it applies multiple copies of a neural net with different dropout configurations (i.e., dropout masks) to the same mini-batch, and the training loss is computed as the sum of losses incurred on the multiple copies. They claim that such ensembling can improve training perf...
[ 1, -1, -1, -1, 3, 1 ]
[ 4, -1, -1, -1, 5, 4 ]
[ "iclr_2020_BJlITC4KDB", "rJxuXvkhtH", "Sygq8TiaKB", "Bkee3UlGcB", "iclr_2020_BJlITC4KDB", "iclr_2020_BJlITC4KDB" ]
iclr_2020_rJgDT04twH
Deep Reinforcement Learning with Implicit Human Feedback
We consider the following central question in the field of Deep Reinforcement Learning (DRL): How can we use implicit human feedback to accelerate and optimize the training of a DRL algorithm? State-of-the-art methods rely on human feedback being provided explicitly, requiring the active participation of humans (e.g., expert labeling, demonstrations, etc.). In this work, we investigate an alternative paradigm, where non-expert humans are silently observing (and assessing) the agent interacting with the environment. The human's intrinsic reactions to the agent's behavior are sensed as implicit feedback by placing electrodes on the human scalp and monitoring what are known as event-related electric potentials. The implicit feedback is then used to augment the agent's learning in the RL tasks. We develop a system to obtain and accurately decode the implicit human feedback (specifically error-related event potentials) for state-action pairs in an Atari-type environment. As a baseline contribution, we demonstrate the feasibility of capturing error-potentials of a human observer watching an agent learning to play several different Atari-games using an electroencephalogram (EEG) cap, and then decoding the signals appropriately and using them as an auxiliary reward function to a DRL algorithm with the intent of accelerating its learning of the game. Building atop the baseline, we then make the following novel contributions in our work: (i) We argue that the definition of error-potentials is generalizable across different environments; specifically, we show that error-potentials of an observer can be learned for a specific game, and the definition used as-is for another game without requiring re-learning of the error-potentials. (ii) We propose two different frameworks to combine recent advances in DRL into the error-potential based feedback system in a sample-efficient manner, allowing humans to provide implicit feedback while training in the loop, or prior to the training of the RL agent. (iii) Finally, we scale the implicit human feedback (via ErrP) based RL to reasonably complex environments (games) and demonstrate the significance of our approach through synthetic and real user experiments.
reject
The paper explores the idea of using implicit human feedback, gathered via EEG, to assist deep reinforcement learning. This is an interesting and at least somewhat novel idea. However, it is not clear that there is a good argument for why it should work, or at least work well. The experiments carried out are more exploratory than anything else, and it is not clear that much can be learned from the results. It is a proof of concept more than anything else, of the type that would work well for a workshop paper. More systematic empirical work would be needed for a good conference paper. The authors did not provide a rebuttal to reviewers, but rather agreed with their comments and that the paper needs more work. In light of this, the paper should be rejected, and we wish the authors best of luck with a new version of the paper.
train
[ "B1gR7wt3sH", "Bkl3cLLIFS", "Syg9sWlRFH", "Hyx8T-E-qS" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their valuable feedback. The concerns pointed out by the reviewers are helpful to strengthen our contributions. Since the major issues with the current version demand conducting more experiments, we have decided not to provide a rebuttal response and would be working to incorporate the f...
[ -1, 3, 1, 3 ]
[ -1, 4, 4, 3 ]
[ "iclr_2020_rJgDT04twH", "iclr_2020_rJgDT04twH", "iclr_2020_rJgDT04twH", "iclr_2020_rJgDT04twH" ]
iclr_2020_HylwpREtDr
Active Learning Graph Neural Networks via Node Feature Propagation
Graph Neural Networks (GNNs) for prediction tasks like node classification or edge prediction have received increasing attention in recent machine learning from graphically structured data. However, a large quantity of labeled graphs is difficult to obtain, which significantly limits the true success of GNNs. Although active learning has been widely studied for addressing label-sparse issues with other data types like text, images, etc., how to make it effective over graphs is an open question for research. In this paper, we present an investigation of active learning with GNNs for node classification tasks. Specifically, we propose a new method, which uses node feature propagation followed by K-Medoids clustering of the nodes for instance selection in active learning. With a theoretical bound analysis we justify the design choice of our approach. In our experiments on four benchmark datasets, the proposed method outperforms other representative baseline methods consistently and significantly.
reject
The authors propose a method of selecting nodes to label in a graph neural network setting to reduce the loss as efficiently as possible. Building atop Sener & Savarese (2017), the authors propose an alternative distance metric and clustering algorithm. In comparison to the just-mentioned work, they show that their upper bound is smaller than the previous art's upper bound. While one cannot conclude from this that their algorithm is better, at least empirically the method appears to have an advantage over the state of the art. However, reviewers were concerned about the assumptions necessary to prove the theorem, despite the modifications made by the authors after the initial round. The work proposes a simple estimator and shows promising results, but reviewers felt that improvements like reducing the number of assumptions and potentially a lower bound could greatly strengthen the paper.
train
[ "BkxoJF8LjS", "r1eNSF8LoH", "Bkxdy5LLjH", "rJxaKYIUjB", "ryxdKsggir", "Bkxt7UNTYH", "HkgnPPN0FB", "BJeDgUOoKH", "H1xdeRswFB", "SJgkKgUwtB", "H1e--mPYKS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public" ]
[ "We thank the reviewer for the comments. \n\n[for assumptions]\nWe would like to emphasize that our assumptions follow from the common settings in deep learning/active learning theory which is the general way of real data approximation. \n\nFor instance, both assumptions 2 and 3 are used in the paper by Sener & Sav...
[ -1, -1, -1, -1, 3, 1, 8, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 3, -1, -1, -1, -1 ]
[ "ryxdKsggir", "HkgnPPN0FB", "iclr_2020_HylwpREtDr", "Bkxt7UNTYH", "iclr_2020_HylwpREtDr", "iclr_2020_HylwpREtDr", "iclr_2020_HylwpREtDr", "H1e--mPYKS", "SJgkKgUwtB", "iclr_2020_HylwpREtDr", "H1xdeRswFB" ]
iclr_2020_B1e5TA4FPr
Pareto Optimality in No-Harm Fairness
Common fairness definitions in machine learning focus on balancing various notions of disparity and utility. In this work we study fairness in the context of risk disparity among sub-populations. We introduce the framework of Pareto-optimal fairness, where the goal of reducing risk disparity gaps is secondary only to the principle of not doing unnecessary harm, a concept that is especially applicable to high-stakes domains such as healthcare. We provide analysis and methodology to obtain maximally-fair no-harm classifiers on finite datasets. We argue that even in domains where fairness at cost is required, no-harm fairness can prove to be the optimal first step. This same methodology can also be applied to any unbalanced classification task, where we want to dynamically equalize the misclassification risks across outcomes without degrading overall performance any more than strictly necessary. We test the proposed methodology on real case-studies of predicting income, ICU patient mortality, classifying skin lesions from images, and assessing credit risk, demonstrating how the proposed framework compares favorably to other traditional approaches.
reject
This manuscript outlines procedures to address fairness as measured by disparity in risk across groups. The manuscript is primarily motivated by methods that can achieve "no-harm" fairness, i.e., achieving fairness without increasing the risk in subgroups. The reviewers and AC agree that the problem studied is timely and interesting. However, in reviews and discussion, the reviewers noted issues with clarity of the presentation, and sufficient justification of the results. The consensus was that the manuscript in its current state is borderline, and would have to be significantly improved in terms of clarity of the discussion, and possibly improved methods that result in more convincing results.
train
[ "HJloynasor", "ryg2aj6jir", "Bkxi5jTjor", "rke5tXpptH", "SJxXAWmVqS", "rygZv8oIqS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their comments and analysis. We are glad he/she agrees that this is a worthwhile problem to look into, and found our formulation interesting. We improved the overall presentation considering your comments and suggestions.\n\nWe apologize for the lack of clarity in some of the definitions ...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 4, 3, 4 ]
[ "rke5tXpptH", "SJxXAWmVqS", "rygZv8oIqS", "iclr_2020_B1e5TA4FPr", "iclr_2020_B1e5TA4FPr", "iclr_2020_B1e5TA4FPr" ]
iclr_2020_Hyg5TRNtDH
Unsupervised Temperature Scaling: Robust Post-processing Calibration for Domain Shift
Uncertainty estimation is critical in real-world decision-making applications, especially when distributional shift between the training and test data is prevalent. Many calibration methods in the literature have been proposed to improve the predictive uncertainty of DNNs, which are generally not well-calibrated. However, none of them is specifically designed to work properly under domain-shift conditions. In this paper, we propose Unsupervised Temperature Scaling (UTS) as a calibration method robust to domain shift. It exploits test samples to adjust the uncertainty prediction of deep models towards the test distribution. UTS utilizes a novel loss function, weighted NLL, that allows unsupervised calibration. We evaluate UTS on a wide range of model-dataset pairs, which shows the possibility of calibration without labels and demonstrates the robustness of UTS compared to other methods (e.g., TS, MC-dropout, SVI, ensembles) in shifted domains.
reject
The paper proposes a method called unsupervised temperature scaling (UTS) for improving calibration under domain shift. The reviewers agree that this is an interesting research question, but raised concerns about clarity of the text, depth of the empirical evaluation, and validity of some of the assumptions. While the author rebuttal addressed some of these concerns, the reviewers felt that the current version of the paper is not ready for publication. I encourage the authors to revise and resubmit to a different venue.
val
[ "HkgcVHZttr", "BygRrcYoiS", "SkxLJDtjjH", "BJg4k8Kjsr", "rJgdGGyooS", "rkxTQp1FsS", "HkxRnnJYjB", "HJg3s9yFjB", "HJxgV5Jtir", "B1lANuJYsr", "S1xqFI1Yor", "S1eu25Nqtr", "rJxwD1Tb5B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I've read the rebuttal and unfortunately, I'd like to keep my score as is. I still think the assumption made in the paper is too limiting for most practical settings. \n\n#########################\n\nThe paper proposes an unsupervised calibration method in a domain adaptation setting. The approach is based on the ...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_Hyg5TRNtDH", "rJgdGGyooS", "rJgdGGyooS", "iclr_2020_Hyg5TRNtDH", "HkxRnnJYjB", "HkxRnnJYjB", "HJg3s9yFjB", "HJxgV5Jtir", "S1eu25Nqtr", "HkgcVHZttr", "rJxwD1Tb5B", "iclr_2020_Hyg5TRNtDH", "iclr_2020_Hyg5TRNtDH" ]
iclr_2020_Skg9aAEKwH
Visual Hide and Seek
We train embodied agents to play Visual Hide and Seek, where a prey must navigate in a simulated environment in order to avoid capture by a predator. We place a variety of obstacles in the environment for the prey to hide behind, and we only give the agents partial observations of their environment from an egocentric perspective. Although we train the model to play this game from scratch without any prior knowledge of its visual world, experiments and visualizations show that a representation of other agents automatically emerges in the learned representation. Furthermore, we quantitatively analyze how agent weaknesses, such as slower speed, affect the learned policy. Our results suggest that, although agent weaknesses make the learning problem more challenging, they also cause useful features to emerge in the representation.
reject
This paper proposes a technique for training embodied agents to play Visual Hide and Seek where a prey must navigate in a simulated environment in order to avoid capture from a predator. The model is trained to play this game from scratch without any prior knowledge of its visual world, and experiments and visualizations show that a representation of other agents automatically emerges in the learned representation. Results suggest that, although agent weaknesses make the learning problem more challenging, they also cause useful features to emerge in the representation. While reviewers found the paper explores an interesting direction, concerns were raised that many claims are unjustified. For example, in the discussion phase a reviewer asked how can one infer "hider learns to first turn away from the seeker then run away" from a single transition frequency? Or, the rebuttal mentions "The agent with visibility reward does not get the chance to learn features of self-visibility because of the limited speed hence the model received samples with significantly less variation of its self-visibility, which makes learning to discriminate self-visibility difficult". What is the justification for this? There could be more details in the paper and I'd also like to know if these findings were reached purely by looking at the histograms or by combining visual analysis with the histograms. I suggest authors address these concerns and provide quantitative results for all of the claims in an improved iteration of this paper.
train
[ "HyxIRfRYoS", "SJgzcGAKiB", "BJllwyXSoH", "rJlil17HsH", "ByxGJRzHjS", "H1lXIpfBsH", "HJezg2rAuB", "rJxTnqahYr", "HJxrcIXG9H" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer 2,\n\nThank you again for your constructive reviews! They have helped us improve the quality and clarity of the paper. Based on your reviews, we clarify the related work section and highlighted our contributions in our work. We also updated the paper to include your suggestions on future work and dis...
[ -1, -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "ByxGJRzHjS", "rJlil17HsH", "iclr_2020_Skg9aAEKwH", "HJezg2rAuB", "rJxTnqahYr", "HJxrcIXG9H", "iclr_2020_Skg9aAEKwH", "iclr_2020_Skg9aAEKwH", "iclr_2020_Skg9aAEKwH" ]
iclr_2020_S1xipR4FPB
Teacher-Student Compression with Generative Adversarial Networks
More accurate machine learning models often demand more computation and memory at test time, making them difficult to deploy on CPU- or memory-constrained devices. Teacher-student compression (TSC), also known as distillation, alleviates this burden by training a less expensive student model to mimic the expensive teacher model while maintaining most of the original accuracy. However, when fresh data is unavailable for the compression task, the teacher's training data is typically reused, leading to suboptimal compression. In this work, we propose to augment the compression dataset with synthetic data from a generative adversarial network (GAN) designed to approximate the training data distribution. Our GAN-assisted TSC (GAN-TSC) significantly improves student accuracy for expensive models such as large random forests and deep neural networks on both tabular and image datasets. Building on these results, we propose a comprehensive metric—the TSC Score—to evaluate the quality of synthetic datasets based on their induced TSC performance. The TSC Score captures both data diversity and class affinity, and we illustrate its benefits over the popular Inception Score in the context of image classification.
reject
This paper uses GAN for data augmentation to improve the performance of knowledge distillation. Reviewers and AC commonly think the paper suffers from limited novelty and insufficient experimental supports/details. Hence, I recommend rejection.
train
[ "BkxtFPgnFS", "S1gog4vAYS", "Sye1MwBJ9S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an approach for improving teacher-student compression by introducing the assistant of GANs. A conditional GAN is trained for generating synthetic data. Then, the generated data combined with training data is used for knowledge distillation. Experiments on large random forests and deep neural ne...
[ 3, 3, 3 ]
[ 5, 3, 3 ]
[ "iclr_2020_S1xipR4FPB", "iclr_2020_S1xipR4FPB", "iclr_2020_S1xipR4FPB" ]
iclr_2020_r1lh6C4FDr
COMBINED FLEXIBLE ACTIVATION FUNCTIONS FOR DEEP NEURAL NETWORKS
Activation in deep neural networks is fundamental to achieving non-linear mappings. Traditional studies mainly focus on finding fixed activations for a particular set of learning tasks or model architectures. Research on flexible activations is quite limited in both design philosophy and application scenarios. In this study, we propose a general combined form of flexible activation functions as well as three principles for choosing flexible activation components. Based on this, we develop two novel flexible activation functions that can be implemented in LSTM cells and auto-encoder layers. Two new regularisation terms based on assumptions as prior knowledge are also proposed. We find that LSTM and auto-encoder models with the proposed flexible activations provide significant improvements on time series forecasting and image compression tasks, while layer-wise regularization can improve the performance of CNN (LeNet-5) models with RPeLu activation in image classification tasks.
reject
Main content: proposes combining flexible activation functions. Discussion: Reviewer 1's main issue is unfamiliarity with the stock dataset and a bad baseline on the CIFAR dataset. Reviewer 2's main issues are the baselines and the writing. Reviewer 3's main issue is that the paper does not compare with NAS. Recommendation: all 3 reviewers vote reject; the paper can be improved with stronger baselines and experiments. I recommend Reject.
train
[ "r1xVKGHAYH", "B1ei1AUosr", "rJgp1aqqsH", "SJgnV5C_sB", "SkgLarcQsH", "Hyx-7H5Xir", "HJg-1E3aFH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "The authors introduce a parameterized activation function to learn activation functions that have sigmoidal shapes that can be used in LSTMs. The authors apply their method to a dataset of forecasting stocks as well as to CIFAR-10. They also propose a method to regularize the activation function parameters.\n\nThe...
[ 1, -1, -1, 3, -1, -1, 3 ]
[ 4, -1, -1, 3, -1, -1, 4 ]
[ "iclr_2020_r1lh6C4FDr", "iclr_2020_r1lh6C4FDr", "SJgnV5C_sB", "iclr_2020_r1lh6C4FDr", "HJg-1E3aFH", "r1xVKGHAYH", "iclr_2020_r1lh6C4FDr" ]
iclr_2020_HJlTpCEKvS
Which Tasks Should Be Learned Together in Multi-task Learning?
Many computer vision applications require solving multiple tasks in real-time. A neural network can be trained to solve multiple tasks simultaneously using 'multi-task learning'. This saves computation at inference time as only a single network needs to be evaluated. Unfortunately, this often leads to inferior overall performance as task objectives compete, which consequently poses the question: which tasks should and should not be learned together in one network when employing multi-task learning? We systematically study task cooperation and competition and propose a framework for assigning tasks to a few neural networks such that cooperating tasks are computed by the same neural network, while competing tasks are computed by different networks. Our framework offers a time-accuracy trade-off and can produce better accuracy using less inference time than not only a single large multi-task neural network but also many single-task networks.
reject
An approach to multi-task learning is presented, based on the idea of assigning tasks through the concepts of cooperation and competition. The main idea is well motivated and explained well. The experiments demonstrate that the method is promising. However, there are a few concerns regarding fundamental aspects, such as: how are the decisions affected by the number of parameters? Could ad-hoc algorithms with a human in the loop provide the same benefit when the task set is small? More importantly, identifying task groups for multi-task learning is an idea presented in prior work, e.g. [1,2,3]. This important body of prior work is not discussed at all in this paper. [1] Han and Zhang. "Learning multi-level task groups in multi-task learning" [2] Bonilla et al. "Multi-task Gaussian process prediction" [3] Zhang and Yang. "A Survey on Multi-Task Learning"
train
[ "HJgJ83u0KS", "S1lIQQ92jB", "BkeoAM9hor", "S1gehxK3sB", "r1e4KROnjH", "HkxHDaSaKS", "ByeKMG-W5H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on how to partition a bunch of tasks in several groups and then it use multi-task learning to improve the performance. The paper makes an observation that multi-task relationships are not entirely correlated to transfer relationships and proposes a computational framework to optimize the assign...
[ 6, -1, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_HJlTpCEKvS", "HkxHDaSaKS", "ByeKMG-W5H", "HJgJ83u0KS", "iclr_2020_HJlTpCEKvS", "iclr_2020_HJlTpCEKvS", "iclr_2020_HJlTpCEKvS" ]
iclr_2020_r1xapAEKwS
SDGM: Sparse Bayesian Classifier Based on a Discriminative Gaussian Mixture Model
In probabilistic classification, a discriminative model based on Gaussian mixture exhibits flexible fitting capability. Nevertheless, it is difficult to determine the number of components. We propose a sparse classifier based on a discriminative Gaussian mixture model (GMM), which is named sparse discriminative Gaussian mixture (SDGM). In the SDGM, a GMM-based discriminative model is trained by sparse Bayesian learning. This learning algorithm improves the generalization capability by obtaining a sparse solution and automatically determines the number of components by removing redundant components. The SDGM can be embedded into neural networks (NNs) such as convolutional NNs and can be trained in an end-to-end manner. Experimental results indicated that the proposed method prevented overfitting by obtaining sparsity. Furthermore, we demonstrated that the proposed method outperformed a fully connected layer with the softmax function in certain cases when it was used as the last layer of a deep NN.
reject
This paper presents a method for merging a discriminative GMM with an ARD sparsity-promoting prior. This is accomplished by nesting the ARD prior update within a larger EM-based routine for handling the GMM, allowing the model to automatically remove redundant components and improve generalization. The resulting algorithm was deployed on standard benchmark data sets and compared against existing baselines such as logistic regression, RVMs, and SVMs. Overall, one potential weakness of this paper, which is admittedly somewhat subjective, is that the exhibited novelty of the proposed approach is modest. Indeed ARD approaches are now widely used in various capacities, and even if some hurdles must be overcome to implement the specific marriage with a discriminative GMM as reported here, at least one reviewer did not feel that this was sufficient to warrant publication. Other concerns related to the experiments and comparison with existing work. For example, one reviewer mentioned comparisons with Panousis et al., "Nonparametric Bayesian Deep Networks with Local Competition," ICML 2019 and requested a discussion of differences. However, the rebuttal merely deferred this consideration to future work and provided no feedback regarding similarities or differences. In the end, all reviewers recommended rejecting this paper and I did not find any sufficient reason to overrule this consensus.
val
[ "S1xVuCQhjB", "BygiWCm2iB", "BkxG02QhoH", "HJepvMo__r", "SygQvluBFS", "HyxLpy9RKS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you so much for your constructive and positive comments. As pointed out, the relationship between our method and nonparametric Bayesian deep learning is very interesting. Unfortunately, due to the short rebuttal period, we could not include comparisons in the revised manuscript. We will investigate this in f...
[ -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, 5, 4, 5 ]
[ "HJepvMo__r", "SygQvluBFS", "HyxLpy9RKS", "iclr_2020_r1xapAEKwS", "iclr_2020_r1xapAEKwS", "iclr_2020_r1xapAEKwS" ]
iclr_2020_H1eJAANtvr
CGT: Clustered Graph Transformer for Urban Spatio-temporal Prediction
Deep learning based approaches have been widely used in various urban spatio-temporal forecasting problems, but most of them fail to account for the unsmoothness issue of urban data in their architecture design, which significantly deteriorates their prediction performance. The aim of this paper is to develop a novel clustered graph transformer framework that integrates both a graph attention network and a transformer under an encoder-decoder architecture to address this unsmoothness issue. Specifically, we propose two novel structural components to refine the architectures of existing deep learning models. In the spatial domain, we propose a gradient-based clustering method to distribute different feature extractors to regions in different contexts. In the temporal domain, we propose to use multi-view position encoding to address the periodicity and closeness of urban time series data. Experiments on real datasets obtained from a ride-hailing business show that our method can achieve a 10\%-25\% improvement over many state-of-the-art baselines.
reject
This paper proposes an approach to handle the problem of unsmoothness while modeling spatio-temporal urban data. However all reviewers have pointed major issues with the presentation of the work, and whether the method's complexity is justified.
val
[ "SkxUbnt3jB", "H1eXncZ3iH", "Byg_Zt-hoS", "rygrGibhiB", "H1xO1c-hoS", "rJggVEWq5S", "HJxr_thUKH", "r1g2jGL6FB", "S1xBWDqB5H" ]
[ "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your responses. The updated version of the paper is indeed better in terms of clarity.\n\nGiven that the authors' response came in relatively late, I'm not sure how much time the authors will have to address my \"further comments\". But I think they could be useful anyway, should the paper be either ...
[ -1, -1, -1, -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 1 ]
[ "H1eXncZ3iH", "HJxr_thUKH", "rJggVEWq5S", "r1g2jGL6FB", "S1xBWDqB5H", "iclr_2020_H1eJAANtvr", "iclr_2020_H1eJAANtvr", "iclr_2020_H1eJAANtvr", "iclr_2020_H1eJAANtvr" ]
iclr_2020_SJxyCRVKvB
Granger Causal Structure Reconstruction from Heterogeneous Multivariate Time Series
Granger causal structure reconstruction is an emerging topic that can uncover the causal relationships behind multivariate time series data. In many real-world systems, it is common to encounter a large amount of multivariate time series data collected from heterogeneous individuals that share commonalities; however, there are ongoing concerns regarding applicability in such large-scale complex scenarios, presenting both challenges and opportunities for Granger causal reconstruction. To bridge this gap, we propose a Granger cAusal StructurE Reconstruction (GASER) framework for inductive Granger causality learning and common causal structure detection on heterogeneous multivariate time series. In particular, we address the problem through a novel attention mechanism, called prototypical Granger causal attention. Extensive experiments, as well as an online A/B test on an e-commerce advertising platform, demonstrate the superior performance of GASER.
reject
This paper proposes a solution for learning a Granger temporal-causal network for multivariate time series by adding an attention mechanism, named prototypical Granger causal attention, to an LSTM. The work aims to address an important problem, and the proposed solution seems effective empirically. However, two major issues have not been fully addressed in the current version: (1) the connection between Granger causality and the attention mechanism is not fully justified; (2) the complex design undermines the whole concept of Granger causality (whose popularity is due to its simplicity). The paper would be a strong publication in the future if these two issues can be addressed in a satisfactory way.
train
[ "Skx4mNVt9B", "H1e0Fk7A5r", "ryl3LIEnsB", "HkeQjGP5jS", "Bke2gf-soH", "B1xyQbP5sS", "Sygod8Oeqr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a new way of finding the Granger temporal-causal network based on attention mechanism on the predictions obtained by individual time series. It describes a surprisingly complex procedure for computing the attention vector based on combining Granger-inspired attentions with attentions obtained d...
[ 3, 8, -1, -1, -1, -1, 6 ]
[ 5, 4, -1, -1, -1, -1, 4 ]
[ "iclr_2020_SJxyCRVKvB", "iclr_2020_SJxyCRVKvB", "H1e0Fk7A5r", "B1xyQbP5sS", "Sygod8Oeqr", "Skx4mNVt9B", "iclr_2020_SJxyCRVKvB" ]
iclr_2020_r1g1CAEKDH
Wyner VAE: A Variational Autoencoder with Succinct Common Representation Learning
A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks. The proposed Wyner VAE model is based on two information theoretic problems---distributed simulation and channel synthesis---in which Wyner's common information arises as the fundamental limit of the succinctness of the common representation. The Wyner VAE decomposes a pair of correlated data variables into their common representation (e.g., a shared concept) and local representations that capture the remaining randomness (e.g., texture and style) in respective data variables by imposing the mutual information between the data variables and the common representation as a regularization term. The utility of the proposed approach is demonstrated through experiments for joint and conditional generation with and without style control using synthetic data and real images. Experimental results show that learning a succinct common representation achieves better generative performance and that the proposed model outperforms existing VAE variants and the variational information bottleneck method.
reject
This paper adds a new model to the literature on representation learning from correlated variables with some common and some "private" dimensions, and takes a variational approach based on Wyner's common information. The literature in this area includes models where both of the correlated variables are assumed to be available as input at all times, as well as models where only one of the two may be available; the proposed approach falls into the first category. Pros: The reviewers generally agree, as do I, that the motivation is very interesting and the resulting model is reasonable and produces solid results. Cons: The model is somewhat complex and the paper is lacking a careful ablation study on the components. In addition, the results are not a clear "win" for the proposed model. The authors have started to do an ablation study, and I think eventually an interesting story is likely to come out of that. But at the moment the paper feels a bit too preliminary/inconclusive for publication.
test
[ "B1xH6ki2oH", "HJgE-ZihiH", "HJgOQKRBsB", "BkxH10q2jr", "SygZpaqhsB", "HJg2uJ5njH", "rJldNiTijr", "B1xl7jxjiS", "BJxZXkOqsH", "HJl3Il7ujB", "SJebnlm_sS", "r1eOMWQ_jH", "S1exQ0G_sS", "HylZ_pf_sB", "SJeG-nGuiB", "Skllo4L6tH", "HJxfbuwRYH", "rJxG8duDqr", "HJlVHKMaqS" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "* On additional experiments on explaining the discrepancy between MNIST--MNIST add-1 and synthetic dataset \nFirst, we think the synthetic dataset shows some improvement with $\\lambda>0$, yet the amount is marginal. However, the effect does not clearly appear in the MNIST quadrant prediction task, and the reasoni...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "rJxG8duDqr", "rJldNiTijr", "rJxG8duDqr", "rJxG8duDqr", "iclr_2020_r1g1CAEKDH", "B1xl7jxjiS", "HJgOQKRBsB", "SJeG-nGuiB", "iclr_2020_r1g1CAEKDH", "rJxG8duDqr", "rJxG8duDqr", "rJxG8duDqr", "Skllo4L6tH", "HJxfbuwRYH", "HJlVHKMaqS", "iclr_2020_r1g1CAEKDH", "iclr_2020_r1g1CAEKDH", "icl...
iclr_2020_H1leCRNYvS
Hierarchical Bayes Autoencoders
Autoencoders are powerful generative models for complex data, such as images. However, standard models like the variational autoencoder (VAE) typically have unimodal Gaussian decoders, which cannot effectively represent the possible semantic variations in the space of images. To address this problem, we present a new probabilistic generative model called the \emph{Hierarchical Bayes Autoencoder (HBAE)}. The HBAE contains a multimodal decoder in the form of an energy-based model (EBM), instead of the commonly adopted unimodal Gaussian distribution. The HBAE can be trained using variational inference, similar to a VAE, to recover latent codes conditioned on inputs. For the decoder, we use an adversarial approximation where a conditional generator is trained to match the EBM distribution. During inference time, the HBAE consists of two sampling steps: first a latent code for the input is sampled, and then this code is passed to the conditional generator to output a stochastic reconstruction. The HBAE is also capable of modeling sets, by inferring a latent code for a set of examples, and sampling set members through the multimodal decoder. In both the single image and set cases, the decoder generates plausible variations consistent with the input data, and generates realistic unconditional samples. To the best of our knowledge, Set-HBAE is the first model that is able to generate complex image sets.
reject
This paper introduces a probabilistic generative model which mixes a variational autoencoder (VAE) with an energy-based model (EBM). As mentioned by all reviewers, (i) the motivation of the model is not well justified, and (ii) the experimental results are not convincing enough. In addition, (iii) handling sets is not specific to the proposed approach, and thus claims regarding sets should be revised.
train
[ "H8Xg7Ttb9aS", "HyxnX7kusr", "S1euMM1_oB", "SylieMJdjS", "HklwgvFXjB", "H1exIIKQsr", "BygOn4QjuH", "rJlN9wthFH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I don't think this paper should be accepted. In my opinion, the mix of EBM and VAE is not really compelling; and it is not clear at all to me that one gets much from the \"V\" in this setting. Furthermore, the experimental results are not great either qualitatively (by the standards of generative-models-of-ima...
[ 1, -1, -1, -1, -1, -1, 1, 3 ]
[ 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_H1leCRNYvS", "iclr_2020_H1leCRNYvS", "SylieMJdjS", "rJlN9wthFH", "H1exIIKQsr", "BygOn4QjuH", "iclr_2020_H1leCRNYvS", "iclr_2020_H1leCRNYvS" ]
iclr_2020_BygZARVFDH
Compositional Visual Generation with Energy Based Models
Humans are able to both learn quickly and rapidly adapt their knowledge. One major component is the ability to incrementally combine many simple concepts to accelerate the learning process. We show that energy-based models are a promising class of models for exhibiting these properties by directly combining probability distributions. This allows us to combine an arbitrary number of different distributions in a globally coherent manner. We show this compositionality property allows us to define three basic operators, logical conjunction, disjunction, and negation, on different concepts to generate plausible naturalistic images. Furthermore, by applying these abilities, we show that we are able to extrapolate concept combinations, continually combine previously learned concepts, and infer concept properties in a compositional manner.
reject
This submission proposes an image generation technique for composing concepts by combining their associated distributions. Strengths: - The approach is interesting and novel. Weaknesses: - Several reviewers were not convinced about the correctness of the formulations for negation and disjunction. - The experimental validation of the disjunction and negation approaches is insufficient. - The paper clarity and exposition could be improved. The authors addressed this in the discussion but concerns remain. Given the weaknesses, AC shares R3's recommendation to reject.
train
[ "rJlCDEff9B", "B1xh7lI6FS", "rkxmkzO3jr", "r1lWy3B2iS", "ryxVVT98iB", "B1lCbTqIjB", "HyedBac8sr", "SklK-Wj9sr", "Hyl8T7wcoB", "rJgCOtIcjB", "Hkg5a8Lcor", "rklcJ65UoB", "S1gYB0cAKH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer" ]
[ "This paper proposes to combine energy functions to realize compositionality. This is interesting, and different from previous methods, which use either an explicit vector of factors that is input to a generator function, or object slots that are blended to form an image.\nSpecifically, three operators (logical con...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_BygZARVFDH", "iclr_2020_BygZARVFDH", "r1lWy3B2iS", "SklK-Wj9sr", "S1gYB0cAKH", "rJlCDEff9B", "B1xh7lI6FS", "Hyl8T7wcoB", "rJgCOtIcjB", "Hkg5a8Lcor", "ryxVVT98iB", "iclr_2020_BygZARVFDH", "iclr_2020_BygZARVFDH" ]
iclr_2020_BkxGAREYwB
Deep Expectation-Maximization in Hidden Markov Models via Simultaneous Perturbation Stochastic Approximation
We propose a novel method to estimate the parameters of a collection of Hidden Markov Models (HMM), each of which corresponds to a set of known features. The observation sequence of an individual HMM is noisy and/or insufficient, making parameter estimation solely based on its corresponding observation sequence a challenging problem. The key idea is to combine the classical Expectation-Maximization (EM) algorithm with a neural network, while these two are jointly trained in an end-to-end fashion, mapping the HMM features to its parameters and effectively fusing the information across different HMMs. In order to address the numerical difficulty in computing the gradient of the EM iteration, simultaneous perturbation stochastic approximation (SPSA) is employed to approximate the gradient. We also provide a rigorous proof that the approximated gradient due to SPSA converges to the true gradient almost surely. The efficacy of the proposed method is demonstrated on synthetic data as well as a real-world e-Commerce dataset.
reject
The authors propose to use numerical differentiation to approximate the Jacobian while estimating the parameters of a collection of Hidden Markov Models (HMMs). Two reviewers provided detailed and constructive comments, and both unanimously rated the paper a weak rejection. Reviewer #1 likes the general idea of the work and considers the contribution to be sound. However, he is concerned about the reproducibility of the work due to the niche database from e-commerce applications. Reviewer #2 is concerned about the poor presentation, especially in Section 3. The authors responded to the reviewers' concerns, but the ratings did not change. The ACs concur with these concerns, and the paper cannot be accepted in its current state.
train
[ "S1gAEFxNsr", "Byxqsd14jS", "HJecXUON9B", "SyerVLaBqr" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your time in reviewing this submission. I would like to further clarify the points you raised.\n\n\"parenthesis bug in b_j(... in Eq.4\n in Eq. 5, index i appears both in numerator (as regular index) and denominator (as sum index)\"\nThese are indeed typos, we have updated the draft.\n\n\"what is \\P...
[ -1, -1, 3, 3 ]
[ -1, -1, 1, 3 ]
[ "HJecXUON9B", "SyerVLaBqr", "iclr_2020_BkxGAREYwB", "iclr_2020_BkxGAREYwB" ]
iclr_2020_S1xGCAVKvr
LEARNING TO LEARN WITH BETTER CONVERGENCE
We consider the learning to learn problem, where the goal is to leverage deep learning models to automatically learn (iterative) optimization algorithms for training machine learning models. A natural way to tackle this problem is to replace the human-designed optimizer by an LSTM network and train the parameters on some simple optimization problems (Andrychowicz et al., 2016). Despite their success compared to traditional optimizers such as SGD on a short horizon, these learnt (meta-) optimizers suffer from two key deficiencies: they fail to converge (or can even diverge) on a longer horizon (e.g., 10000 steps). They also often fail to generalize to new tasks. To address the convergence problem, we rethink the architecture design of the meta-optimizer and develop an embarrassingly simple, yet powerful form of meta-optimizer: a coordinate-wise RNN model. We provide insights into the problems with the previous designs of each component and re-design our SimpleOptimizer to resolve those issues. Furthermore, we propose a new mechanism to allow information sharing between coordinates, which enables the meta-optimizer to exploit second-order information with negligible overhead. With these designs, our proposed SimpleOptimizer outperforms previous meta-optimizers and can successfully converge to optimal solutions in the long run. Furthermore, our empirical results show that these benefits can be obtained with much smaller models compared to the previous ones.
reject
This paper proposes an improved (over Andrychowicz et al) meta-optimizer that tries to to learn better strategies for training deep machine learning models. The paper was reviewed by three experts, two of whom recommend Weak Reject and one who recommends Reject. The reviewers identify a number of significant concerns, including degree of novelty and contribution, connections to previous work, completeness of experiments, and comparisons to baselines. In light of these reviews and since the authors have unfortunately not provided a response to them, we cannot recommend accepting the paper.
test
[ "HyesVOJatr", "SJefgrxCKr", "Byg2F33lcH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThis paper presents several improvements over the existing learning to learn models including Andrychowicz et al. (2016) and Lv et al. (2017). Specifically, this paper analyzes the issues in the original learning to learn paradigm (L2L), including instability during training and bias term issues in the RNN. It p...
[ 3, 3, 1 ]
[ 3, 4, 4 ]
[ "iclr_2020_S1xGCAVKvr", "iclr_2020_S1xGCAVKvr", "iclr_2020_S1xGCAVKvr" ]
iclr_2020_rygfC0VKPS
Improved Modeling of Complex Systems Using Hybrid Physics/Machine Learning/Stochastic Models
Combining domain knowledge models with neural models has been challenging. End-to-end trained neural models often perform better (lower Mean Square Error) than domain knowledge models or domain/neural combinations, and the combination is inefficient to train. In this paper, we demonstrate that by composing domain models with machine learning models, by using extrapolative testing sets, and invoking decorrelation objective functions, we create models which can predict more complex systems. The models are interpretable, extrapolative, data-efficient, and capture predictable but complex non-stochastic behavior such as unmodeled degrees of freedom and systemic measurement noise. We apply this improved modeling paradigm to several simulated systems and an actual physical system in the context of system identification. Several ways of composing domain models with neural models are examined for time series, boosting, bagging, and auto-encoding on various systems of varying complexity and non-linearity. Although this work is preliminary, we show that the ability to combine models is a very promising direction for neural modeling.
reject
All reviewers agree that the paper should be rejected, having raised strong objections that were not answered. In this form (especially with such a title) it cannot be published (it is more of technical/engineering interest).
train
[ "B1gffMK5YS", "rye7ynORtB", "rkequUYRFB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper conducts several experiments to compare the extrapolative predictions of various hybrid models (sequential, ensemble, and cyclic), which compose physical models, neural networks and stochastic models. \n\nUnmodeled dynamics is a bottleneck for model learning, model-based reinforcement learning and sim-t...
[ 1, 1, 1 ]
[ 4, 5, 3 ]
[ "iclr_2020_rygfC0VKPS", "iclr_2020_rygfC0VKPS", "iclr_2020_rygfC0VKPS" ]
iclr_2020_HJxN0CNFPB
Ladder Polynomial Neural Networks
The underlying functions of polynomial neural networks are polynomial functions. These networks are shown to have nice theoretical properties by previous analysis, but they are actually hard to train when their polynomial orders are high. In this work, we devise a new type of activation and then create the Ladder Polynomial Neural Network (LPNN). This new network can be trained with generic optimization algorithms. With a feedforward structure, it can also be combined with deep learning techniques such as batch normalization and dropout. Furthermore, an LPNN provides good control of its polynomial order because its polynomial order increases by 1 with each of its hidden layers. In our empirical study, deep LPNN models achieve good performance in a series of regression and classification tasks.
reject
This paper proposes a new type of Polynomial NN called Ladder Polynomial NN (LPNN) which is easy to train with general optimization algorithms and can be combined with techniques like batch normalization and dropout. Experiments show it works better than FMs with simple classification and regression tasks, but no experiments are done in more complex tasks. All reviewers agree the paper addresses an interesting question and makes some progress but the contribution is limited and there are still many ways to improve.
test
[ "Bkl5UHQ3YH", "rygKXhcXjB", "B1lof9qXsS", "Hkgi3Mcmsr", "HyxHu42TFH", "B1lRcpQ79H" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work introduces a new polynomial feed-forward neural network called Ladder Polynomial Neural Network (LPNN). Theoretical results show that LPNNs generalize vanilla PNNs and FMs. In the experimental analyses, LPNNs perform similar to the vanilla FMs and PNNs, as well.\n\n- In the statement “V has a shape of (d...
[ 3, -1, -1, -1, 6, 6 ]
[ 5, -1, -1, -1, 4, 5 ]
[ "iclr_2020_HJxN0CNFPB", "Bkl5UHQ3YH", "HyxHu42TFH", "B1lRcpQ79H", "iclr_2020_HJxN0CNFPB", "iclr_2020_HJxN0CNFPB" ]
iclr_2020_SkxV0RVYDH
Versatile Anomaly Detection with Outlier Preserving Distribution Mapping Autoencoders
State-of-the-art deep learning methods for outlier detection make the assumption that anomalies will appear far away from inlier data in the latent space produced by distribution mapping deep networks. However, this assumption fails in practice, because the divergence penalty adopted for this purpose encourages mapping outliers into the same high-probability regions as inliers. To overcome this shortcoming, we introduce a novel deep learning outlier detection method, called Outlier Preserving Distribution Mapping Autoencoder (OP-DMA), which succeeds in mapping outliers to low-probability regions in the latent space of an autoencoder. For this we leverage the insight that outliers are likely to have a higher reconstruction error than inliers. We thus achieve outlier-preserving distribution mapping through weighting the reconstruction error of individual points by the value of a multivariate Gaussian probability density function evaluated at those points. This weighting implies that outliers will incur a smaller overall penalty if they are mapped to low-probability regions. We show that if the global minimum of our newly proposed loss function is achieved, then our OP-DMA maps inliers to regions with a Mahalanobis distance less than delta, and outliers to regions past this delta, delta being the inverse Chi Squared CDF evaluated at (1-alpha) with alpha the percentage of outliers in the dataset. Our experiments confirm that OP-DMA consistently outperforms the state-of-the-art methods on a rich variety of outlier detection benchmark datasets.
reject
This paper proposes an outlier detection method that maps outliers to low probability regions of the latent space. The novelty is in proposing a weighted reconstruction error penalizing the mapping of outliers into high probability regions. The reviewers find the idea promising. They have also raised several questions. It seems the questions are at least partially addressed in the rebuttal, and as a result one of our expert reviewers (R5) has increased their score from WR to WA. But since we did not have a champion for this paper and its overall score is not high enough, I can only recommend a reject at this stage.
train
[ "HygKPD2hqr", "S1l4S4y9jS", "HJgfgdccsS", "SyxGNNRFiS", "SyxflCHBtH", "ryehE_265r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposes an outlier detection method based on WAE framework. WAE is trained to ensure that 1) latent distribution follows a prior distribution 2) weighted reconstruction error is low where prior PDF is used to weight the reconstruction error.\n\nPositives\n------------\n1.I liked the intuition behind the...
[ 6, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, 1, 5 ]
[ "iclr_2020_SkxV0RVYDH", "ryehE_265r", "HygKPD2hqr", "SyxflCHBtH", "iclr_2020_SkxV0RVYDH", "iclr_2020_SkxV0RVYDH" ]
iclr_2020_r1lHAAVtwr
Deep Hierarchical-Hyperspherical Learning (DH^2L)
Regularization is known to be an inexpensive and reasonable solution to alleviate over-fitting problems of inference models, including deep neural networks. In this paper, we propose a hierarchical regularization which preserves the semantic structure of a sample distribution. At the same time, this regularization promotes diversity by enlarging the distance between parameter vectors within semantic structures. To generate evenly distributed parameters, we constrain them to lie on \emph{hierarchical hyperspheres}. Evenly distributed parameters are considered to be less redundant. To define the hierarchical parameter space, we propose to reformulate the topology space with multiple hypersphere spaces. On each hypersphere space, the projection parameter is defined by two individual parameters. Since maximizing groupwise pairwise distance between points on a hypersphere is nontrivial (generalized Thomson problem), we propose a new discrete metric integrated with a continuous angle metric. In extensive experiments on publicly available datasets (CIFAR-10, CIFAR-100, CUB200-2011, and Stanford Cars), our proposed method shows improved generalization performance, especially when the number of super-classes is larger.
reject
The paper proposes a hierarchical diversity promoting regularizer for neural networks. Experiments are shown with this regularizer applied to the last fully-connected layer of the network, in addition to L2 and energy regularizers on other layers. Reviewers found the paper well-motivated but had concerns on writing/readability of the paper and that it provides only marginal improvements over existing simple regularizers such as L2. I would encourage the authors to look for scenarios where the proposed regularizer can show clear improvements and resubmit to a future venue.
test
[ "H1eHQGojFH", "Sye7vbvwjr", "HkxHNcwwjB", "Byx2GZDDsS", "H1l11ZDwsH", "Hyg-cD3ycB", "Hklvx0P-9S" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a hierarchical regularization framework based on hierarchical hyperspheres. In particular, the paper tackles the problem of diversity promoting learning. Following (Liu et al., 2018), pairwise distances between parameters on hyperspheres are used in the regularization framework.\nThe topology of...
[ 3, -1, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_r1lHAAVtwr", "H1eHQGojFH", "iclr_2020_r1lHAAVtwr", "Hyg-cD3ycB", "Hklvx0P-9S", "iclr_2020_r1lHAAVtwr", "iclr_2020_r1lHAAVtwr" ]
iclr_2020_B1xBAA4FwH
On Evaluating Explainability Algorithms
A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence (XAI) community. Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial. In this work, we propose a suite of multifaceted metrics that enables us to objectively compare explainers based on the correctness, consistency, as well as the confidence of the generated explanations. These metrics are computationally inexpensive, do not require model-retraining and can be used across different data modalities. We evaluate them on common explainers such as Grad-CAM, SmoothGrad, LIME and Integrated Gradients. Our experiments show that the proposed metrics reflect qualitative observations reported in earlier works.
reject
The paper proposes metrics for comparing explainability metrics. Both reviewers and authors have engaged in a thorough discussion of the paper and feedback. The reviewers, although appreciating aspects of the paper, all see major issues with the paper. All reviewers recommend reject.
train
[ "SklqelnnKB", "SJx1EXq0YH", "SJxA9XP3ir", "Hke_3jWhir", "BJgz9iZ3jB", "BJxTMj-hor", "Hyl21jZ3oH", "BJxYPItUYr", "ByeP5jZy5r", "BklPydLsdS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "See post-rebuttal updates below!\n\nSummary\n---\n\n(motivation)\nThere are lots of heat map/saliency/visual explanation approaches that try to make deep image classifiers more interpretable.\nIt's hard to tell which ones are good, so we need better ways of evaluating explanations.\nThis paper proposes 3 such explanati...
[ 1, 3, -1, -1, -1, -1, -1, 1, -1, -1 ]
[ 4, 3, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "iclr_2020_B1xBAA4FwH", "iclr_2020_B1xBAA4FwH", "iclr_2020_B1xBAA4FwH", "BJgz9iZ3jB", "BJxYPItUYr", "SklqelnnKB", "SJx1EXq0YH", "iclr_2020_B1xBAA4FwH", "BklPydLsdS", "iclr_2020_B1xBAA4FwH" ]
iclr_2020_HygrAR4tPS
On Empirical Comparisons of Optimizers for Deep Learning
Selecting an optimizer is a central step in the contemporary deep learning pipeline. In this paper we demonstrate the sensitivity of optimizer comparisons to the metaparameter tuning protocol. Our findings suggest that the metaparameter search space may be the single most important factor explaining the rankings obtained by recent empirical comparisons in the literature. In fact, we show that these results can be contradicted when metaparameter search spaces are changed. As tuning effort grows without bound, more general update rules should never underperform the ones they can approximate (i.e., Adam should never perform worse than momentum), but the recent attempts to compare optimizers either assume these inclusion relationships are not relevant in practice or restrict the metaparameters they tune to break the inclusions. In our experiments, we find that the inclusion relationships between optimizers matter in practice and always predict optimizer comparisons. In particular, we find that the popular adaptive gradient methods never underperform momentum or gradient descent. We also report practical tips around tuning rarely-tuned metaparameters of adaptive gradient methods and raise concerns about fairly benchmarking optimizers for neural network training.
reject
This paper examines optimizers and challenges a (somewhat widely held) assumption that adaptive gradient methods underperform simpler methods. This paper sparked a *large* amount of discussion, more than any other paper in my area. It was also somewhat controversial. After reading the discussion and paper itself, on one hand I think this makes a valuable contribution to the community. It points out a (near-) inclusion relationship between many adaptive gradient methods and standard SGD-style methods, and points out that rather obviously if a particular method is included by a more general method, the more general method will never be worse and often will be better if hyperparameters are set appropriately. However, there were several concerns raised with the paper. For example, reviewer 1 pointed out that in order for Adam to include Momentum-based SGD, it must follow a specialized learning rate schedule that is not used with Adam in practice. This is pointed out in the paper, but I think it could be even more clear. For example, in the intro "For example, ADAM (Kingma and Ba, 2015) and RMSPROP (Tieleman and Hinton, 2012) can approximately simulate MOMENTUM (Polyak, 1964) if the ε term in the denominator of their parameter updates is allowed to grow very large." does not make any mention of the specialized learning rate schedule. Second, Reviewer 1 was concerned with the fact that the paper does not clearly qualify that the conclusion that more complicated optimization schedules do better depends on extensive hyperparameter search. This fact somewhat weakens one of the main points of the paper. I feel that this paper is very much on the borderline, but cannot strongly recommend acceptance. I hope that the authors take the above notes, as well as the reviewers' other comments into account seriously and try to reflect them in a revised version of the paper.
train
[ "rJgB9OBCtS", "BylPg8onoB", "HkxqYK9njS", "rJeiYvthiS", "S1gPO1F3oH", "r1xU0xF3jH", "r1gE7aRior", "H1eIZjAsjB", "H1xFiU0ijr", "rJxDkLFosS", "BJgXaSKojr", "SkxtHSYoiS", "BJxIMERcsS", "H1xFI-CqsB", "HyxovxAqjH", "SyeqKgAVjS", "B1xCUZCEor", "rylnXZC4sB", "HyeprxCNiH", "HkeYMxCVsS"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ "\n\nFirst, I would like to note that the claim that SGD with momentum is a special case of Adam with large epsilon is technically wrong because Adam also includes the bias-corrected momentum estimates which SGD with momentum does not consider. It might seem like a small difference, however it is a form of learning...
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_HygrAR4tPS", "HkxqYK9njS", "rJeiYvthiS", "S1gPO1F3oH", "H1eIZjAsjB", "H1xFiU0ijr", "rJxDkLFosS", "BJgXaSKojr", "SkxtHSYoiS", "H1xFI-CqsB", "HyxovxAqjH", "BJxIMERcsS", "SyeqKgAVjS", "HkeYMxCVsS", "HyeprxCNiH", "rJgB9OBCtS", "BkgPIQ2yqr", "ryllFmFoYH", "rJgB9OBCtS", "r...
iclr_2020_rylvAA4YDB
IsoNN: Isomorphic Neural Network for Graph Representation Learning and Classification
Deep learning models have achieved huge success in numerous fields, such as computer vision and natural language processing. However, unlike such fields, it is hard to apply traditional deep learning models on the graph data due to the ‘node-orderless’ property. Normally, adjacency matrices will cast an artificial and random node-order on the graphs, which renders the performance of deep models on graph classification tasks extremely erratic, and the representations learned by such models lack clear interpretability. To eliminate the unnecessary node-order constraint, we propose a novel model named Isomorphic Neural Network (ISONN), which learns the graph representation by extracting its isomorphic features via the graph matching between input graph and templates. ISONN has two main components: graph isomorphic feature extraction component and classification component. The graph isomorphic feature extraction component utilizes a set of subgraph templates as the kernel variables to learn the possible subgraph patterns existing in the input graph and then computes the isomorphic features. A set of permutation matrices is used in the component to break the node-order brought by the matrix representation. Three fully-connected layers are used as the classification component in ISONN. Extensive experiments are conducted on benchmark datasets, the experimental results can demonstrate the effectiveness of ISONN, especially compared with both classic and state-of-the-art graph classification methods.
reject
This paper proposes a method to learn graph features by means of neural networks for graph classification. The reviewers find that the paper needs to improve in terms of novelty and experimental comparisons.
train
[ "HJgVlS82jr", "r1eqjbUhor", "H1gqUmL3jS", "r1gPFVLhor", "HJgiqXjrYB", "HyeRPECaFr", "rke32mHAYr", "rygjEnzpuS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "We thank the reviewer for the comments and appreciation, and would like to answer the reviewer’s questions as follows:\n \nQ1. It, however, simply employs a brute-force approach toward the graph isomorphism, lacking novelty:\n \nThe novelty of the proposed model lies in the isomorphic kernel methods instead of si...
[ -1, -1, -1, -1, 6, 3, 1, -1 ]
[ -1, -1, -1, -1, 1, 3, 3, -1 ]
[ "HyeRPECaFr", "rke32mHAYr", "HJgiqXjrYB", "HyeRPECaFr", "iclr_2020_rylvAA4YDB", "iclr_2020_rylvAA4YDB", "iclr_2020_rylvAA4YDB", "iclr_2020_rylvAA4YDB" ]
iclr_2020_SyeD0RVtvS
DeepSFM: Structure From Motion Via Deep Bundle Adjustment
Structure from motion (SfM) is an essential computer vision problem which has not been well handled by deep learning. One of the promising trends is to apply explicit structural constraint, e.g. 3D cost volume, into the network. In this work, we design a physical driven architecture, namely DeepSFM, inspired by traditional Bundle Adjustment (BA), which consists of two cost volume based architectures for depth and pose estimation respectively, iteratively running to improve both. In each cost volume, we encode not only photo-metric consistency across multiple input images, but also geometric consistency to ensure that depths from multiple views agree with each other. The explicit constraints on both depth (structure) and pose (motion), when combined with the learning components, bring the merit from both traditional BA and emerging deep learning technology. Extensive experiments on various datasets show that our model achieves the state-of-the-art performance on both depth and pose estimation with superior robustness against a smaller number of inputs and against noise in the initialization.
reject
Main content: Physical driven architecture of DeepSFM to infer the structures from motion Discussion: reviewer 1: well-motivated model with good solid experimental results. not clear why the LM optimization in BA-Net is memory inefficient reviewer 2: main issue is the experiments could be improved. reviewer 3: well written but again experimental section is lacking Recommendation: Good paper and results, but all 3 reviewers agree experiments could be improved. Rejection is recommended.
train
[ "SyewKHsvsr", "HkgvEdjDoB", "BJgsPzsPiB", "HJg96xsvsS", "S1e7O15zjr", "rJgQN2PcFB", "Syx181Q7cB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your comments, which is very helpful for clarifying our contribution and improving the presentation of the paper. Please see the inline responses.\n\nQ1: The paper is easy to follow but the authors are expected to clarify the rationality in integration of the loss function. How the paramete...
[ -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 5, 4, 5 ]
[ "Syx181Q7cB", "rJgQN2PcFB", "S1e7O15zjr", "iclr_2020_SyeD0RVtvS", "iclr_2020_SyeD0RVtvS", "iclr_2020_SyeD0RVtvS", "iclr_2020_SyeD0RVtvS" ]
iclr_2020_HJlvCR4KDS
Why Does the VQA Model Answer No?: Improving Reasoning through Visual and Linguistic Inference
In order to make Visual Question Answering (VQA) explainable, previous studies not only visualize the attended region of a VQA model but also generate textual explanations for its answers. However, when the model’s answer is ‘no,’ existing methods have difficulty in revealing detailed arguments that lead to that answer. In addition, previous methods are insufficient to provide logical bases when the question requires common sense to answer. In this paper, we propose a novel textual explanation method to overcome the aforementioned limitations. First, we extract keywords that are essential to infer an answer from a question. Second, for a pre-trained explanation generator, we utilize a novel Variable-Constrained Beam Search (VCBS) algorithm to generate phrases that best describe the relationship between keywords in images. Then, we complete an explanation by feeding the phrase to the generator. Furthermore, if the answer to the question is “yes” or “no,” we apply Natural Language Inference (NLI) to identify whether contents of the question can be inferred from the explanation using common sense. Our user study, conducted on Amazon Mechanical Turk (MTurk), shows that our proposed method generates more reliable explanations compared to the previous methods. Moreover, by modifying the VQA model’s answer through the output of the NLI model, we show that VQA performance increases by 1.1% from the original model.
reject
This paper is good, with relatively positive support from the reviewers. However, there were also several legitimate issues raised, for example regarding the semantics of a negative answer and associated explanations. Though this paper cannot be accepted at this time, we hope the feedback here can help improve a future version, as all reviewers agree this is a valuable line of work.
train
[ "H1x8vBV6tS", "SyxclpUcsS", "H1lLnhUqiH", "rJgxc285jS", "S1gXpfnhFB", "rJlJyunRKB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n I thank the authors for their response. I would keep my score unchanged (i.e., 6 Weak Accept). \n\n-----------------------------------------------\n\nStrengths: \n- The paper enhances the beam search approach to generate explanations for answers to visual questions. The explanations are further used for verify...
[ 6, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HJlvCR4KDS", "H1x8vBV6tS", "S1gXpfnhFB", "rJlJyunRKB", "iclr_2020_HJlvCR4KDS", "iclr_2020_HJlvCR4KDS" ]
iclr_2020_H1gdAC4KDB
Adversarially Robust Generalization Just Requires More Unlabeled Data
Neural network robustness has recently been highlighted by the existence of adversarial examples. Many previous works show that the learned networks do not perform well on perturbed test data, and significantly more labeled data is required to achieve adversarially robust generalization. In this paper, we theoretically and empirically show that with just more unlabeled data, we can learn a model with better adversarially robust generalization. The key insight of our results is based on a risk decomposition theorem, in which the expected robust risk is separated into two parts: the stability part which measures the prediction stability in the presence of perturbations, and the accuracy part which evaluates the standard classification accuracy. As the stability part does not depend on any label information, we can optimize this part using unlabeled data. We further prove that for a specific Gaussian mixture problem, adversarially robust generalization can be almost as easy as the standard generalization in supervised learning if a sufficiently large amount of unlabeled data is provided. Inspired by the theoretical findings, we further show that a practical adversarial training algorithm that leverages unlabeled data can improve adversarially robust generalization on MNIST and Cifar-10.
reject
This work starts with a decomposition of the adversarial risk into two terms: the first is the usual risk, while the second is a stability term that captures the possible effect of an adversarial perturbation. The insight of this work is that this second term can be dealt with using unlabelled data, which is often in plentiful supply. Unfortunately, the same idea was developed concurrently and independently by several groups of authors. The reviewers all agreed that this particular version was not ready for publication. In two cases, the authors compared the work unfavorably with concurrent independent work. I will note that the main bound somewhat ignores the issue of overfitting that the second term deals with via the Rademacher bound. Unless one assumes one has unlimited unlabeled data, could one not get an arbitrarily biased view of robustness from the sample? Seems like a gap to fill.
train
[ "rkguDO1iFS", "H1xM7hwjFH", "B1ljO9uCYH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors study the sample complexity of adversarially robust learning with access to unlabeled samples. Theoretically, they consider the setting of Schmidt et al. 2018 (separating two class-conditional Gaussians) and present an algorithm which can learn a robust classifier with only a few labeled samples and a ...
[ 3, 3, 3 ]
[ 5, 5, 3 ]
[ "iclr_2020_H1gdAC4KDB", "iclr_2020_H1gdAC4KDB", "iclr_2020_H1gdAC4KDB" ]
iclr_2020_H1eY00VFDB
Retrospection: Leveraging the Past for Efficient Training of Deep Neural Networks
Deep neural networks are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce retrospection loss to improve the performance of neural networks by utilizing prior experiences during training. Minimizing the retrospection loss pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. We conduct extensive experiments to show that the proposed retrospection loss results in improved performance across multiple tasks, input types and network architectures.
reject
This paper introduces a further regularizer, retrospection loss, for training neural networks, which leverages past parameter states. The authors added several ablation studies and extra experiments during the rebuttal, which are helpful to show that their method is useful. However, this is still one of those papers that essentially proposes an additional heuristic to train deep nets, which is helpful but not clearly motivated from a theoretical point of view (despite the intuitions). Yes, it provides improvements across tasks but these are all relatively small, and the method is more involved. Therefore, I am recommending rejection.
test
[ "rJePCn43YB", "SyejA3FNjB", "rkldcPYViB", "SJgKOpYNsr", "HkeQS6tEor", "SyeRYntEiB", "H1xvLwKEoH", "rygRVjNAFr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a new loss function which adds to the training objective another term that pulls the current parameters of a neural network further away from the parameters at a previous time step.\nIntuitively, this aims to push the current parameters further to the local optimum.\nOn a variety of benchmarks, ...
[ 8, -1, -1, -1, -1, -1, -1, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_H1eY00VFDB", "SyeRYntEiB", "H1xvLwKEoH", "HkeQS6tEor", "SyejA3FNjB", "rJePCn43YB", "rygRVjNAFr", "iclr_2020_H1eY00VFDB" ]
iclr_2020_H1x9004YPr
Contextual Temperature for Language Modeling
Temperature scaling has been widely used to improve performance for NLP tasks that utilize Softmax decision layer. Current practices in using temperature either assume a fixed value or a dynamically changing temperature but with a fixed schedule. Little has been known on an optimal trajectory of temperature that can change with the context. In this paper, we propose contextual temperature, a mechanism that allows temperatures to change over the context for each vocabulary, and to co-adapt with model parameters during training. Experimental results illustrated that contextual temperature improves over state-of-the-art language models significantly. Our model CT-MoS achieved a perplexity of 55.31 in the test set of Penn Treebank and a perplexity of 62.89 in the test set of WikiText-2. The in-depth analysis showed that the behavior of temperature schedule varies dramatically by vocabulary. The optimal temperature trajectory drops as the context becomes longer to suppress uncertainties in language modeling. This evidence further justified the need for contextual temperature and explained its performance advantage over fixed temperature or scheduling.
reject
With an average post author response score of 4 - two weak rejects and one weak accept, it is just not possible for the AC to recommend acceptance. The author response was not able to shift the scores and general opinions of the reviewers and the reviewers have outlined their reasoning why their final scores remain unchanged during the discussion period.
val
[ "Byxh9hH2or", "Sye-x7zKsH", "H1xvj9zFjS", "Bkx_JJQtor", "SyeltUfKiH", "Ske6l8RJjS", "rJlIaog6YS", "H1em6VUatr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the constructive feedback of every reviewer. We have thoroughly refined the paper: grammar errors are corrected, sections including abstract, introduction and experiments are retouched, and appendix is added to provide more clear explanations.", "First of all, we thank the reviewer for the construc...
[ -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, 1, 5, 3 ]
[ "iclr_2020_H1x9004YPr", "rJlIaog6YS", "H1em6VUatr", "Ske6l8RJjS", "rJlIaog6YS", "iclr_2020_H1x9004YPr", "iclr_2020_H1x9004YPr", "iclr_2020_H1x9004YPr" ]
iclr_2020_Byx5R0NKPr
Learning Calibratable Policies using Programmatic Style-Consistency
We study the important and challenging problem of controllable generation of long-term sequential behaviors. Solutions to this problem would impact many applications, such as calibrating behaviors of AI agents in games or predicting player trajectories in sports. In contrast to the well-studied areas of controllable generation of images, text, and speech, there are significant challenges that are unique to or exacerbated by generating long-term behaviors: how should we specify the factors of variation to control, and how can we ensure that the generated temporal behavior faithfully demonstrates diverse styles? In this paper, we leverage large amounts of raw behavioral data to learn policies that can be calibrated to generate a diverse range of behavior styles (e.g., aggressive versus passive play in sports). Inspired by recent work on leveraging programmatic labeling functions, we present a novel framework that combines imitation learning with data programming to learn style-calibratable policies. Our primary technical contribution is a formal notion of style-consistency as a learning objective, and its integration with conventional imitation learning approaches. We evaluate our framework using demonstrations from professional basketball players and agents in the MuJoCo physics environment, and show that our learned policies can be accurately calibrated to generate interesting behavior styles in both domains.
reject
The reviewers generally reached a consensus that the work is not quite ready for acceptance in its current form. The central concerns were about the potentially limited novelty of the method, and the fact that it was not quite clear how good the annotations needed to be (or how robust the method would be to imperfect annotations). This, combined with an evaluation scenario that is non-standard and requires some guesswork to understand its difficulty, leaves one with the impression that it is not quite clear from the experiments whether the method really works well. I would recommend for the authors to improve the evaluation in the next submission.
train
[ "Bkeq0sQ5ir", "H1gKdoQ9jH", "BklnbjQ9sr", "SyxOjqQcoS", "Skl6ZZOfKr", "SJekg7fLKB", "SJgzQP-2FB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> “... credit assignment seems like the wrong word.”\nYes, a significant benefit of learning a dynamics model is that it allows us to differentiate through the environment dynamics. While this is not exactly credit assignment in the RL sense (e.g. we do not learn the value of each action with a Q-network), the pro...
[ -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, 1, 4, 3 ]
[ "Skl6ZZOfKr", "SJekg7fLKB", "SJgzQP-2FB", "iclr_2020_Byx5R0NKPr", "iclr_2020_Byx5R0NKPr", "iclr_2020_Byx5R0NKPr", "iclr_2020_Byx5R0NKPr" ]
iclr_2020_Byg9AR4YDB
Exploring Cellular Protein Localization Through Semantic Image Synthesis
Cell-cell interactions have an integral role in tumorigenesis as they are critical in governing immune responses. As such, investigating specific cell-cell interactions has the potential to not only expand upon the understanding of tumorigenesis, but also guide clinical management of patient responses to cancer immunotherapies. A recent imaging technique for exploring cell-cell interactions, multiplexed ion beam imaging by time-of-flight (MIBI-TOF), allows for cells to be quantified in 36 different protein markers at sub-cellular resolutions in situ as high resolution multiplexed images. To explore the MIBI images, we propose a GAN for multiplexed data with protein specific attention. By conditioning image generation on cell types, sizes, and neighborhoods through semantic segmentation maps, we are able to observe how these factors affect cell-cell interactions simultaneously in different protein channels. Furthermore, we design a set of metrics and offer the first insights towards cell spatial orientations, cell protein expressions, and cell neighborhoods. Our model, cell-cell interaction GAN (CCIGAN), outperforms or matches existing image synthesis methods on all conventional measures and significantly outperforms on biologically motivated metrics. To our knowledge, we are the first to systematically model multiple cellular protein behaviors and interactions under simulated conditions through image synthesis.
reject
This paper proposes dedicated deep models for the analysis of multiplexed ion beam imaging by time-of-flight (MIBI-TOF). The reviewers appreciated the contributions of the paper, but not quite enough to make the cut. Rejection is recommended.
test
[ "S1gVULo_or", "r1x5KUi_jB", "r1xZRXjOjr", "SkxiZLoOor", "S1gvgVs_sB", "HkgtWHoOor", "HygLks1AFS", "ryxXuMHAtH", "SyO0QgHqr" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nMinor concern:\n1. What's the resolution of the MIBI-TOF and CCIGAN?\nMIBI TOF is 800 $mm^2$ at 2048x2048 pixels, doing some rearranging we see that 64x64 $\\rightarrow$ 64/2048 *800 = $25 mm^2$\n\n2. The introduction of the background could be further refined. The many-to-many mapping between different cell ty...
[ -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "HygLks1AFS", "HygLks1AFS", "iclr_2020_Byg9AR4YDB", "ryxXuMHAtH", "SyO0QgHqr", "ryxXuMHAtH", "iclr_2020_Byg9AR4YDB", "iclr_2020_Byg9AR4YDB", "iclr_2020_Byg9AR4YDB" ]
iclr_2020_SJloA0EYDr
A⋆MCTS: SEARCH WITH THEORETICAL GUARANTEE USING POLICY AND VALUE FUNCTIONS
Combined with policy and value neural networks, Monte Carlo Tree Search (MCTS) is a critical component of the recent success of AI agents in learning to play board games like Chess and Go (Silver et al., 2017). However, the theoretical foundations of MCTS with policy and value networks remain open. Inspired by MCTS, we propose A⋆MCTS, a novel search algorithm that uses both the policy and value predictors to guide search and enjoys theoretical guarantees. Specifically, assuming that the value and policy networks give reasonably accurate signals of the values of each state and action, the sample complexity (number of calls to the value network) to estimate the value of the current state, as well as the optimal one-step action to take from the current state, can be bounded. We apply our theoretical framework to different models for the noise distribution of the policy and value networks as well as the distribution of rewards, and show that for these general models, the sample complexity is polynomial in D, where D is the depth of the search tree. Empirically, our method outperforms MCTS in these models.
reject
This paper proposed an extension of Monte Carlo Tree Search to find the optimal policy. The method combines the A* and MCTS algorithms to prioritize the states to be explored. Compared with traditional MCTS based on UCT, A* MCTS seems to perform better. One concern of the reviewers is the paper's presentation, which is hard to follow. The second concern is the strong restriction of the assumptions, which makes the setting too simple and unrealistic. The rebuttal did not fully address these problems. This paper needs further polish to meet the standard of ICLR.
train
[ "SylklJK5sr", "r1g5CtzZjr", "HygnvKMZjS", "r1leI_GboS", "SkeXSQ86tr", "S1lF9obI9r", "Bkefd1gj9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary of revisions to the manuscript\n\n1) moved the proofs to the appendix\n2) added an intuition paragraph to each section\n3) added illustrations to further explain the main techniques and the models\n4) added a notation table to summarize notations used\n5) added explanations in the main contributions sectio...
[ -1, -1, -1, -1, 1, 3, 6 ]
[ -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2020_SJloA0EYDr", "SkeXSQ86tr", "S1lF9obI9r", "Bkefd1gj9r", "iclr_2020_SJloA0EYDr", "iclr_2020_SJloA0EYDr", "iclr_2020_SJloA0EYDr" ]
iclr_2020_r1e30AEKPr
A Group-Theoretic Framework for Knowledge Graph Embedding
We have rigorously proved the existence of a group algebraic structure hidden in relational knowledge embedding problems, which suggests that a group-based embedding framework is essential for model design. Our theoretical analysis explores merely the intrinsic property of the embedding problem itself without introducing extra designs. Using the proposed framework, one could construct embedding models that naturally accommodate all possible local graph patterns, which are necessary for reproducing a complete graph from atomic knowledge triplets. We reconstruct many state-of-the-art models from the framework and re-interpret them as embeddings with different groups. Moreover, we also propose new instantiation models using simple continuous non-abelian groups.
reject
This paper presents a rigorous mathematical framework for knowledge graph embedding. The paper received 3 reviews. R1 recommends Weak Reject based on concerns about the contributions of the paper; the authors, in their response, indicate that R1 may have been confused about what the contributions were meant to be. R2 initially recommended Reject, based on concerns that the paper was overselling its claims, and on the clarity and quality of writing. After the author response, R2 raised their score to Weak Reject but still felt that their main concerns had gone unanswered, and in particular that the authors seemed unwilling to tone down their claims. R3 recommends Weak Reject, indicating that they found the paper difficult to follow, and gave some specific technical concerns. The authors, in their response, express confusion about R3's comments and suggest that R3 also did not understand the paper. However, in light of these unanimous Weak Reject reviews, we cannot recommend acceptance at this time. We understand that the authors may feel that some reviewers did not properly understand or appreciate the contribution, but all three reviewers are researchers working at highly-ranked institutions and thus are fairly representative of the attendees of ICLR; we hope that their points of confusion and concern, as reflected in their reviews, will help the authors clarify a revision of the paper for another venue.
train
[ "H1gpT8NrOS", "HJg7z_3TKH", "Syl6_zJ8sH", "ryxiA-1Ujr", "r1xCB-1LiS", "BkeQ-y18oS", "SylHMCB0FH", "SygVUgzHqH", "SylRlWbFdB", "BJxGzC_j_S", "Hkgbb7ej_S", "SJeWWSp5OS", "rJgxcMUYuS", "B1eoGvA8_B" ]
[ "public", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "author", "author" ]
[ "Hello,\n\nIt is very interesting to see that you have a very similar idea about introducing group theory into KGE. I also wrote a small paper connecting group representation theory with KGE, which is recently accepted in NeurIPS graph representation workshop. \n\nBest,\nChen \n\nGroup Representation Theory for Kno...
[ -1, 3, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, -1, 1, 1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_r1e30AEKPr", "iclr_2020_r1e30AEKPr", "SylHMCB0FH", "HJg7z_3TKH", "HJg7z_3TKH", "SygVUgzHqH", "iclr_2020_r1e30AEKPr", "iclr_2020_r1e30AEKPr", "B1eoGvA8_B", "Hkgbb7ej_S", "iclr_2020_r1e30AEKPr", "B1eoGvA8_B", "SylRlWbFdB", "H1gpT8NrOS" ]
iclr_2020_SJl3CANKvB
A SIMPLE AND EFFECTIVE FRAMEWORK FOR PAIRWISE DEEP METRIC LEARNING
Deep metric learning (DML) has received much attention in deep learning due to its wide applications in computer vision. Previous studies have focused on designing complicated losses and hard example mining methods, which are mostly heuristic and lack theoretical understanding. In this paper, we cast DML as a simple pairwise binary classification problem that classifies a pair of examples as similar or dissimilar. It identifies the most critical issue in this problem: imbalanced data pairs. To tackle this issue, we propose a simple and effective framework to sample pairs in a batch of data for updating the model. The key to this framework is to define a robust loss for all pairs over a mini-batch of data, which is formulated by distributionally robust optimization. The flexibility in constructing the uncertainty decision set of the dual variable allows us to recover state-of-the-art complicated losses and also to induce novel variants. Empirical studies on several benchmark data sets demonstrate that our simple and effective method outperforms the state-of-the-art results.
reject
The reviewers agree that this is a reasonable paper but somewhat derivative. The authors discussed the contribution further in the rebuttal, but even in light of their comments, I consider the significance of this work too low for acceptance.
train
[ "BJgxWlzsoS", "ryguiHzojH", "rJxRWXGsjB", "H1lyrJVHtB", "SyevOXVstr", "HJgZmWvTtr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would emphasize that our framework is not a straightforward application of DRO. Instead, by addressing the critical issues in DML, our framework is an effective and general approach to DML. We summarize two significant contributions of our paper.\n\nFirst, our framework is more general, flexible and practical t...
[ -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, 1, 3, 3 ]
[ "HJgZmWvTtr", "H1lyrJVHtB", "HJgZmWvTtr", "iclr_2020_SJl3CANKvB", "iclr_2020_SJl3CANKvB", "iclr_2020_SJl3CANKvB" ]
iclr_2020_rylT0AVtwH
Learning from Partially-Observed Multimodal Data with Variational Autoencoders
Learning from only partially-observed data for imputation has been an active research area. Despite promising progress on unimodal data imputation (e.g., image in-painting), models designed for multimodal data imputation are far from satisfactory. In this paper, we propose variational selective autoencoders (VSAE) for this task. Different from previous works, our proposed VSAE learns only from partially-observed data. The proposed VSAE is capable of learning the joint distribution of observed and unobserved modalities as well as the imputation mask, resulting in a unified model for various down-stream tasks including data generation and imputation. Evaluation on both synthetic high-dimensional and challenging low-dimensional multi-modality datasets shows significant improvement over the state-of-the-art data imputation models.
reject
This submission proposes a VAE-based method for jointly inferring latent variables and data generation. The method learns from partially-observed multimodal data.
Strengths:
- Learning to generate from partially-observed data is an important and challenging problem.
- The proposed idea is novel and promising.
Weaknesses:
- Some experimental protocols are not fully explained.
- The experiments are not sufficiently comprehensive (comparisons to key baselines are missing).
- More analysis of some surprising results is needed.
- The presentation has much to improve.
The method is promising, but the mentioned weaknesses were not sufficiently addressed during discussion. AC agrees with the majority recommendation to reject.
train
[ "H1ly2z2RFS", "H1eOXTuD5B", "HJe_FU7soH", "B1gi3o1oiS", "rJxyl7dOoB", "HJlxtt_Osr", "Skl-7ddujB", "ryxqjgdujS", "SJxujtOdiH", "BJlU6IBK9S", "r1gk4ck25B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper proposes to impute multimodal data when certain modalities are present. The authors present a variational selective autoencoder model that learns only from partially-observed data. VSAE is capable of learning the joint\ndistribution of observed and unobserved modalities as well as the imputati...
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_rylT0AVtwH", "iclr_2020_rylT0AVtwH", "Skl-7ddujB", "rJxyl7dOoB", "BJlU6IBK9S", "H1ly2z2RFS", "H1eOXTuD5B", "r1gk4ck25B", "iclr_2020_rylT0AVtwH", "iclr_2020_rylT0AVtwH", "iclr_2020_rylT0AVtwH" ]
iclr_2020_BJeTCAEtDB
Feature Map Transform Coding for Energy-Efficient CNN Inference
Convolutional neural networks (CNNs) achieve state-of-the-art accuracy in a variety of tasks in computer vision and beyond. One of the major obstacles hindering the ubiquitous use of CNNs for inference on low-power edge devices is their high computational complexity and memory bandwidth requirements. The latter often dominates the energy footprint on modern hardware. In this paper, we introduce a lossy transform coding approach, inspired by image and video compression, designed to reduce the memory bandwidth due to the storage of intermediate activation calculation results. Our method does not require fine-tuning the network weights and halves the data transfer volumes to the main memory by compressing feature maps, which are highly correlated, with variable length coding. Our method outperforms the previous approach in terms of the number of bits per value, with minor accuracy degradation on ResNet-34 and MobileNetV2. We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach results in a reduction of around 40% in the memory energy footprint, compared to the quantized network, with negligible impact on accuracy. When allowing accuracy degradation of up to 2%, a reduction of 60% is achieved. A reference implementation accompanies the paper.
reject
The paper proposed the use of a lossy transform coding approach to reduce the memory bandwidth brought by the storage of intermediate activations. It has shown that the proposed method achieves good memory usage while maintaining accuracy. The main concern with this paper is the limited novelty: lossy transform coding is borrowed from other domains, and only its application to CNN intermediate activations is new, which seems insufficient.
test
[ "rkgopztCcB", "Bke8bYAror", "rylzWORrjH", "HklwO_ASjH", "BylZwqW2KS", "BJgWjNu0KS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies an important question: how to reduce memory bandwidth requirement in neural network computation and hence reduce the energy footprint. It proposes to use lossy transform coding before sending network output to memory. My concern with the paper is two-fold:\n1) The major technique of transform-do...
[ 3, -1, -1, -1, 3, 8 ]
[ 3, -1, -1, -1, 5, 1 ]
[ "iclr_2020_BJeTCAEtDB", "BylZwqW2KS", "rkgopztCcB", "BJgWjNu0KS", "iclr_2020_BJeTCAEtDB", "iclr_2020_BJeTCAEtDB" ]
iclr_2020_HyxCRCEKwB
ROBUST GENERATIVE ADVERSARIAL NETWORK
Generative adversarial networks (GANs) are powerful generative models, but usually suffer from instability which may lead to poor generations. Most existing works try to alleviate this problem by focusing on stabilizing the training of the discriminator, which unfortunately ignores the robustness of generator and discriminator. In this work, we consider the robustness of GANs and propose a novel robust method called robust generative adversarial network (RGAN). Particularly, we design a robust optimization framework where the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. The generator tries to map the worst input distribution (rather than a specific input distribution, typically a Gaussian distribution used in most GANs) to the real data distribution, while the discriminator attempts to distinguish the real and fake distribution with the worst perturbation. We have provided theories showing that the generalization of the new robust framework can be guaranteed. A series of experiments on CIFAR-10, STL-10 and CelebA datasets indicate that our proposed robust framework can improve consistently on four baseline GAN models. We also provide ablation analysis and visualization showing the efficacy of our method on both generator and discriminator quantitatively and qualitatively.
reject
This work proposes a robust variant of GAN, in which the generator and discriminator compete with each other in a worst-case setting within a small Wasserstein ball. Unfortunately, the reviewers have raised some critical concerns in terms of theoretical analysis and empirical support. The authors did not submit rebuttals in time. We encourage the authors to improve the work based on reviewer's comments.
train
[ "Syx7zrDiYH", "rkeJlDz6YH", "rkxvKVc0YS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Developing stable GAN training method has gained much attention these years. This paper propose to tackle this issue via involving distributionally robust optimization into GAN training. Its main contribution is to combine Sinha et al with GAN, proposing a new GAN training method on the basis of vanilla GAN. Rela...
[ 1, 3, 3 ]
[ 3, 5, 4 ]
[ "iclr_2020_HyxCRCEKwB", "iclr_2020_HyxCRCEKwB", "iclr_2020_HyxCRCEKwB" ]
iclr_2020_H1gyy1BtDS
An Information Theoretic Approach to Distributed Representation Learning
The problem of distributed representation learning is one in which multiple sources of information X1,...,XK are processed separately so as to extract useful information about some statistically correlated ground truth Y. We investigate this problem from information-theoretic grounds. For both discrete memoryless (DM) and memoryless vector Gaussian models, we establish fundamental limits of learning in terms of optimal tradeoffs between accuracy and complexity. We also develop a variational bound on the optimal tradeoff that generalizes the evidence lower bound (ELBO) to the distributed setting. Furthermore, we provide a variational inference type algorithm that allows us to compute this bound, in which the mappings are parametrized by neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Experimental results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms which we develop in this paper.
reject
The authors study generalization in distributed representation learning by describing limits in accuracy and complexity which stem from information theory. The paper has been controversial, but ultimately the reviewers who provided higher scores presented weaker and fewer arguments. By recruiting an additional reviewer it became clearer that, overall, the paper needs a little more work to reach ICLR standards. The main suggestions for improvement have to do with improving clarity in a way that makes the motivation convincing and the practicality more obvious. Boosting the experimental results is a complementary way of making the paper more convincing, as argued by the reviewers.
test
[ "rygizoc0qr", "rklfW0_6Fr", "HJlpd_F6tr", "S1lFhyDE5B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies a distributed representation problem where multiple features X_1,...,X_K are processed (or encoded) separately to estimate (or decode) some quantity of interest Y.\nThe log loss is considered throughout, which amounts to measuring the mutual information between Y and \\hat Y, defined as the \"ac...
[ 3, 3, 8, 6 ]
[ 4, 1, 3, 1 ]
[ "iclr_2020_H1gyy1BtDS", "iclr_2020_H1gyy1BtDS", "iclr_2020_H1gyy1BtDS", "iclr_2020_H1gyy1BtDS" ]
iclr_2020_Hkee1JBKwB
Convolutional Tensor-Train LSTM for Long-Term Video Prediction
Long-term video prediction is highly challenging since it entails simultaneously capturing spatial and temporal information across a long range of image frames. Standard recurrent models are ineffective since they are prone to error propagation and cannot effectively capture higher-order correlations. A potential solution is to extend to higher-order spatio-temporal recurrent models. However, such a model requires a large number of parameters and operations, making it intractable to learn in practice, and is prone to overfitting. In this work, we propose convolutional tensor-train LSTM (Conv-TT-LSTM), which learns a higher-order Convolutional LSTM (ConvLSTM) efficiently using convolutional tensor-train decomposition (CTTD). Our proposed model naturally incorporates higher-order spatio-temporal information at a small cost of memory and computation by using efficient low-rank tensor representations. We evaluate our model on the Moving-MNIST and KTH datasets and show improvements over standard ConvLSTM, and better or comparable results to other ConvLSTM-based approaches, but with far fewer parameters.
reject
This paper proposes Conv-TT-LSTM for long-term video prediction. The proposed method saves memory and computation through low-rank tensor representations via tensor decomposition and is evaluated on the Moving MNIST and KTH datasets. All reviewers argue that the novelty of the paper does not meet the standard of ICLR. In the rebuttal, the authors polished the experimental design, which failed to change any reviewer's decision. Overall, the paper is not good enough for ICLR.
train
[ "S1l4Ac4hor", "BJeqsFEhsS", "Hkgf1YEhjB", "SJxYGCFcjr", "BkxAlRIRYr", "rkgc4-Cy5S", "HyegM4fo9H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your thoughtful comments.\n\n(1) Novelty of our work.\n(a) Yu et al. 2017 shows that higher-order models (compressed by standard tensor-train decomposition) perform better than first-order models in synthetic regression problems. However, their approach can not be easily extended to video p...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 5, 3, 3 ]
[ "HyegM4fo9H", "rkgc4-Cy5S", "BkxAlRIRYr", "iclr_2020_Hkee1JBKwB", "iclr_2020_Hkee1JBKwB", "iclr_2020_Hkee1JBKwB", "iclr_2020_Hkee1JBKwB" ]
iclr_2020_SJegkkrYPS
Starfire: Regularization-Free Adversarially-Robust Structured Sparse Training
This paper studies structured sparse training of CNNs with a gradual pruning technique that leads to fixed, sparse weight matrices after a set number of epochs. We simplify the structure of the enforced sparsity so that it reduces overhead caused by regularization. The proposed training methodology explores several options for structured sparsity. We study various tradeoffs with respect to pruning duration, learning-rate configuration, and the total length of training. We show that our method creates a sparse version of ResNet50 and ResNet50v1.5 on full ImageNet while remaining within a negligible <1% margin of accuracy loss. To make sure that this type of sparse training does not harm the robustness of the network, we also demonstrate how the network behaves in the presence of adversarial attacks. Our results show that with 70% target sparsity, over 75% top-1 accuracy is achievable.
reject
This paper concerns a training procedure for neural networks which results in sparse connectivity in the final resulting network, consisting of an "early era" of training in which pruning takes place, followed by fixed connectivity training thereafter, and a study of tradeoffs inherent in various approaches to structured and unstructured pruning, and an investigation of adversarial robustness of pruned networks. While some reviewers found the general approach interesting, all reviewers were critical of the lack of novelty, clarity and empirical rigour. R2 in particular raised concerns about the motivation, evaluation of computational savings (that FLOPS should be measured directly), and felt that the discussion of adversarial robustness was out of place and "an afterthought". Reviewers were unconvinced by rebuttals, and no attempts were made at improving the paper (additional experiments were promised, but not delivered). I therefore recommend rejection.
train
[ "SJx4ciYciH", "rkl7uiY9iB", "SJxtA9K5ir", "rkgWKqKciH", "BJgK9BkUYr", "H1lup7X-5B", "r1xPmfUG9B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "(1) We agree that our method is similar to Narang et al.; however, achieving high levels of accuracy and high levels of structured sparsity for CNNs was missing; we contribute substantial experiments to find the limits of the final level of sparsity and how few epochs we can train for CNN. Also, Narang et al. prun...
[ -1, -1, -1, -1, 1, 1, 1 ]
[ -1, -1, -1, -1, 4, 4, 1 ]
[ "BJgK9BkUYr", "SJxtA9K5ir", "H1lup7X-5B", "r1xPmfUG9B", "iclr_2020_SJegkkrYPS", "iclr_2020_SJegkkrYPS", "iclr_2020_SJegkkrYPS" ]
iclr_2020_Bkel1krKPS
Attention on Abstract Visual Reasoning
Attention mechanisms have been boosting the performance of deep learning models on a wide range of applications, ranging from speech understanding to program induction. However, despite experiments from psychology which suggest that attention plays an essential role in visual reasoning, the full potential of attention mechanisms has so far not been explored to solve abstract cognitive tasks on image data. In this work, we propose a hybrid network architecture, grounded on self-attention and relational reasoning. We call this new model Attention Relation Network (ARNe). ARNe combines features from the recently introduced Transformer and the Wild Relation Network (WReN). We test ARNe on the Procedurally Generated Matrices (PGMs) datasets for abstract visual reasoning. ARNe outperforms the WReN model on this task by 11.28 ppt. Relational concepts between objects are efficiently learned, requiring only 35% of the training samples to surpass the reported accuracy of the baseline model. Our proposed hybrid model represents an alternative for learning abstract relations using self-attention and demonstrates that the Transformer network is also well suited for abstract visual reasoning.
reject
This work proposes a new architecture for abstract visual reasoning called "Attention Relation Network" (ARNe), based on Transformer-style soft attention and relation networks, which the authors show to improve on the "Wild Relation Network" (WReN). The authors test their network on the PGM dataset, and demonstrate a non-trivial improvement over previously reported baselines. The paper is well written and makes an interesting contribution, but the reviewers expressed some criticisms, including technical novelty, unfinished experiments (and lack of experimental details), and somewhat weak experimental results, which suggest that the proposed ARNe model does not work well when training with weaker supervision without meta-targets. Even though the authors addressed some concerns in their revised version (namely, they added new experiments in the extrapolation split of PGM and experiments on the new RAVEN dataset), I feel the paper is not yet ready for publication at ICLR.
train
[ "BklsYnBhoB", "HkgWMFWhKr", "SJlIFDe6KH", "BygwFWrJcr", "SyeRK47CFr", "S1lysesBOH" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We thank the reviewers for their comments and suggestions. Based on the reviews, we made the following changes to the paper:\n- We added an evaluation of the model's performance on the extrapolation split.\n- We conducted experiments on the new RAVEN dataset and report the results\n\nResponse to the key criticism:...
[ -1, 3, 3, 1, -1, -1 ]
[ -1, 3, 1, 5, -1, -1 ]
[ "iclr_2020_Bkel1krKPS", "iclr_2020_Bkel1krKPS", "iclr_2020_Bkel1krKPS", "iclr_2020_Bkel1krKPS", "S1lysesBOH", "iclr_2020_Bkel1krKPS" ]
iclr_2020_H1gWyJBFDr
Fully Convolutional Graph Neural Networks using Bipartite Graph Convolutions
Graph neural networks have been adopted in numerous applications ranging from learning relational representations to modeling data on irregular domains such as point clouds, social graphs, and molecular structures. Though diverse in nature, graph neural network architectures remain limited by the graph convolution operator, whose input and output graphs must have the same structure. With this restriction, representational hierarchy can only be built by graph convolution operations followed by non-parameterized pooling or expansion layers. This is very much like early convolutional network architectures, which were later replaced by more effective parameterized strided and transpose convolution operations in combination with skip connections. In order to bring a similar change to graph convolutional networks, here we introduce the bipartite graph convolution operation, a parameterized transformation between different input and output graphs. Our framework is general enough to subsume conventional graph convolution and pooling as its special cases and supports multi-graph aggregation, leading to a class of flexible and adaptable network architectures, termed BiGraphNet. By replacing the sequence of graph convolution and pooling in hierarchical architectures with a single parametric bipartite graph convolution, (i) we answer the question of whether graph pooling matters, and (ii) accelerate computations and lower memory requirements in hierarchical networks by eliminating pooling layers. Then, with concrete examples, we demonstrate that the general BiGraphNet formalism (iii) provides the modeling flexibility to build efficient architectures such as graph skip connections and autoencoders.
reject
All three reviewers are consistently negative about this paper; thus, a reject is recommended.
train
[ "HklAVQc3iH", "S1gZufcnjr", "Byl2wZchjS", "H1exCVyg9H", "SkeIGYybcH", "SJlhsq6d9S", "HkxFvEe6uH", "ryen8-Qx_B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We thank the reviewer for their suggestions and comments. \n\n1. The reviewer is correct that bigraphnet layer still requires a separate clustering (or expansion) block as indicated by fig 1. This block can be precomputed (non-learnable, non-parameterized) such as voxel grid for point cloud data or data-driven (le...
[ -1, -1, -1, 3, 1, 3, -1, -1 ]
[ -1, -1, -1, 4, 4, 3, -1, -1 ]
[ "H1exCVyg9H", "SkeIGYybcH", "SJlhsq6d9S", "iclr_2020_H1gWyJBFDr", "iclr_2020_H1gWyJBFDr", "iclr_2020_H1gWyJBFDr", "ryen8-Qx_B", "iclr_2020_H1gWyJBFDr" ]
iclr_2020_HygXkJHtvB
Using Objective Bayesian Methods to Determine the Optimal Degree of Curvature within the Loss Landscape
The efficacy of the width of the basin of attraction surrounding a minimum in parameter space as an indicator of the generalizability of a model parametrization is a point of contention surrounding the training of artificial neural networks, with the dominant view being that wider areas in the landscape reflect better generalizability by the trained model. In this work, however, we aim to show that this is only true for a noiseless system, and that in general the trend of the model towards wide areas in the landscape reflects the propensity of the model to overfit the training data. Utilizing the objective Bayesian (Jeffreys) prior, we instead propose a different determinant of the optimal width within the parameter landscape, one determined solely by the curvature of the landscape. In doing so, we utilize the decomposition of the landscape into the dimensions of principal curvature and find the first principal curvature dimension of the parameter space to be independent of noise within the training data.
reject
There has been significant discussion in the literature on the effect of the properties of the curvature of minima on generalization in deep learning. This paper aims to shed some light on that discussion through the lens of theoretical analysis and the use of a Bayesian Jeffreys prior. It seems clear that the reviewers appreciated the work and found the analysis insightful. However, a major issue cited by the reviewers is a lack of compelling empirical evidence that the claims of the paper are true. The authors run experiments on very small networks, and reviewers felt that the results of these experiments were unlikely to extrapolate to large-scale modern models and problems. One reviewer was concerned about the quality of the exposition in terms of the writing and language and care in terminology. Unfortunately, this paper falls below the bar for acceptance, but it seems likely that stronger empirical results and a careful treatment of the writing would make this a much stronger paper for future submission.
test
[ "Byl-b3VFir", "SkgtRj4KjS", "Byl2A8VtjH", "SJgulr4KsH", "rylBzFNKoH", "SJgEDU4YoB", "B1xXYr4YjH", "SJxwSVtcKr", "HJlP-ZkhFr", "HJx3a8J0FB" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q8) On the other hand, as the authors used the spectrum properties of the Fisher information matrix, there are some recent works by Amari which can be cited.\n\nA8) Based on your suggestion, we found the paper \"Pathological spectra of the Fisher information metric and its variants in deep neural networks\" by Kar...
[ -1, -1, -1, -1, -1, -1, -1, 1, 6, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "SkgtRj4KjS", "SJxwSVtcKr", "HJx3a8J0FB", "iclr_2020_HygXkJHtvB", "HJlP-ZkhFr", "iclr_2020_HygXkJHtvB", "SJgulr4KsH", "iclr_2020_HygXkJHtvB", "iclr_2020_HygXkJHtvB", "iclr_2020_HygXkJHtvB" ]
iclr_2020_H1lQJ1HYwS
Deep amortized clustering
We propose \textit{deep amortized clustering} (DAC), a neural architecture which learns to cluster datasets efficiently using a few forward passes. DAC implicitly learns what makes a cluster, how to group data points into clusters, and how to count the number of clusters in datasets. DAC is meta-learned using labelled datasets for training, a process distinct from traditional clustering algorithms, which usually require hand-specified prior knowledge about cluster shapes/structures. We empirically show, on both synthetic and image data, that DAC can efficiently and accurately cluster new datasets coming from the same distribution used to generate training datasets.
reject
This paper introduces a new clustering method, which builds upon the work introduced by Lee et al., 2019 - contextual information across different dataset samples is gathered with a transformer, and then used to predict the cluster label for a given sample. All reviewers agree the writing should be improved and clarified. The novelty is also on the low side, given the previous work by Lee et al. Experiments should be more convincing.
train
[ "S1gUbYntiB", "H1gs9okQor", "Ske6diyQsB", "BklQVikXoS", "Syx3iqyXor", "r1gwMlFp_S", "HygfUNzxcB", "HkersYSG9B" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The fragmented clusters look really bad considering that these clusters themselves are so well separated but still authors argue that splitting these tight clusters should not be called failure. I do not agree with this claim.\n\nTraditional learning algorithm, of course, is all based on the assumptuion that raini...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 1, 4, 4 ]
[ "H1gs9okQor", "r1gwMlFp_S", "HygfUNzxcB", "HkersYSG9B", "iclr_2020_H1lQJ1HYwS", "iclr_2020_H1lQJ1HYwS", "iclr_2020_H1lQJ1HYwS", "iclr_2020_H1lQJ1HYwS" ]
iclr_2020_r1eX1yrKwB
Distribution Matching Prototypical Network for Unsupervised Domain Adaptation
State-of-the-art Unsupervised Domain Adaptation (UDA) methods learn transferable features by minimizing the feature distribution discrepancy between the source and target domains. Different from these methods, which do not model the feature distributions explicitly, in this paper we explore explicit feature distribution modeling for UDA. In particular, we propose Distribution Matching Prototypical Network (DMPN) to model the deep features from each domain as Gaussian mixture distributions. With explicit feature distribution modeling, we can easily measure the discrepancy between the two domains. In DMPN, we propose two new domain discrepancy losses with probabilistic interpretations. The first one minimizes the distances between the corresponding Gaussian component means of the source and target data. The second one minimizes the pseudo negative log likelihood of generating the target features from the source feature distribution. To learn both discriminative and domain-invariant features, DMPN is trained by minimizing the classification loss on the labeled source data and the domain discrepancy losses together. Extensive experiments are conducted over two UDA tasks. Our approach outperforms state-of-the-art approaches by a large margin on the Digits Image transfer task. More remarkably, DMPN obtains a mean accuracy of 81.4% on the VisDA 2017 dataset. The hyper-parameter sensitivity analysis shows that our approach is robust w.r.t. hyper-parameter changes.
reject
This paper addresses the problem of unsupervised domain adaptation and proposes explicit modeling of the source and target feature distributions to aid in cross-domain alignment. The reviewers all recommended rejection of this work. Though they all understood the paper’s position of explicit feature distribution modeling, there was a lack of understanding as to why this explicit modeling should be superior to the common implicit modeling done in the related literature. As some reviewers raised the concern that the empirical performance of the proposed approach was only marginally better than that of competing methods, this experimental evidence alone was not sufficient justification for the explicit modeling. There was also a secondary concern about whether the two proposed loss functions were simultaneously necessary. Overall, after reading the reviewers' and authors' comments, the AC recommends this paper not be accepted.
train
[ "SklF3jHIir", "BJxtpvB8sB", "r1xyE5pzsB", "BJlDLqTMsH", "B1xu9qHGor", "BkxKqPpbiS", "rJxzTD6bor", "rJgKuw6ZsS", "BJeJePTbiS", "S1lOfD6bir", "HJenFJCaFB", "SygAfV3gqH", "BJldN2FWqH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "7) a) \"when L1-distance between two distributions is small, H-delta-H divergence is small\" is proved in [R1], but \"minimizing GCMM loss reduces the L1-distance\" and \"minimizing PDM loss also reduces the L1-distance\" are not mathematically proved in this paper. These proofs are necessary to obtain the bound o...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 1, 5 ]
[ "r1xyE5pzsB", "BJlDLqTMsH", "B1xu9qHGor", "B1xu9qHGor", "rJxzTD6bor", "BJldN2FWqH", "HJenFJCaFB", "BJldN2FWqH", "SygAfV3gqH", "SygAfV3gqH", "iclr_2020_r1eX1yrKwB", "iclr_2020_r1eX1yrKwB", "iclr_2020_r1eX1yrKwB" ]
iclr_2020_Hyg4kkHKwH
V1Net: A computational model of cortical horizontal connections
The primate visual system builds robust, multi-purpose representations of the external world in order to support several diverse downstream cortical processes. Such representations are required to be invariant to the sensory inconsistencies caused by dynamically varying lighting, local texture distortion, etc. A key architectural feature combating such environmental irregularities is ‘long-range horizontal connections’ that aid the perception of the global form of objects. In this work, we explore the introduction of such horizontal connections into standard deep convolutional networks; we present V1Net -- a novel convolutional-recurrent unit that models linear and nonlinear horizontal inhibitory and excitatory connections inspired by primate visual cortical connectivity. We introduce the Texturized Challenge -- a new benchmark to evaluate object recognition performance under perceptual noise -- which we use to evaluate V1Net against an array of carefully selected control models with/without recurrent processing. Additionally, we present results from an ablation study of V1Net demonstrating the utility of diverse neurally inspired horizontal connections for state-of-the-art AI systems on the task of object boundary detection from natural images. We also present the emergence of several biologically plausible horizontal connectivity patterns, namely center-on surround-off, association fields and border-ownership connectivity patterns in a V1Net model trained to perform boundary detection on natural images from the Berkeley Segmentation Dataset 500 (BSDS500). Our findings suggest an increased representational similarity between V1Net and biological visual systems, and highlight the importance of neurally inspired recurrent contextual processing principles for learning visual representations that are robust to perceptual noise and furthering the state-of-the-art in computer vision.
reject
The paper proposes a neurally inspired model that is a variant of conv-LSTM called V1Net. The reviewers had trouble gleaning the main contributions of the work. Given that it is hard to obtain state-of-the-art results in neurally inspired architectures, the bar is much higher to demonstrate that there is value in pursuing these architectures. There are not enough convincing results in the paper to show this. I recommend rejection.
train
[ "HkeRt1ihiH", "S1eQOp93ir", "ryeD3Mo3jB", "B1gvZbERKH", "SylfsS40Fr", "HJgoWw8U9r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank you for taking the time in carefully reviewing our submission. We shall address your concerns with regard to our work below:\n\n(1) Reading clarity issues: Thanks for pointing out the clarity issues in our paper, we elaborate more on terms such as the one mentioned in your review in order to ease the pape...
[ -1, -1, -1, 1, 1, 3 ]
[ -1, -1, -1, 5, 3, 5 ]
[ "SylfsS40Fr", "HJgoWw8U9r", "B1gvZbERKH", "iclr_2020_Hyg4kkHKwH", "iclr_2020_Hyg4kkHKwH", "iclr_2020_Hyg4kkHKwH" ]
iclr_2020_BJe4JJBYwS
CROSS-DOMAIN CASCADED DEEP TRANSLATION
In recent years we have witnessed tremendous progress in unpaired image-to-image translation methods, propelled by the emergence of DNNs and adversarial training strategies. However, most existing methods focus on transfer of style and appearance, rather than on shape translation. The latter task is challenging, due to its intricate non-local nature, which calls for additional supervision. We mitigate this by descending the deep layers of a pre-trained network, where the deep features contain more semantics, and applying the translation between these deep features. Specifically, we leverage VGG, which is a classification network, pre-trained with large-scale semantic supervision. Our translation is performed in a cascaded, deep-to-shallow, fashion, along the deep feature hierarchy: we first translate between the deepest layers that encode the higher-level semantic content of the image, proceeding to translate the shallower layers, conditioned on the deeper ones. We show that our method is able to translate between different domains, which exhibit significantly different shapes. We evaluate our method both qualitatively and quantitatively and compare it to state-of-the-art image-to-image translation methods. Our code and trained models will be made available.
reject
The paper addresses image translation by extending prior models, e.g. CycleGAN, to domain pairs that have significantly different shape variations. The main technical idea is to apply the translation directly on the deep feature maps (instead of on the pixel level). While acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns: (1) ill-posed formulation of the problem and what is desirable, (2) using fine-tuned/pre-trained VGG features, (3) computational cost of the proposed approach, i.e. training a cascade of pairs of translators (one pair per layer). The AC can confirm that all three reviewers have read the author responses. The AC suggests that, in its current state, the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper.
train
[ "ryejuILOjH", "HJgmd1aGoH", "SygzVJpMiS", "HJxGNlaGjS", "BklJnxazsr", "rJljMjEctH", "Skl0SYi6Fr", "BJgbvGF-cH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers,\nWe implemented several of your suggestions and comments, in a revised version of the paper (uploaded), and plan to continue to address the others.\n\nWe briefly summarize the changes in this revised version:\n-  Figure 1 caption explained better, as well as modifications to Figure 3 caption.\n-  E...
[ -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2020_BJe4JJBYwS", "SygzVJpMiS", "BJgbvGF-cH", "Skl0SYi6Fr", "rJljMjEctH", "iclr_2020_BJe4JJBYwS", "iclr_2020_BJe4JJBYwS", "iclr_2020_BJe4JJBYwS" ]
iclr_2020_B1erJJrYPH
Optimizing Loss Landscape Connectivity via Neuron Alignment
The loss landscapes of deep neural networks are poorly understood due to their high nonconvexity. Empirically, the local optima of these loss functions can be connected by a simple curve in model space, along which the loss remains fairly constant. Yet, current path finding algorithms do not consider the influence of symmetry in the loss surface caused by weight permutations of the networks corresponding to the minima. We propose a framework to investigate the effect of symmetry on the landscape connectivity by directly optimizing the weight permutations of the networks being connected. Through utilizing an existing neuron alignment technique, we derive an initialization for the weight permutations. Empirically, this initialization is critical for efficiently learning a simple, planar, low-loss curve between networks that successfully generalizes. Additionally, we introduce a proximal alternating minimization scheme to address whether an optimal permutation can be learned, with some provable convergence guarantees. We find that the learned parameterized curve is still a low-loss curve after permuting the weights of the endpoint models, for a subset of permutations. We also show that there is a small but steady gain in the performance of the ensembles constructed from the learned curve, when considering weight space symmetry.
reject
This paper studies the loss landscape of neural networks by taking into consideration the symmetries arising from the parametrisation. Specifically, given two models $\theta_1$, $\theta_2$, it attempts to connect $\theta_1$ with the equivalence class of $\theta_2$ generated by weight permutations. Reviewers found several strengths in this work, from its intuitive and simple idea to the quality of the experimental setup. However, they also found important shortcomings in the current manuscript, chief among them the lack of significance of the results. As a result, this paper unfortunately cannot be accepted in its current form. The chairs encourage the authors to revise their work by taking the reviewer feedback into consideration.
train
[ "H1e-Yoqhor", "Hkl1Cp92sH", "rJgY7pchjS", "H1ev9n9hoS", "BJxF-hchiH", "r1gr0cc3jr", "SJeZKdchKH", "SylrU09ptS", "BJxTPBxM9S" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We address the motivation of this work in the second and third paragraphs of Introduction. The study of finding optimal curves between models, also known as mode connectivity, has been of recent interest in the deep learning community (Freeman & Bruna (2016); Garipov et al. (2018); Gotmare et al. (2018)). The par...
[ -1, -1, -1, -1, -1, -1, 1, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2020_B1erJJrYPH", "SylrU09ptS", "BJxTPBxM9S", "iclr_2020_B1erJJrYPH", "iclr_2020_B1erJJrYPH", "SJeZKdchKH", "iclr_2020_B1erJJrYPH", "iclr_2020_B1erJJrYPH", "iclr_2020_B1erJJrYPH" ]
iclr_2020_rJeBJJBYDB
Chart Auto-Encoders for Manifold Structured Data
Auto-encoding and generative models have made tremendous successes in image and signal representation learning and generation. These models, however, generally employ the full Euclidean space or a bounded subset (such as $[0,1]^l$) as the latent space, whose trivial geometry is often too simplistic to meaningfully reflect the structure of the data. This paper aims at exploring a nontrivial geometric structure of the latent space for better data representation. Inspired by differential geometry, we propose \textbf{Chart Auto-Encoder (CAE)}, which captures the manifold structure of the data with multiple charts and transition functions among them. CAE translates the mathematical definition of manifold through parameterizing the entire data set as a collection of overlapping charts, creating local latent representations. These representations are an enhancement of the single-charted latent space commonly employed in auto-encoding models, as they reflect the intrinsic structure of the manifold. Therefore, CAE achieves a more accurate approximation of data and generates realistic new ones. We conduct experiments with synthetic and real-life data to demonstrate the effectiveness of the proposed CAE.
reject
This paper proposes to use more varied geometric structures of latent spaces to capture the manifold structure of the data, and provides experiments with synthetic and real data that show some promise in terms of approximating manifolds. While reviewers appreciate the motivation behind the paper and see that angle as potentially resulting in a strong paper in the future, they have concerns that the method is too complicated, that the experimental results are not fully convincing that the proposed method is useful, and that there are not enough ablation studies. The authors provided some additional results and clarified explanations in their revisions, but reviewers still believe more work is required to deliver a submission warranting acceptance in terms of justifying the complicated architecture experimentally. Therefore, we do not recommend acceptance.
train
[ "HylUDv92oH", "Skle4vq3sH", "r1lNLU5hsS", "SkeJBH92sr", "Hye6AV9hor", "B1xTQ492oH", "BkeUkXq2jS", "HJeesHDodr", "SyefiJ_uYB", "HJeMrVc3Kr" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "g) is this paper to be seen as an implementation of Chen et al. (2019)? \n \n No, This is not an implementation of Chen et al 2019. Their paper deals with using a network to represent a function on a known, fixed manifold, while we are concerned with capturing the manifold. \n\nh) the concept of intrinsic dim...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "SyefiJ_uYB", "SyefiJ_uYB", "SyefiJ_uYB", "HJeesHDodr", "HJeMrVc3Kr", "HJeMrVc3Kr", "iclr_2020_rJeBJJBYDB", "iclr_2020_rJeBJJBYDB", "iclr_2020_rJeBJJBYDB", "iclr_2020_rJeBJJBYDB" ]
iclr_2020_SJxIkkSKwB
Learning in Confusion: Batch Active Learning with Noisy Oracle
We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles. We specifically consider the setting of batch active learning, in which multiple samples are selected as opposed to a single sample as in classical settings, so as to reduce the training overhead. Our approach bridges between uniform randomness and score-based importance sampling of clusters when selecting a batch of new samples. Experiments on benchmark image classification datasets (MNIST, SVHN, and CIFAR10) show improvement over existing active learning strategies. We introduce an extra denoising layer to deep networks to make active learning robust to label noise and show significant improvements.
reject
This paper proposes a new active learning algorithm based on clustering and then sampling based on an uncertainty-based metric. This active learning method is not particular to deep learning. The authors also propose a new de-noising layer specific to deep learning to remove noise from possibly noisy labels that are provided. These two proposals are orthogonal to one another, and it's not clear why they appear in the same paper. Reviewers were underwhelmed by the novelty of either contribution. With respect to active learning, there are years of work on first performing unsupervised learning (e.g., clustering) and then applying different forms of active sampling. This work lacks sufficient novelty for acceptance at a top-tier venue. Reject
train
[ "H1lbEu1e5r", "HkxpAZB5ir", "BJxqTlrcsH", "Byxs_eSciH", "S1lIMxH5oH", "Skgso79atr", "rJlOwYVRtB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper proposes an uncertainty-based method for batch-mode active learning with/without noisy oracles which uses importance sampling scores of clusters as the querying strategy. Authors evaluate their method on MNIST, CIFAR10, and SVHN against approaches such as Core-set, BALD, entropy, and random samp...
[ 1, -1, -1, -1, -1, 1, 6 ]
[ 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_SJxIkkSKwB", "iclr_2020_SJxIkkSKwB", "Skgso79atr", "rJlOwYVRtB", "H1lbEu1e5r", "iclr_2020_SJxIkkSKwB", "iclr_2020_SJxIkkSKwB" ]
iclr_2020_S1lukyrKPr
LEX-GAN: Layered Explainable Rumor Detector Based on Generative Adversarial Networks
Social media have emerged to be increasingly popular and have been used as tools for gathering and propagating information. However, the vigorous growth of social media contributes to fast-spreading and far-reaching rumors. Rumor detection has become a necessary defense. Traditional rumor detection methods based on hand-crafted feature selection are replaced by automatic approaches that are based on Artificial Intelligence (AI). AI decision making systems need to have the necessary means, such as explainability, to assure users of their trustworthiness. Inspired by the thriving development of Generative Adversarial Networks (GANs) on text applications, we propose LEX-GAN, a GAN-based layered explainable rumor detector to improve the detection quality and provide explainability. Unlike fake news detection, which needs a previously collected verified news database, LEX-GAN realizes explainable rumor detection based on only tweet-level text. LEX-GAN is trained with generated non-rumor-looking rumors. The generators produce rumors by intelligently inserting controversial information in non-rumors, and force the discriminators to detect detailed glitches and deduce exactly which parts in the sentence are problematic. The layered structures in both the generative and discriminative models contribute to the high performance. We show LEX-GAN's mutation detection ability in textual sequences by performing a gene classification and mutation detection task.
reject
The paper is well-written and presents an extensive set of experiments. The architecture is a simple yet interesting attempt at learning explainable rumour detection models. Some reviewers worry about the novelty of the approach, and whether the explainability of the model is in fact properly evaluated. The authors responded to the reviews and provided detailed feedback. A major limitation of this work is that explanations are at the level of input words. This is common in interpretability (LIME, etc), but it is not clear that explanations/interpretations are best provided at this level and not, say, at the level of training instances or at a more abstract level. It is also not clear that this approach would scale to languages that are morphologically rich and/or harder to segment into words. Since modern approaches to this problem would likely include pretrained language models, it is an interesting problem to make such architectures interpretable.
train
[ "H1xAgLJnjB", "rygTTxJ5iS", "r1etElcLor", "Bkelbk98iS", "SJgKpAFUsB", "HyxBFRF8oH", "rkeErRYIsB", "S1lo-RKLjH", "Ske9ubVNYS", "SkxQro2BqH", "ryeF57Xc9r", "BygbyqPT5r" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the reviewer's effort and time and giving us the opportunity to clarify the novelty of our manuscript. We would like to clarify that the novelty of our manuscript is not about a mathematical formula breakthrough, but rather about a framework we designed for extracting information from text without co...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 8, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "rygTTxJ5iS", "BygbyqPT5r", "Ske9ubVNYS", "SJgKpAFUsB", "SkxQro2BqH", "ryeF57Xc9r", "S1lo-RKLjH", "BygbyqPT5r", "iclr_2020_S1lukyrKPr", "iclr_2020_S1lukyrKPr", "iclr_2020_S1lukyrKPr", "iclr_2020_S1lukyrKPr" ]
iclr_2020_HyeYJ1SKDH
FLUID FLOW MASS TRANSPORT FOR GENERATIVE NETWORKS
Generative Adversarial Networks have been shown to be powerful tools for generating content, resulting in them being intensively studied in recent years. Training these networks requires maximizing a generator loss and minimizing a discriminator loss, leading to a difficult saddle point problem that is slow to converge and difficult to solve. Motivated by techniques in the registration of point clouds and the fluid flow formulation of mass transport, we investigate a new formulation that is based on strict minimization, without the need for the maximization. This formulation views the problem as a matching problem rather than an adversarial one, and thus allows us to quickly converge and obtain meaningful metrics in the optimization path.
reject
The submission is concerned with providing a transport-based formulation for generative modeling in order to avoid the standard max/min optimization challenge of GANs. The authors propose representing the divergence with a fluid flow model, the solution of which can be found by discretizing the space, resulting in an alignment of high-dimensional point clouds. The reviewers disagreed about the novelty and clarity of the work, but they did agree that the empirical and theoretical support was lacking, and that the paper could be substantially improved through better validation and better results - in particular, the approach struggles with MNIST digit generation compared to other methods. The recommendation is to not accept the submission at this time.
train
[ "rklblJ7LKr", "B1gXbw_nFB", "SJxhj397cB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a generative network based on fluid flow solutions of mass transport problems. The paper is difficult to follow due to a poor structure and obvious technical mistakes. Detailed comments are as follows:\n\n1. The dual formulation used in the objective of WGANS involves expectations with respect ...
[ 1, 3, 3 ]
[ 4, 4, 1 ]
[ "iclr_2020_HyeYJ1SKDH", "iclr_2020_HyeYJ1SKDH", "iclr_2020_HyeYJ1SKDH" ]
iclr_2020_Syxc1yrKvr
Implicit λ-Jeffreys Autoencoders: Taking the Best of Both Worlds
We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN). It is known that GAN can produce very realistic samples, while VAE does not suffer from the mode collapse problem. Our model optimizes the λ-Jeffreys divergence between the model distribution and the true data distribution. We show that it takes the best properties of the VAE and GAN objectives. It consists of two parts. One of these parts can be optimized by using the standard adversarial training, and the second one is the very objective of the VAE model. However, the straightforward way of substituting the VAE loss does not work well if we use an explicit likelihood such as Gaussian or Laplace, which have limited flexibility in high dimensions and are unnatural for modelling images in the space of pixels. To tackle this problem we propose a novel approach to train the VAE model with an implicit likelihood by an adversarially trained discriminator. In an extensive set of experiments on the CIFAR-10 and TinyImageNet datasets, we show that our model achieves state-of-the-art generation and reconstruction quality and demonstrate how we can balance between the mode-seeking and mode-covering behaviour of our model by adjusting the weight λ in our objective.
reject
The paper received Weak Reject scores from all three reviewers. The AC has read the reviews and lengthy discussions and examined the paper. AC feels that there is a consensus that the paper does not quite meet the acceptance threshold and thus cannot be accepted. Hopefully the authors can use the feedback to improve their paper and resubmit to another venue.
train
[ "rJetsmv2oS", "rJeKIIljor", "r1gwbLeooH", "r1ejPHesoS", "H1lrdEeosS", "SJgy_5i_OH", "SygnRBCatS", "ryeRzOnX9S" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I just wanted to acknowledge that I have read your response and I will consider it in making a final recommendation for this paper. \n\nThank you! ", "> Existing ablation studies are a bit of a straw-man: the paper compares changing r(y|x) by standard Gaussian or Laplace. However, we know that a large variance d...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "H1lrdEeosS", "r1gwbLeooH", "SJgy_5i_OH", "SygnRBCatS", "ryeRzOnX9S", "iclr_2020_Syxc1yrKvr", "iclr_2020_Syxc1yrKvr", "iclr_2020_Syxc1yrKvr" ]
iclr_2020_SklcyJBtvB
Off-policy Bandits with Deficient Support
Off-policy training of contextual-bandit policies is attractive in online systems (e.g. search, recommendation, ad placement), since it enables the reuse of large amounts of log data from the production system. State-of-the-art methods for off-policy learning, however, are based on inverse propensity score (IPS) weighting, which requires that the logging policy chooses all actions with non-zero probability for any context (i.e., full support). In real-world systems, this condition is often violated, and we show that existing off-policy learning methods based on IPS weighting can fail catastrophically. We therefore develop new off-policy contextual-bandit methods that can controllably and robustly learn even when the logging policy has deficient support. To this effect, we explore three approaches that provide various guarantees for safe learning despite the inherent limitations of support deficient data: restricting the action space, reward extrapolation, and restricting the policy space. We analyze the statistical and computational properties of these three approaches, and empirically evaluate their effectiveness in a series of experiments. We find that controlling the policy space is both computationally efficient and that it robustly leads to accurate policies.
reject
This paper tackles the problem of learning off-policy in the contextual bandit problem, more specifically when the available data is deficient (in the sense that it does not allow to build reasonable counterfactual estimators). To address this, the authors introduce three strategies: 1) restricting the action space; 2) imputing missing rewards when lacking data; 3) restricting the policy space to policies with "enough" data. All three approaches are analyzed (statistical and computational properties) and evaluated empirically. Restricting the policy space appears to be particularly effective in practice. Although the problem being solved is very relevant, it is not clear how this work is positioned with respect to approaches solving similar problems in RL. For example, Batch constrained Q-learning ([1]) restricts action space, while Bootstrapping Error Accumulation ([2]) and SPIBB ([3]) restrict the policy class in batch RL. A comparison with these techniques in the contextual bandit settings, in addition to recent state-of-the-art off-policy bandit approaches (Liu et al. (2019), Xie et al. (2019)) is lacking. Moreover, given the newly added results (DR method by Tang et al. (2019)), it is not clear how the proposed approach improves over existing techniques. This should be clarified. I therefore recommend to reject this paper.
train
[ "S1elCk_3iB", "Bkla7_u6KB", "rkgWKUb5oB", "ByxCFQAKsS", "SyguLBRFsS", "r1GLV4Ctjr", "BygT4d5oYr", "H1glUK7pFr" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for the response. On the narrow question of DR vs. policy restriction, we do maintain that policy restriction is preferable, since it does not require training and optimizing a regression model.\n\nStepping back and taking a broader view, the main contribution of the paper is not any particular method. I...
[ -1, 6, -1, -1, -1, -1, 3, 3 ]
[ -1, 3, -1, -1, -1, -1, 4, 3 ]
[ "rkgWKUb5oB", "iclr_2020_SklcyJBtvB", "r1GLV4Ctjr", "Bkla7_u6KB", "BygT4d5oYr", "H1glUK7pFr", "iclr_2020_SklcyJBtvB", "iclr_2020_SklcyJBtvB" ]
iclr_2020_B1eiJyrtDB
Improved Generalization Bound of Permutation Invariant Deep Neural Networks
We theoretically prove that the permutation invariance property of deep neural networks largely improves their generalization performance. Learning problems with data that are invariant to permutations are frequently observed in various applications, for example, point cloud data and graph neural networks. Numerous methodologies have been developed and they achieve great performance; however, understanding the mechanism behind this performance is still an open problem. In this paper, we derive a theoretical generalization bound for invariant deep neural networks with a ReLU activation to clarify their mechanism. Consequently, our bound shows that the main term of their generalization gap is improved by a factor of n!, where n is the number of permuted coordinates of the data. Moreover, we prove that the approximation power of invariant deep neural networks can achieve the optimal rate, even though the networks are restricted to be invariant. To achieve these results, we develop several new proof techniques, such as a correspondence with a fundamental domain and a scale-sensitive metric entropy.
reject
This work proves a generalization bound for permutation invariant neural networks (with ReLU activations). While it appears the proof is technically sound and the exact result is novel, reviewers did not feel that the proof significantly improves our understanding of model generalization relative to prior work. Because of this, the work is too incremental in its current form.
train
[ "rJx0u1p19B", "HklGNpG2jH", "BJgTNhz3ir", "rkxB0sz2iH", "Hkl4hU95jr", "ryxk9laYjH", "Hyg8wsedor", "rkeUodgdsr", "BygmiLgujB", "BkeJKjc0Kr", "HklWqgGrcr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper provides generalization bounds for permutation invariant neural networks where the learning problem is invariant to the permutation of input data. \n\nUnfortunately, the technical value of the content and its novelty is very limited since the proof reduces to a very basic argument that counts invariance...
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_B1eiJyrtDB", "HklWqgGrcr", "BkeJKjc0Kr", "iclr_2020_B1eiJyrtDB", "ryxk9laYjH", "rkeUodgdsr", "BkeJKjc0Kr", "rJx0u1p19B", "HklWqgGrcr", "iclr_2020_B1eiJyrtDB", "iclr_2020_B1eiJyrtDB" ]
iclr_2020_rJxok1BYPr
Black Box Recursive Translations for Molecular Optimization
Machine learning algorithms for generating molecular structures offer a promising new approach to drug discovery. We cast molecular optimization as a translation problem, where the goal is to map an input compound to a target compound with improved biochemical properties. Remarkably, we observe that when generated molecules are iteratively fed back into the translator, molecular compound attributes improve with each step. We show that this finding is invariant to the choice of translation model, making this a "black box" algorithm. We call this method Black Box Recursive Translation (BBRT), a new inference method for molecular property optimization. This simple, powerful technique operates strictly on the inputs and outputs of any translation model. We obtain new state-of-the-art results for molecular property optimization tasks using our simple drop-in replacement with well-known sequence and graph-based models. Our method provides a significant boost in performance relative to its non-recursive peers with just a simple "for" loop. Further, BBRT is highly interpretable, allowing users to map the evolution of newly discovered compounds from known starting points.
reject
This paper presents a simple method for improving molecular optimization with a learned model. The method operates by repeatedly feeding generated molecules back through an encoder-decoder pair trained to maximize a desired property. Reviewers liked the simplicity of the method and found it interesting, but ultimately there were concerns about the metrics used to evaluate the method. Reviewers 3 and 4 both noted issues with the log P (and penalized log P) metric, pointing out that it is possible to artificially increase both metrics in a way that isn't useful in practice. During the discussion phase, Reviewer 4 constructed a specific example where simply adding long carbon chains to a molecule would yield a linear increase in the penalized log P metric, and noted that the "best molecules" found by the method in Figure 3 also have extremely long carbon chains (long carbon chains are not generally desirable for drug discovery). I recommend the authors resubmit after finding a better way to demonstrate that their method generates molecules with more useful properties for drug discovery.
train
[ "r1liUowJtB", "SJgCp1Oiir", "H1xQJmOsoB", "Bke66M_oor", "ByliaWOsjH", "HJxiRgOiiB", "HkxY-Twior", "r1exzxujor", "HyxysRDssS", "BylkRpDssS", "H1eRIAw2YB", "ry1VC8LqB", "Skl3e32vcB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors frame molecule optimization as a sequence-to-sequence problem where a source molecule is translated to a target molecule with improved properties. The authors extend existing methods for improving molecules by applying them recursively over multiple rounds, and show that it is beneficial for optimizing...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "iclr_2020_rJxok1BYPr", "Skl3e32vcB", "ry1VC8LqB", "ry1VC8LqB", "ry1VC8LqB", "H1eRIAw2YB", "r1liUowJtB", "Skl3e32vcB", "r1liUowJtB", "r1liUowJtB", "iclr_2020_rJxok1BYPr", "iclr_2020_rJxok1BYPr", "iclr_2020_rJxok1BYPr" ]
iclr_2020_Skeh1krtvH
WaveFlow: A Compact Flow-based Model for Raw Audio
In this work, we present WaveFlow, a small-footprint generative flow for raw audio, which is trained with maximum likelihood without the complicated density distillation and auxiliary losses used in Parallel WaveNet. It provides a unified view of flow-based models for raw audio, including autoregressive flow (e.g., WaveNet) and bipartite flow (e.g., WaveGlow) as special cases. We systematically study these likelihood-based generative models for raw waveforms in terms of test likelihood and speech fidelity. We demonstrate that WaveFlow can synthesize high-fidelity speech and obtain likelihood comparable to WaveNet, while only requiring a few sequential steps to generate very long waveforms. In particular, our small-footprint WaveFlow has only 5.91M parameters and can generate 22.05kHz speech 15.39 times faster than real-time on a GPU without customized inference kernels.
reject
The paper presented a unified framework for constructing likelihood-based generative models for raw audio. It demonstrated tradeoffs between memory footprint, generation speed and audio fidelity. The experimental justification with objective likelihood scores and subjective mean opinion scores matches standard baselines. The main concern with this paper is the novelty and depth of the analysis. It could be much stronger with a thorough analysis of the benefits and limitations of the unified approach and more insights on how to improve the model.
train
[ "rylJ0A96Yr", "HyesGurosr", "BklOXBLior", "HJxCxI95sr", "SkeuG_BAYB", "SJloae9gcS", "BygtvEDMYH", "S1eUdXwzFB", "rkxW1GDftH", "BJlQPXYZ_B", "HkgjptwsvS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "public" ]
[ "\n## Updated review\n\nI have read the rebuttal. The new version of the paper is definitely clearer, especially the contribution section and the experimental results. The new version addresses all my concerns, hence I am upgrading my rating to Accept.\n\n## Original review\n\nThis paper presents the WaveGlow model...
[ 8, -1, -1, -1, 6, 3, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Skeh1krtvH", "rylJ0A96Yr", "SJloae9gcS", "SkeuG_BAYB", "iclr_2020_Skeh1krtvH", "iclr_2020_Skeh1krtvH", "BJlQPXYZ_B", "HkgjptwsvS", "iclr_2020_Skeh1krtvH", "iclr_2020_Skeh1krtvH", "iclr_2020_Skeh1krtvH" ]
iclr_2020_BylTy1HFDS
Deep unsupervised feature selection
Unsupervised feature selection involves finding a small number of highly informative features, in the absence of a specific supervised learning task. Selecting a small number of features is an important problem in many scientific domains with high-dimensional observations. Here, we propose the restricted autoencoder (RAE) framework for selecting features that can accurately reconstruct the rest of the features. We justify our approach through a novel proof that the reconstruction ability of a set of features bounds its performance in downstream supervised learning tasks. Based on this theory, we present a learning algorithm for RAEs that iteratively eliminates features using learned per-feature corruption rates. We apply the RAE framework to two high-dimensional biological datasets—single cell RNA sequencing and microarray gene expression data, which pose important problems in cell biology and precision medicine—and demonstrate that RAEs outperform nine baseline methods, often by a large margin.
reject
This paper proposes Restricted AutoEncoders (RAEs) for unsupervised feature selection, and applies and evaluates them on applications in biology. The paper was reviewed by three experts. R1 recommends Weak Reject, identifying some specific technical concerns as well as questions about missing and unclear experimental details. R2 recommends Reject, with concerns about limited novelty and unconvincing experimental results. R3 recommends Weak Accept, saying that the overall idea is good, but also feels the contribution is "severely undermined" by a recently-published paper that proposes a very similar approach. Given that that paper (at ECMLPKDD 2019) was presented just one week before the deadline for ICLR, we would not have expected the authors to cite the paper. Nevertheless, given the concerns expressed by the other reviewers and the lack of an author response to help clarify the novelty, technical concerns, and missing details, we are not able to recommend acceptance. We believe the paper does have significant merit and hope that the reviewer comments will help the authors in preparing a revision for another venue.
train
[ "ByglajI6FS", "SkeKMsaJqH", "SyejVTsd9S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Authors of this paper propose the restricted autoencoder (RAE) framework for selecting features that can accurately reconstruct the rest of features. Authors justify the proposed method via the proof that the reconstruction ability of a set of features bounds its performance in downstream supervised learning tasks...
[ 3, 1, 6 ]
[ 4, 5, 5 ]
[ "iclr_2020_BylTy1HFDS", "iclr_2020_BylTy1HFDS", "iclr_2020_BylTy1HFDS" ]
iclr_2020_BJeRykBKDH
Empowering Graph Representation Learning with Paired Training and Graph Co-Attention
Through many recent advances in graph representation learning, performance achieved on tasks involving graph-structured data has substantially increased in recent years---mostly on tasks involving node-level predictions. The setup of prediction tasks over entire graphs (such as property prediction for a molecule, or side-effect prediction for a drug), however, proves to be more challenging, as the algorithm must combine evidence about several structurally relevant patches of the graph into a single prediction. Most prior work attempts to predict these graph-level properties while considering only one graph at a time---not allowing the learner to directly leverage structural similarities and motifs across graphs. Here we propose a setup in which a graph neural network receives pairs of graphs at once, and extend it with a co-attentional layer that allows node representations to easily exchange structural information across them. We first show that such a setup provides natural benefits on a pairwise graph classification task (drug-drug interaction prediction), and then expand to a more generic graph regression setup: enhancing predictions over QM9, a standard molecular prediction benchmark. Our setup is flexible, powerful and makes no assumptions about the underlying dataset properties, beyond anticipating the existence of multiple training graphs.
reject
The paper proposes combining paired training with co-attention. The reviewers remarked that the paper is well written and that the experiments provide some new insights into this combination. Initially, some additional experiments were proposed, which were addressed by the authors in the rebuttal and the new version of the paper. However, ICLR is becoming a very competitive conference where novelty is an important criterion for acceptance, and unfortunately the paper was considered to lack the novelty needed to be presented at ICLR.
train
[ "BkeBspq3sH", "H1lNy0q3sr", "rylL76qnsS", "Hkg6hDIotB", "B1xagNeTFH", "rkgsaIwdqH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to start by thanking the reviewer for their highly constructive and useful comments.\n\nInitially, we would like to address your comment on the technical novelty: our proposal concerns the combination of paired training with co-attention, which seeks to generically exploit and match similarities betw...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 5, 3, 4 ]
[ "B1xagNeTFH", "Hkg6hDIotB", "rkgsaIwdqH", "iclr_2020_BJeRykBKDH", "iclr_2020_BJeRykBKDH", "iclr_2020_BJeRykBKDH" ]
iclr_2020_HklCk1BtwS
Word embedding re-examined: is the symmetrical factorization optimal?
As observed in previous works, many word embedding methods exhibit two interesting properties: (1) words having similar semantic meanings are embedded closely; (2) analogy structure exists in the embedding space, such that "\emph{Paris} is to \emph{France} as \emph{Berlin} is to \emph{Germany}". We theoretically analyze the inner mechanism leading to these nice properties. Specifically, the embedding can be viewed as a linear transformation from the word-context co-occurrence space to the embedding space. We reveal how the relative distances between nodes change during this transforming process. Such a linear transformation results in these good properties. Based on the analysis, we also answer the question of whether the symmetrical factorization (e.g., \texttt{word2vec}) is better than the traditional SVD method. We propose a method to improve the embedding further. Experiments on real datasets verify our analysis.
reject
The paper studies word embeddings using the matrix factorization framework introduced by Levy et al. (2015). The authors provide a theoretical explanation for how the hyperparameter alpha controls the distance between words in the embedding, and a method to estimate the optimal alpha. The authors also provide experiments showing the alpha found using their method is close to the alpha that gives the highest performance on the word-similarity task on several datasets. The paper received 2 weak rejects and 1 weak accept. The reviews were unchanged after the rebuttal, with even the reviewer who gave weak accept (R2) indicating that they felt the submission was of low quality. Initially, reviewers commented that while the work seemed solid and provided insights into the problem of learning word embeddings, the paper needed to improve its positioning with respect to prior work on word embeddings and add missing citations. In the revision, the authors improved the related work, but removed the conclusion. The current version of the paper is still of low quality and has the following issues: 1. The paper exposition still needs improvement and it would benefit from another review pass. Following R3's suggestions, the authors have made various improvements to the paper, including modifying the terminology and contextualizing the work. However, as R3 suggests, the paper still needs more rewriting to clearly articulate the contribution and how it relates to prior work throughout the paper. In addition, the conclusion was removed and the paper still needs an editing pass as there are still many language/grammar issues. Page 5: "inherites" -> "inherits"; Page 5: "top knn" -> "top k". 2. More experimental evaluation is needed. For instance, R1 suggested that the authors perform additional experiments on other tasks (e.g. NER, POS Tagging). The authors indicated that this was not a focus of their work as other works have already looked at the impact of alpha on other tasks. While prior work has looked at the correlation of alpha vs. performance on the task, it has not looked at whether the alpha estimated by the method proposed by the authors will give good performance on these tasks as well. Including such analysis would make this a stronger paper. Overall, there are some promising elements in the paper but the quality of the paper needs to be improved. The authors are encouraged to improve the paper by adding more experimental evaluation on other tasks, improving the writing, and incorporating other reviewer comments, and to resubmit to an appropriate venue.
test
[ "B1gw6EE7qr", "S1ly8M8hjB", "HJla4ULhiH", "BJxUTnx3ir", "SJgNUbOiFB", "SJx5aT16tB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors study the word embedding, with a particular emphasize on the word2vec or similar strategies. To this end, the authors consider the matrix factorization framework, previously introduced in the literature, and also study the influence of an hyperparameter denoted by alpha. Roughly speaking...
[ 6, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, 5, 4 ]
[ "iclr_2020_HklCk1BtwS", "B1gw6EE7qr", "SJgNUbOiFB", "SJx5aT16tB", "iclr_2020_HklCk1BtwS", "iclr_2020_HklCk1BtwS" ]