Dataset schema (column name, type, observed value range):

paper_id            string  (length 19-21)
paper_title         string  (length 8-170)
paper_abstract      string  (length 8-5.01k)
paper_acceptance    string  (18 classes)
meta_review         string  (length 29-10k)
label               string  (3 classes)
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
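The schema above describes one record per paper, with six string columns and six parallel per-comment lists. As a minimal sketch (the `validate_record` helper is hypothetical, not part of any official tooling), a record shaped like this schema can be checked as follows, using field values from one row of the dump (abstract and review texts elided with placeholders):

```python
def validate_record(rec):
    """Check that a record matches the column types described in the schema."""
    string_cols = ["paper_id", "paper_title", "paper_abstract",
                   "paper_acceptance", "meta_review", "label"]
    list_cols = ["review_ids", "review_writers", "review_contents",
                 "review_ratings", "review_confidences", "review_reply_tos"]
    for col in string_cols:
        assert isinstance(rec[col], str), f"{col} must be a string"
    for col in list_cols:
        assert isinstance(rec[col], list), f"{col} must be a list"
    # The per-review lists appear to be parallel: one entry per comment.
    n = len(rec["review_ids"])
    for col in list_cols[1:]:
        assert len(rec[col]) == n, f"{col} must have {n} entries"
    return True

record = {
    "paper_id": "iclr_2020_Hke1gySFvB",
    "paper_title": "Enhancing Language Emergence through Empathy",
    "paper_abstract": "...",   # full abstract elided
    "paper_acceptance": "reject",
    "meta_review": "...",      # full meta-review elided
    "label": "train",
    "review_ids": ["ryeJqt0UiB", "ryeHqlTZFB", "S1e1TaJRYS", "Hkx7J160Yr"],
    "review_writers": ["author", "official_reviewer",
                       "official_reviewer", "official_reviewer"],
    "review_contents": ["...", "...", "...", "..."],
    # -1 appears to mark entries that are author/public comments
    # rather than scored official reviews.
    "review_ratings": [-1, 1, 1, 1],
    "review_confidences": [-1, 5, 4, 3],
    "review_reply_tos": ["iclr_2020_Hke1gySFvB"] * 4,
}

print(validate_record(record))
```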
iclr_2020_B1eCk1StPH
The Generalization-Stability Tradeoff in Neural Network Pruning
Pruning neural network parameters is often viewed as a means to compress models, but pruning has also been motivated by the desire to prevent overfitting. This motivation is particularly relevant given the perhaps surprising observation that a wide variety of pruning approaches increase test accuracy despite sometimes massive reductions in parameter counts. To better understand this phenomenon, we analyze the behavior of pruning over the course of training, finding that pruning's effect on generalization relies more on the instability it generates (defined as the drops in test accuracy immediately following pruning) than on the final size of the pruned model. We demonstrate that even the pruning of unimportant parameters can lead to such instability, and show similarities between pruning and regularizing by injecting noise, suggesting a mechanism for pruning-based generalization improvements that is compatible with the strong generalization recently observed in over-parameterized networks.
reject
The authors introduce a notion of stability to pruning and argue through empirical evaluation that pruning leads to improved generalization when it introduces instability. The reviewers were largely unconvinced, though for very different reasons. The idea that "Bayesian ideas" explain what's going on seems obviously wrong to me. The third reviewer seems to think there's a tautology lurking here, and that doesn't seem to be true to me either. It is disappointing that the reviewers did not re-engage with the authors after the authors produced extensive rebuttals. Unfortunately, this is a widespread pattern this year. Even though I'm inclined to ignore aspects of these reviews, I feel that there needs to be a broader empirical study to confirm these findings. In the next iteration of the paper, I believe it may also be important to relate these ideas to [1]. It would also be interesting to compare on the networks studied in [1], which are more diverse. [1] The Lottery Ticket Hypothesis at Scale (Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, and Michael Carbin) https://arxiv.org/abs/1903.01611
train
[ "HkeqIHShKr", "rJgZEBSjsr", "Hyg44Npwor", "rJxtXr6wjS", "rJgATHpwsH", "HklyaBTDsB", "BkeUES6vjB", "H1euKNpPsH", "H1gBba7itH", "BJxKLBXTYH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper mainly studies the relationship between the generalization error and mean/variance of the test accuracy. The authors first propose a new score for pruning called E[BN]. Then, the authors observe the generalization error and the test accuracy mean/variance for pruning large score weights and small score ...
[ 3, -1, -1, -1, -1, -1, -1, -1, 1, 1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2020_B1eCk1StPH", "BJxKLBXTYH", "H1gBba7itH", "BJxKLBXTYH", "HklyaBTDsB", "HkeqIHShKr", "rJxtXr6wjS", "Hyg44Npwor", "iclr_2020_B1eCk1StPH", "iclr_2020_B1eCk1StPH" ]
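The review_reply_tos column links each comment to its parent: a top-level review replies to the paper_id itself, while rebuttals and follow-ups reply to another comment id. A small sketch (an assumed interpretation of the column, not official tooling) reconstructing the thread structure of the record above:

```python
from collections import defaultdict

# Parallel lists copied from the first record above.
paper_id = "iclr_2020_B1eCk1StPH"
review_ids = ["HkeqIHShKr", "rJgZEBSjsr", "Hyg44Npwor", "rJxtXr6wjS",
              "rJgATHpwsH", "HklyaBTDsB", "BkeUES6vjB", "H1euKNpPsH",
              "H1gBba7itH", "BJxKLBXTYH"]
reply_tos = ["iclr_2020_B1eCk1StPH", "BJxKLBXTYH", "H1gBba7itH",
             "BJxKLBXTYH", "HklyaBTDsB", "HkeqIHShKr", "rJxtXr6wjS",
             "Hyg44Npwor", "iclr_2020_B1eCk1StPH", "iclr_2020_B1eCk1StPH"]

# Map each parent id to its direct replies.
children = defaultdict(list)
for rid, parent in zip(review_ids, reply_tos):
    children[parent].append(rid)

# Comments whose parent is the paper id are the top-level reviews.
top_level = children[paper_id]
print(top_level)  # → ['HkeqIHShKr', 'H1gBba7itH', 'BJxKLBXTYH']
```

The three top-level ids recovered here match the three entries with non-negative ratings in review_ratings, consistent with reading -1 as "not an official review".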
iclr_2020_Hke1gySFvB
Enhancing Language Emergence through Empathy
The emergence of language in multi-agent settings is a promising research direction to ground natural language in simulated agents. If AI were able to understand the meaning of language through using it, it could also transfer it to other situations flexibly. That is seen as an important step towards achieving general AI. The scope of emergent communication, however, is so far still limited. It is necessary to enhance the learning possibilities for skills associated with communication to increase the emergable complexity. We took an example from human language acquisition and the importance of the empathic connection in this process. We propose an approach to introduce the notion of empathy to multi-agent deep reinforcement learning. We extend existing approaches on referential games with an auxiliary task for the speaker to predict the listener's mind change, improving the learning time. Our experiments show the high potential of this architectural element by doubling the learning speed of the test setup.
reject
This paper introduces the idea of "empathy" to improve learning in communication emergence. The reviewers all agree that the idea is interesting and well described. However, this paper clearly falls short of delivering the detailed and sufficient experiments and results needed to demonstrate whether and how the idea works. I thank the authors for submitting this research to ICLR and encourage following up on the reviewers' comments and suggestions for a future submission.
train
[ "ryeJqt0UiB", "ryeHqlTZFB", "S1e1TaJRYS", "Hkx7J160Yr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I am very grateful for the feedback. It helps me better understand the requirements for a full ICLR paper.\nI'm happy, that the reviewers liked the overall idea, I will work on improving the experimental base.", "This paper takes the reference-game setup of Lazaridou et al. (2018), as a means of enabling emergen...
[ -1, 1, 1, 1 ]
[ -1, 5, 4, 3 ]
[ "iclr_2020_Hke1gySFvB", "iclr_2020_Hke1gySFvB", "iclr_2020_Hke1gySFvB", "iclr_2020_Hke1gySFvB" ]
iclr_2020_SkgWeJrYwr
Efficient Wrapper Feature Selection using Autoencoder and Model Based Elimination
We propose a computationally efficient wrapper feature selection method - called Autoencoder and Model Based Elimination of features using Relevance and Redundancy scores (AMBER) - that uses a single ranker model along with autoencoders to perform greedy backward elimination of features. The ranker model is used to prioritize the removal of features that are not critical to the classification task, while the autoencoders are used to prioritize the elimination of correlated features. We demonstrate the superior feature selection ability of AMBER on 4 well-known datasets corresponding to different domain applications by comparing the accuracies with other computationally efficient state-of-the-art feature selection techniques. Interestingly, we find that the ranker model that is used for feature selection does not necessarily have to be the same as the final classifier that is trained on the selected features. Finally, we hypothesize that overfitting the ranker model on the training set facilitates the selection of more salient features.
reject
In this paper the authors propose a wrapper feature selection method that selects features based on 1) redundancy, i.e. the sensitivity of the downstream model to feature elimination, and 2) relevance, i.e. how the individual features impact the accuracy of the target task. The authors use a combination of the redundancy and relevance scores to eliminate the features. While acknowledging that the proposed model is potentially useful, the reviewers raised several important concerns that were viewed by the AC as critical issues: (1) all reviewers agreed that the proposed approach lacks theoretical justification or convincing empirical evaluations to show its effectiveness and general applicability -- see R1’s and R2’s requests for evaluation with more datasets/diverse tasks to assess the applicability and generality of the proposed model; see R1’s and R4’s concerns regarding theoretical analysis; (2) all reviewers expressed concerns regarding the technical issue of combining the redundancy and relevance scores -- see R4’s and R2’s concerns regarding the individual/disjoint calibration of scores; see R1’s suggestion to learn to reweight the scores; (3) the experimental setup requires improvement both in terms of clarity of presentation and implementation -- see R1’s comment regarding the ranker model, see R4’s concern regarding comparison with a standard deep learning model that does feature learning for a downstream task; both reviewers also suggested to analyse how autoencoders with different capacity could impact the results. Additionally, R1 raised a concern regarding relevant recent works that were overlooked. The authors have tried to address some of these concerns during the rebuttal, but insufficient empirical evidence remains a critical issue of this work. To conclude, the reviewers and AC suggest that in its current state the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper.
train
[ "BJxxp9v2oH", "r1eJ2pRojS", "SJl8rr9qsH", "SJlJ-nrGir", "SylKDMA2FB", "r1g9DMyP9B", "H1gEqkE0qr", "BygvQJvJ5B", "SklM7HIbdH", "Syx0hfvpDB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "I have read the other reviews and authors' responses. The response does clarify several important details about the experimental design, but my basic concerns still remain that there are many empirical choices which need further exploration to justify the described approach. Other reviewers also raised this issue,...
[ -1, -1, -1, -1, 1, 3, 3, -1, -1, -1 ]
[ -1, -1, -1, -1, 4, 3, 4, -1, -1, -1 ]
[ "r1eJ2pRojS", "SylKDMA2FB", "r1g9DMyP9B", "H1gEqkE0qr", "iclr_2020_SkgWeJrYwr", "iclr_2020_SkgWeJrYwr", "iclr_2020_SkgWeJrYwr", "SklM7HIbdH", "Syx0hfvpDB", "iclr_2020_SkgWeJrYwr" ]
iclr_2020_rJebgkSFDB
Learning to Learn Kernels with Variational Random Features
Meta-learning for few-shot learning involves a meta-learner that acquires shared knowledge from a set of prior tasks to improve the performance of a base-learner on new tasks with a small amount of data. Kernels are commonly used in machine learning due to their strong nonlinear learning capacity, which has not yet been fully investigated in the meta-learning scenario for few-shot learning. In this work, we explore kernel approximation with random Fourier features in the meta-learning framework for few-shot learning. We propose learning adaptive kernels by meta variational random features (MetaVRF), which is formulated as a variational inference problem. To explore shared knowledge across diverse tasks, our MetaVRF deploys an LSTM inference network to generate informative features, which can establish kernels of highly representational power with low spectral sampling rates, while also being able to quickly adapt to specific tasks for improved performance. We evaluate MetaVRF on a variety of few-shot learning tasks for both regression and classification. Experimental results demonstrate that our MetaVRF delivers performance that is much better than or competitive with recent meta-learning algorithms.
reject
The paper looks at meta learning using random Fourier features for kernel approximations. The idea is to learn adaptive kernels by inferring Fourier bases from related tasks that can be used for the new task. A key insight of the paper is to use an LSTM to share knowledge across tasks. The paper tackles an interesting problem, and the idea to use a meta learning setting for transfer learning within a kernel setting is quite interesting. It may be worthwhile relating this work to this paper by Titsias et al. (https://arxiv.org/abs/1901.11356), which looks at a slightly different setting (continual learning with Gaussian processes, where information is shared through inducing variables). Having read the paper, I have some comments/questions: 1. log-likelihood should be called log-marginal likelihood (wherever the ELBO shows up) 2. The derivation of the ELBO confuses me (section 3.1). First, I don't know whether this ELBO is at training time or at test time. If it was at training time, then I agree with Reviewer #1 in the sense that $p(\omega)$ should not depend on either $x$ or $\mathcal {S}$. If it is at test time, the log-likelihood term should not depend on $\mathcal{S}$ (which is the training set), because $\mathcal S$ is taken care of by $p(\omega|\mathcal S)$. However, critically, $p(\omega|\mathcal S)$ should not depend on $x$. I agree with Reviewer #1 that this part is confusing, and the authors' response has not helped me to diffuse this confusion (e.g., priors should not be conditioned on any data). 3. The tasks are indirectly represented by a set of basis functions, which are represented by $\omega^t$ for task $t$. In the paper, these tasks are then inferred using variational inference and an LSTM. It may be worthwhile relating this to the latent-variable approach by Saemundsson et al. (http://auai.org/uai2018/proceedings/papers/235.pdf) for meta learning. 4. The expression "meta ELBO" is inappropriate. This is a simple ELBO, nothing meta about it. 
If we think of the tasks as latent variables (which the paper also states), this ELBO in equation (9) is a vanilla ELBO that is used in variational inference. 5. For the LSTM, does it make a difference how the tasks are ordered? 6. Experiments: Figure 3 clearly needs error bars, and MSEs need to be reported with error bars as well; 6a) Figures 4 and 5 need error bars. 6b) Error bars should also be based on different random initializations of the learning procedure to evaluate the robustness of the methods (use at least 20 random seeds). I don't think any of the results is based on more than one random seed (at least I could not find any statement regarding this). 7. Tables 1 and 2: The highlighting in bold is unclear. If it is supposed to highlight the best methods, then the highlighting is dishonest in the sense that methods which perform similarly are not highlighted. For example, in Table 1, VERSA or MetaVRF (w/o LSTM) could be highlighted for all tasks because the error bars are so huge (similar in Table 2). 8. One of the things I'm missing completely is a discussion about computational demand: How efficiently can we train the model, and how long does it take to make predictions? It would be great to have some discussion about this in the paper and relate this to other approaches. 9. The paper also evaluates the effect of having an LSTM that correlates tasks in the posterior. The analysis shows that there are some marginal gains, but none of them is statistically significant. I would have liked to see much more analysis of the effect/benefit of the LSTM. Summary: The paper addresses an interesting problem. However, I have reservations regarding some theoretical bits and regarding the quality of the evaluation. Given that this paper also exceeds the 8-page (default) limit, we are supposed to ask for higher acceptance standards than for an 8-page paper. Hence, putting everything together, I recommend rejecting this paper.
train
[ "B1lEFrL9iB", "S1xu541KiH", "Bkg53sYroB", "SygxY8KriH", "rygBguFrjr", "rJeFIsYHiH", "HJlxcBYSiH", "BygdMZIoYS", "rkx5Y4zMcr", "BkgAgpcVcr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are very glad to hear that our responses resolve most of your questions. \n\nWe now would like to further explain the meta prior $p(\\omega| x, S)$. We show technically in the derivation of the meta ELBO (eqs. 14-17 in the appendix) how the meta prior is conditioned on the input $x$, from which we provide some...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 1 ]
[ "S1xu541KiH", "rJeFIsYHiH", "BygdMZIoYS", "BkgAgpcVcr", "rkx5Y4zMcr", "BygdMZIoYS", "iclr_2020_rJebgkSFDB", "iclr_2020_rJebgkSFDB", "iclr_2020_rJebgkSFDB", "iclr_2020_rJebgkSFDB" ]
iclr_2020_HkgMxkHtPH
UWGAN: UNDERWATER GAN FOR REAL-WORLD UNDERWATER COLOR RESTORATION AND DEHAZING
In real-world underwater environments, exploration of seabed resources, underwater archaeology, and underwater fishing rely on a variety of sensors; the vision sensor is the most important one due to its high information content, non-intrusiveness, and passive nature. However, wavelength-dependent light attenuation and back-scattering result in color distortion and a haze effect, which degrade the visibility of images. To address this problem, we first propose an unsupervised generative adversarial network (GAN) for generating realistic underwater images (simulating color distortion and the haze effect) from in-air image and depth map pairs. Second, U-Net, which is trained efficiently using the synthetic underwater dataset, is adopted for color restoration and de-hazing. Our model directly reconstructs clear underwater images using end-to-end autoencoder networks, while maintaining scene content structural similarity. The results obtained by our method were compared with existing methods qualitatively and quantitatively. Experimental results on open real-world underwater datasets demonstrate that the presented method performs well on different actual underwater scenes, and the processing speed can reach up to 125 FPS on one NVIDIA 1060 GPU.
reject
This paper proposes to improve the quality of underwater images, specifically addressing color distortion and the haze effect, with an unsupervised generative adversarial network (GAN). An end-to-end autoencoder network is used to demonstrate its effectiveness in comparison to existing works, while maintaining scene content structural similarity. Three reviewers unanimously rated the paper a weak rejection. The major concerns include unclear differences with respect to existing works, incremental contribution, low quality of figures, low quality of writing, etc. The authors responded to the reviewers' concerns, but the reviewers did not change their ratings. The AC concurs with the concerns, and the paper cannot be accepted in its current state.
train
[ "S1gemwT6YH", "SJx8OG8Fsr", "SkxGSfLtsH", "Bken178tjr", "rJl7F-UFjS", "Skl-8kPxjr", "SyeZlLA6Yr", "HJei8hV35H", "HyegdV42cB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "[Update after rebuttal period]\nIn response, the authors cannot clearly clarify the difference between this work with existing works integrating the physical model into the network. Thus I stay my original score.\n\n\n[Original reviews]\nThis paper proposed an unsupervised generative adversarial network for underw...
[ 3, -1, -1, -1, -1, 3, 3, -1, -1 ]
[ 5, -1, -1, -1, -1, 4, 3, -1, -1 ]
[ "iclr_2020_HkgMxkHtPH", "Skl-8kPxjr", "SyeZlLA6Yr", "iclr_2020_HkgMxkHtPH", "S1gemwT6YH", "iclr_2020_HkgMxkHtPH", "iclr_2020_HkgMxkHtPH", "HyegdV42cB", "iclr_2020_HkgMxkHtPH" ]
iclr_2020_BJg7x1HFvB
Well-Read Students Learn Better: On the Importance of Pre-training Compact Models
Recent developments in natural language representations have been accompanied by large and expensive models that leverage vast amounts of general-domain text through self-supervised pre-training. Due to the cost of applying such models to down-stream tasks, several model compression techniques on pre-trained language representations have been proposed (Sun et al., 2019; Sanh, 2019). However, surprisingly, the simple baseline of just pre-training and fine-tuning compact models has been overlooked. In this paper, we first show that pre-training remains important in the context of smaller architectures, and fine-tuning pre-trained compact models can be competitive to more elaborate methods proposed in concurrent work. Starting with pre-trained compact models, we then explore transferring task knowledge from large fine-tuned models through standard knowledge distillation. The resulting simple, yet effective and general algorithm, Pre-trained Distillation, brings further improvements. Through extensive experiments, we more generally explore the interaction between pre-training and distillation under two variables that have been under-studied: model size and properties of unlabeled task data. One surprising observation is that they have a compound effect even when sequentially applied on the same data. To accelerate future research, we will make our 24 pre-trained miniature BERT models publicly available.
reject
Though the reviewers thought the ideas in this paper were interesting, they questioned the importance and magnitude of the contribution. Though it is important to share empirical results, the reviewers were not sure that there was enough for this paper to be accepted.
test
[ "BJxP-MjjiH", "SyeWIxtuiH", "HJlMayFujB", "B1lrwkYuoB", "H1xlKlknFr", "rkgyLdyntS", "SJe6IsB6KH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I read the authors' responses and am not satisfied. \n\nKD does not require that the teacher and student have the same hidden dimension size. This can be done following [FitNets: Hints for Thin Deep Nets](https://arxiv.org/pdf/1412.6550). ", "We believe the reviewer has misunderstood the contribution of the pape...
[ -1, -1, -1, -1, 1, 6, 3 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "B1lrwkYuoB", "H1xlKlknFr", "rkgyLdyntS", "SJe6IsB6KH", "iclr_2020_BJg7x1HFvB", "iclr_2020_BJg7x1HFvB", "iclr_2020_BJg7x1HFvB" ]
iclr_2020_BJlXgkHYvS
Information-Theoretic Local Minima Characterization and Regularization
Recent advances in deep learning theory have evoked the study of generalizability across different local minima of deep neural networks (DNNs). While current work has focused on either discovering properties of good local minima or developing regularization techniques to induce good local minima, no approach exists that can tackle both problems. We achieve these two goals successfully in a unified manner. Specifically, based on the Fisher information, we propose a metric that is both strongly indicative of the generalizability of local minima and effective when applied as a practical regularizer. We provide theoretical analysis, including a generalization bound, and empirically demonstrate the success of our approach in both capturing and improving the generalizability of DNNs. Experiments are performed on CIFAR-10 and CIFAR-100 for various network architectures.
reject
This paper proposes using the Fisher information matrix to characterize local minima of deep network loss landscapes to indicate generalizability of a local minimum. While the reviewers agree that this paper contains interesting ideas and its presentation has been substantially improved during the discussion period, there are still issues that remain unanswered, in particular between the main objective/claims and the presented evidence. The paper will benefit from a revision and resubmission to another venue.
train
[ "H1xq1PY2tr", "SygnyAMKsr", "HJeXEVW3sH", "ryxOmflnir", "BJx7S-pojr", "r1xAnokijS", "rygLJrqqsS", "rylHBEqqjH", "SJxX9rLYsB", "rygL3EBFiH", "r1xtg-7Ksr", "Bke1Ojf_jH", "SkxtuOMdir", "SJezCZfdiH", "H1l91QkdsS", "Sye3GCAwoB", "SkeWgp0PiH", "HJe3TWyXjB", "HyeG_bk7jr", "HyxDSpLfir"...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public", ...
[ "Post-rebuttal update: I have just noticed the authors modified their summary post below and claimed \"[my concerns] are all minor or resolved\". This is not true. Here is my summary of unresolved concerns written after the discussion period.\n\nThis work has been substantially improved during the rebuttal process,...
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_BJlXgkHYvS", "H1xq1PY2tr", "HJeaomh79S", "r1xAnokijS", "Sye3GCAwoB", "rylHBEqqjH", "iclr_2020_BJlXgkHYvS", "rygL3EBFiH", "rygL3EBFiH", "SygnyAMKsr", "HyxDSpLfir", "SkeWgp0PiH", "SJezCZfdiH", "H1l91QkdsS", "SkeWgp0PiH", "rJgVQyi9FB", "H1xq1PY2tr", "HyeG_bk7jr", "HJeaomh...
iclr_2020_HJgEe1SKPr
GAN-based Gaussian Mixture Model Responsibility Learning
A Mixture Model (MM) is a probabilistic framework which allows us to define a dataset containing K different modes. When each of the modes is associated with a Gaussian distribution, we refer to it as a Gaussian MM, or GMM. Given a data point x, GMM may assume the existence of a random index k ∈ {1, . . . , K} identifying which Gaussian the particular data point is associated with. In a traditional GMM paradigm, it is straightforward to compute in closed form the conditional likelihood p(x|k, θ), as well as the responsibility probability p(k|x, θ), which describes which distribution index the data corresponds to. Computing the responsibility allows us to retrieve many important statistics of the overall dataset, including the weights of each of the modes. Modern large datasets often contain multiple unlabelled modes, such as paintings datasets containing several styles or fashion images containing several unlabelled categories. In their raw representation, the Euclidean distances between the data do not allow them to form mixtures naturally, nor is it feasible to compute the responsibility distribution, making GMM inapplicable. In this paper, we utilize the Generative Adversarial Network (GAN) framework to achieve an alternative plausible method to compute these probabilities in the data's latent space z instead of x. Instead of defining p(x|k, θ) explicitly, we devised a modified GAN that allows us to define the distribution using p(z|k, θ), where z is the corresponding latent representation of x, as well as p(k|x, θ) through an additional classification network which is trained with the GAN in an “end-to-end” fashion. These techniques allow us to discover interesting properties of an unsupervised dataset, including dataset segments, as well as to generate new “out-distribution” data by smooth linear interpolation across any combination of the modes in a completely unsupervised manner.
reject
This paper proposes to use GMM as the latent prior distribution of GAN. The reviewers unanimously agree that the paper is not well motivated, explanations are lacking and writing needs to be substantially improved.
train
[ "HklCTYa0KH", "SJlnIgtxqB", "rylsua2Q9S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a modification of GANs where the latent space follows a distribution modelled by a Gaussian Mixture Model. While the idea of using GMMs in GANs is not novel, the main contribution of the paper is to add a classification models that enables posterior inference. The whole model is trained jointly...
[ 1, 3, 1 ]
[ 3, 1, 3 ]
[ "iclr_2020_HJgEe1SKPr", "iclr_2020_HJgEe1SKPr", "iclr_2020_HJgEe1SKPr" ]
iclr_2020_SkeNlJSKvS
Shallow VAEs with RealNVP Prior Can Perform as Well as Deep Hierarchical VAEs
Using powerful posterior distributions is a popular technique in variational inference. However, recent works showed that the aggregated posterior may fail to match the unit Gaussian prior, even with expressive posteriors, thus learning the prior becomes an alternative way to improve the variational lower bound. We show that using a learned RealNVP prior and just one latent variable in a VAE, we can achieve test NLL comparable to very deep state-of-the-art hierarchical VAEs, outperforming many previous works with complex hierarchical VAE architectures. We hypothesize that, when coupled with Gaussian posteriors, the learned prior can encourage appropriate posterior overlapping, which is likely to improve the reconstruction loss and lower bound, supported by our experimental results. We demonstrate that, with a learned RealNVP prior, β-VAE can have a better rate-distortion curve than using a fixed Gaussian prior.
reject
This paper provides an interesting insight into the fitting of variational autoencoders. While much of the recent literature focuses on training ever more expressive models, the authors demonstrate that learning a flexible prior can provide an equally strong model. Unfortunately, one review is somewhat terse. Among the other reviews, one reviewer found the paper very interesting and compelling but did not feel comfortable raising their score to "accept" in the discussion phase, citing a lack of compelling empirical results compared to baselines. Both reviewers were concerned about novelty in light of Huang et al., in which a RealNVP prior is also learned in a VAE. AnonReviewer3 also felt that the experiments were not thorough enough to back up the claims in the paper. Unfortunately, for these reasons the recommendation is to reject. More compelling empirical results with carefully chosen baselines to back up the claims of the paper, and comparison to existing literature (Huang et al.), would make this paper much stronger.
train
[ "ryeDf7PxFB", "rkxnJ9o0YB", "ryxeWqalqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This submission shows that using learned autoregressive priors (real NVP) allows shallow VAEs to achieve comparable log-likelihood performances compared to more complex deep VAE architectures.\n\nI found this paper an enjoyable read, and its results quite intriguing. While most of VAE research focuses on building ...
[ 6, 3, 3 ]
[ 5, 4, 4 ]
[ "iclr_2020_SkeNlJSKvS", "iclr_2020_SkeNlJSKvS", "iclr_2020_SkeNlJSKvS" ]
iclr_2020_BJlVeyHFwH
On the Invertibility of Invertible Neural Networks
Guarantees in deep learning are hard to achieve due to the interplay of flexible modeling schemes and complex tasks. Invertible neural networks (INNs), however, provide several mathematical guarantees by design, such as the ability to approximate non-linear diffeomorphisms. One less studied advantage of INNs is that they enable the design of bi-Lipschitz functions. This property has been used implicitly by various works to design generative models, enable memory-saving gradient computation, regularize classifiers, and solve inverse problems. In this work, we study Lipschitz constants of invertible architectures in order to investigate guarantees on the stability of their inverse and forward mappings. Our analysis reveals that commonly-used INN building blocks can easily become non-invertible, leading to questionable ``exact'' log likelihood computations and training difficulties. We introduce a set of numerical analysis tools to diagnose non-invertibility in practice. Finally, based on our theoretical analysis, we show how to guarantee numerical invertibility for one of the most common INN architectures.
reject
This submission analyses the numerical invertibility of analytically invertible neural networks and shows that analytical invertibility does not guarantee numerical invertibility of some invertible networks under certain conditions (e.g. adversarial perturbation). Strengths: - The work is interesting and the theoretical analysis is insightful. Weaknesses: - The main concern shared by all reviewers was the weakness of the experimental section, including (i) insufficient motivation of the decorrelation task; (ii) missing comparisons and experimental settings. - The paper's clarity could be improved. Both weaknesses were not sufficiently addressed in the rebuttal. All reviewer recommendations were borderline to reject.
train
[ "r1gig7Jf9r", "HkeLVwHojS", "SylRsdHjir", "HylpLOBioH", "Hkei4tHisH", "HkxxFvSoiH", "rkxtlPrjsB", "ryTvSq2Fr", "rkxViW62KH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper points out invertible neural networks are not necessarily invertible because of bad conditioning. It shows some cases when invertible neural networks fail, including adding adversarial pertubations, solving the decorrelation task, and training without maximum likelihood objective (Flow-GAN). The paper a...
[ 6, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_BJlVeyHFwH", "rkxViW62KH", "HylpLOBioH", "ryTvSq2Fr", "iclr_2020_BJlVeyHFwH", "HkeLVwHojS", "r1gig7Jf9r", "iclr_2020_BJlVeyHFwH", "iclr_2020_BJlVeyHFwH" ]
iclr_2020_SyxBgkBFPS
Guided Adaptive Credit Assignment for Sample Efficient Policy Optimization
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems. However, they still often suffer on sparse-reward tasks, which leads to poor sample efficiency during training. In this work, we propose a guided adaptive credit assignment method to perform credit assignment effectively for policy gradient methods. Motivated by entropy-regularized policy optimization, our method extends previous credit assignment methods by introducing more general guided adaptive credit assignment (GACA). The benefit of GACA is a principled way of utilizing off-policy samples. The effectiveness of the proposed algorithm is demonstrated on the challenging \textsc{WikiTableQuestions} and \textsc{WikiSQL} benchmarks and an instruction-following environment. The task is generating action sequences or program sequences from natural language questions or instructions, where only final binary success-failure execution feedback is available. Empirical studies show that our method significantly improves the sample efficiency of state-of-the-art policy optimization approaches.
reject
The paper proposes a policy gradient algorithm related to entropy-regularized RL that uses an f-divergence instead of the KL divergence to avoid mode collapse. The reviewers found many technical issues with the presentation of the method and the evaluation. In particular, the experiments are conducted on particular program synthesis tasks and show small-margin improvements, while the algorithm is motivated by general sparse-reward RL. I recommend rejection at this time, but encourage the authors to take the feedback into account and resubmit an improved version elsewhere.
train
[ "rkeNx7jTFB", "HkxCF407jB", "SJl_u4RmjB", "rJxO8NC7ir", "HJxsdYBcYS", "HJgXnWk0Yr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes guided adaptive credit assignment (GACA) for policy gradient methods with sparse reward.\n\nGACA attacks the credit assignment problem by\n1) using entropy regularized RL objective (KL divergence), iteratively update prior \\bar{\\pi} and \\pi_\\theta;\n2) generalizing KL to f-divergence to avo...
[ 3, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, 3, 3 ]
[ "iclr_2020_SyxBgkBFPS", "HJxsdYBcYS", "rkeNx7jTFB", "HJgXnWk0Yr", "iclr_2020_SyxBgkBFPS", "iclr_2020_SyxBgkBFPS" ]
iclr_2020_Skg8gJBFvr
Filling the Soap Bubbles: Efficient Black-Box Adversarial Certification with Non-Gaussian Smoothing
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for ℓ2 perturbation. We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks, from a unified functional optimization perspective. Our new framework allows us to identify a key trade-off between accuracy and robustness via designing smoothing distributions, helping to design two new families of non-Gaussian smoothing distributions that work more efficiently for ℓ2 and ℓ∞ attacks, respectively. Our proposed methods achieve better results than previous works and provide a new perspective on randomized smoothing certification.
reject
The authors extend the framework of randomized smoothing to handle non-Gaussian smoothing distribution and use this to show that they can construct smoothed models that perform well against l2 and linf adversarial attacks. They show that the resulting framework can obtain state-of-the-art certified robustness results improving upon prior work. While the paper contains several interesting ideas, the reviewers were concerned about several technical flaws and omissions from the paper: 1) A theorem on strong duality was incorrect in the initial version of the paper, though this was fixed in the rebuttal. However, the reasoning of the authors on the "fundamental trade-off" is specific to the particular framework they consider, and is not really a fundamental trade-off. 2) The justification for the new family of distributions constructed by the author is not very clear and the experiments only show marginal improvements over prior work. Thus, the significance of this contribution is not clear. Some of the issues were clarified during the rebuttal, but the reviewers remained unconvinced about the above points. Thus, the paper cannot be accepted in its current form.
train
[ "r1x8lRc2jB", "BJxlo2c2jB", "B1eytnq3jr", "BygQJM9noS", "rylAi45hor", "ByllmnE6tH", "rJejRI6aKS", "B1lWEtgb9B", "Hkgqg8GpFH", "rkeRwZqa_S", "SJlZpBx1OB", "BJxwqasovr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "We fix the issue of Theorem 1 pointed out by Reviewer #3. Our lower bound is still tight (strong duality holds) for all the cases we studied.", "3. (sketchy justification): 'The paper justifies a smoothing distribution that concentrates more mass around the center as follows: 'This phenomenon makes it problemati...
[ -1, -1, -1, -1, -1, 1, 1, 3, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 5, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2020_Skg8gJBFvr", "ByllmnE6tH", "ByllmnE6tH", "rJejRI6aKS", "B1lWEtgb9B", "iclr_2020_Skg8gJBFvr", "iclr_2020_Skg8gJBFvr", "iclr_2020_Skg8gJBFvr", "rkeRwZqa_S", "SJlZpBx1OB", "BJxwqasovr", "iclr_2020_Skg8gJBFvr" ]
iclr_2020_BklIxyHKDr
Deep k-NN for Noisy Labels
Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple k-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than some recently proposed methods. We also provide new statistical guarantees into its efficacy.
reject
The paper proposes and analyzes a k-NN method for identifying corrupted labels for training deep neural networks. Although a reviewer pointed out that the noisy k-NN contribution is interesting, I think the paper can be much improved further due to the following: (a) Lack of state-of-the-art baselines to compare. (b) Lack of important recent related work, i.e., "Robust Inference via Generative Classifiers for Handling Noisy Labels" from ICML 2019 (see https://arxiv.org/abs/1901.11300). That paper also runs a clustering-like algorithm for handling noisy labels, and the authors should compare and discuss why the proposed method is superior. (c) Poor write-up, e.g., the paper should address what is missing in existing methods from many different perspectives, as this is a quite well-studied popular problem. Hence, I recommend rejection.
train
[ "Bye4OdB4KS", "S1lnl2tCKB", "HyeMkEW2ir", "S1xjoW-2ir", "rkxFL70AYH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "This paper provided \"an empirical study showing that a simple k-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than some recently proposed methods\". Even though it has many theoretical analysis and experimen...
[ 1, 1, -1, -1, 6 ]
[ 5, 1, -1, -1, 4 ]
[ "iclr_2020_BklIxyHKDr", "iclr_2020_BklIxyHKDr", "S1lnl2tCKB", "rkxFL70AYH", "iclr_2020_BklIxyHKDr" ]
iclr_2020_ByeDl1BYvH
Global graph curvature
Recently, non-Euclidean spaces became popular for embedding structured data. However, determining suitable geometry and, in particular, curvature for a given dataset is still an open problem. In this paper, we define a notion of global graph curvature, specifically catered to the problem of embedding graphs, and analyze the problem of estimating this curvature using only graph-based characteristics (without actual graph embedding). We show that optimal curvature essentially depends on dimensionality of the embedding space and loss function one aims to minimize via embedding. We review the existing notions of local curvature (e.g., Ollivier-Ricci curvature) and analyze their properties theoretically and empirically. In particular, we show that such curvatures are often unable to properly estimate the global one. Hence, we propose a new estimator of global graph curvature specifically designed for zero-one loss function.
reject
This paper studies the problem of embedding graphs into continuous spaces. The authors focus on determining the correct dimension and curvature to minimize distortion or a threshold loss of the embedding. The authors consider a variety of existing notions of curvature for graphs, introduce a notion of global curvature for the entire graph, and how to efficiently compute it. Reviewers were positive about the problem under study, but agreed that the current manuscript somewhat lacks a clear contribution. They also pointed out that the goal of using a global notion of curvature should be better motivated. For these reasons, the AC recommends rejection at this time.
train
[ "r1leUQf2sS", "Skg2VSb3iB", "rylYHNb3sS", "SJelGQWnjH", "r1lv_XXjjH", "r1lwZoJmjH", "rygor9kXoH", "SyliHdJmjr", "SJlybsupKH", "rkei-QhAKB", "HJxYwVfw9S" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the suggestions! We uploaded a revised paper, see our comment above https://openreview.net/forum?id=ByeDl1BYvH&noteId=SJelGQWnjH . In particular, we tried to improve the motivation part. We also added a comment on the different notions of distortion.", "Thank you for the feedback!\n\nWe uploaded a ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 1, 3 ]
[ "r1lv_XXjjH", "rkei-QhAKB", "HJxYwVfw9S", "iclr_2020_ByeDl1BYvH", "r1lwZoJmjH", "SJlybsupKH", "SJlybsupKH", "HJxYwVfw9S", "iclr_2020_ByeDl1BYvH", "iclr_2020_ByeDl1BYvH", "iclr_2020_ByeDl1BYvH" ]
iclr_2020_BkxDxJHFDr
Power up! Robust Graph Convolutional Network based on Graph Powering
Graph convolutional networks (GCNs) are powerful tools for graph-structured data. However, they have been recently shown to be vulnerable to topological attacks. To enhance adversarial robustness, we go beyond spectral graph theory to robust graph theory. By challenging the classical graph Laplacian, we propose a new convolution operator that is provably robust in the spectral domain and is incorporated in the GCN architecture to improve expressivity and interpretability. By extending the original graph to a sequence of graphs, we also propose a robust training paradigm that encourages transferability across graphs that span a range of spatial and spectral characteristics. The proposed approaches are demonstrated in extensive experiments to {simultaneously} improve performance in both benign and adversarial situations.
reject
The paper identifies the limitations of graph neural networks and proposes new variants of graph neural networks. However, the reviewers feel that the theory of the paper has some problems: 1. A major concern is that the theoretical analyses in this paper are limited to graphs sampled from the SBM model. It is unclear how these analyses can be generalized to real graphs. 2. The robustness definition is inconsistent. Furthermore, more extensive experiments on more datasets would also be helpful.
train
[ "BJx2Xmcujr", "BylgNz5uir", "r1xRjgcuiS", "HJeapTXYoS", "Syg7RVcOiH", "SylEaM5_oS", "SylgClqOiH", "ryg6Ve_tuB", "H1gRE3E6tH", "BkxQEAZ0YB" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q: The acronyms are slightly confusing to understand at first sight, since they first appear at the equations without any information on what the letters stand for. Something like a \"variable power network (VPN)\" would make the paper more pleasant to read.\n\nA: Thank you for the suggestion! We have addressed it...
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "ryg6Ve_tuB", "H1gRE3E6tH", "BkxQEAZ0YB", "iclr_2020_BkxDxJHFDr", "ryg6Ve_tuB", "H1gRE3E6tH", "BkxQEAZ0YB", "iclr_2020_BkxDxJHFDr", "iclr_2020_BkxDxJHFDr", "iclr_2020_BkxDxJHFDr" ]
iclr_2020_S1e3g1rtwB
The fairness-accuracy landscape of neural classifiers
That machine learning algorithms can demonstrate bias is well-documented by now. This work confronts the challenge of bias mitigation in feedforward fully-connected neural nets through the lens of causal inference and multiobjective optimisation. Regarding the former, a new causal notion of fairness is introduced that is particularly suited to giving a nuanced treatment of datasets collected under unfair practices. In particular, special attention is paid to subjects whose covariates could appear with substantial probability in either value of the sensitive attribute. Next, recognising that fairness and accuracy are competing objectives, the proposed methodology uses techniques from multiobjective optimisation to ascertain the fairness-accuracy landscape of a neural net classifier. Experimental results suggest that the proposed method produces neural net classifiers that distribute evenly across the Pareto front of the fairness-accuracy space and is more efficient at finding non-dominated points than an adversarial approach.
reject
This manuscript investigates and characterizes the tradeoff between fairness and accuracy in neural network models. The primary empirical contribution is to investigate this tradeoff for a variety of datasets. The reviewers and AC agree that the problem studied is timely and interesting. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty of the results. In particular, it is not clear that the idea of a fairness/performance tradeoff is a new one. In reviews and discussion, the reviewers also noted issues with clarity of the presentation. In the opinion of the AC, the manuscript is not appropriate for publication in its current state.
train
[ "rJxlRjhsoH", "Byeg222isr", "rkxAr3niiS", "rkg1g92ssr", "rJl6sF3osr", "r1xy_56sFr", "rJgwE8_xcS", "BJexY-Q4cr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The reviewer is correct that we failed to make the assumptions regarding the causal estimand explicit. These necessary assumptions are now clearly stated in the revision:\n- In adopting the potential outcome framework of Imbens and Rubin 2015, we assume the Stable Unit Treatment Value Assumption\n- Under unconfoun...
[ -1, -1, -1, -1, -1, 1, 6, 3 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "r1xy_56sFr", "rkxAr3niiS", "rJxlRjhsoH", "rJgwE8_xcS", "BJexY-Q4cr", "iclr_2020_S1e3g1rtwB", "iclr_2020_S1e3g1rtwB", "iclr_2020_S1e3g1rtwB" ]
iclr_2020_H1gCeyHFDS
Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems
First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithm for training deep neural networks. Second-order methods, despite their better convergence rate, are rarely used in practice due to the prohibitive computational cost in calculating the second-order information. In this paper, we propose a novel Gram-Gauss-Newton (GGN) algorithm to train deep neural networks for regression problems with square loss. Our method draws inspiration from the connection between neural network optimization and kernel regression of neural tangent kernel (NTK). Different from typical second-order methods that have heavy computational cost in each iteration, GGN only has minor overhead compared to first-order methods such as SGD. We also give theoretical results to show that for sufficiently wide neural networks, the convergence rate of GGN is quadratic. Furthermore, we provide convergence guarantee for mini-batch GGN algorithm, which is, to our knowledge, the first convergence result for the mini-batch version of a second-order method on overparameterized neural networks. Preliminary experiments on regression tasks demonstrate that for training standard networks, our GGN algorithm converges much faster and achieves better performance than SGD.
reject
The article considers Gauss-Newton as a scalable second order alternative to train neural networks, and gives theoretical convergence rates and some experiments. The second order convergence results rely on the NTK and very wide networks. The reviewers pointed out that the method is of course not new, and suggested that comparison not only with SGD but also with methods such as Adam, natural gradients, KFAC, would be important, as well as additional experiments with other types of losses for classification problems and multidimensional outputs. The revision added preliminary experiments comparing with Adam and KFAC. Overall, I think that the article makes an interesting and relevant case that Gauss-Newton can be a competitive alternative for parameter optimization in neural networks. However, the experimental section could still be improved significantly. Therefore, I am recommending that the paper is not accepted at this time but revised to include more extensive experiments.
train
[ "BJeshEY55r", "SkgoyRG6KH", "rkgi-irhiB", "SkxosqS2sB", "BkgxE9rhoH", "B1gDZ5B3jS", "Bkew6KBhsr", "HJxKytr2oS", "rygJfehGqB", "BJxhNaz2qH", "rJxaGCqN9S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a scalable second order method for optimization using a quadratic loss. The method is inspired by the Neural Tangent kernel approach, which also allows them to provide global convergence rates for GD and batch SGD. The algorithm has a computational complexity that is linear in the number of par...
[ 3, 3, -1, -1, -1, -1, -1, -1, 3, 6, 1 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2020_H1gCeyHFDS", "iclr_2020_H1gCeyHFDS", "SkgoyRG6KH", "rygJfehGqB", "rJxaGCqN9S", "BJeshEY55r", "BJxhNaz2qH", "iclr_2020_H1gCeyHFDS", "iclr_2020_H1gCeyHFDS", "iclr_2020_H1gCeyHFDS", "iclr_2020_H1gCeyHFDS" ]
iclr_2020_BkgRe1SFDS
Learning World Graph Decompositions To Accelerate Reinforcement Learning
Efficiently learning to solve tasks in complex environments is a key challenge for reinforcement learning (RL) agents. We propose to decompose a complex environment using a task-agnostic world graph, an abstraction that accelerates learning by enabling agents to focus exploration on a subspace of the environment. The nodes of a world graph are important waypoint states and edges represent feasible traversals between them. Our framework has two learning phases: 1) identifying world graph nodes and edges by training a binary recurrent variational auto-encoder (VAE) on trajectory data and 2) a hierarchical RL framework that leverages structural and connectivity knowledge from the learned world graph to bias exploration towards task-relevant waypoints and regions. We show that our approach significantly accelerates RL on a suite of challenging 2D grid world tasks: compared to baselines, world graph integration doubles achieved rewards on simpler tasks, e.g. MultiGoal, and manages to solve more challenging tasks, e.g. Door-Key, where baselines fail.
reject
This paper introduces an approach for structured exploration based on graph-based representations. While a number of the ideas in the paper are quite interesting and relevant to the ICLR community, the reviewers were generally in agreement about several concerns, which were discussed after the author response. These concerns include the ad-hoc nature of the approach, the limited technical novelty, and the difficulty of the experimental domains (and whether the approach could be applied to a more general class of challenging long-horizon problems such as those in prior works). Overall, the paper is not quite ready for publication at ICLR.
train
[ "rJlZ3Tisir", "ByeItaioir", "SkgqLTGqir", "HJeOJlV3FS", "Byllc8ecjB", "Bye-tMJYjH", "Skl-xMJFiS", "Sye-N-yYoB", "HkgB4ykYir", "rkgq0aBAtS", "Hyg4isEH9H" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "4. We are happy to add more comparisons with various waypoint selection rates and neighborhood sizes. To clarify, ‘neighborhood size’ refers to the size of the neighborhood around a waypoint state (wide goal) within which the WN Manager can propose the narrow goal. Intuitively, if the proportion of waypoint states...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, 3, 3 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 5, 5 ]
[ "SkgqLTGqir", "SkgqLTGqir", "Bye-tMJYjH", "iclr_2020_BkgRe1SFDS", "HkgB4ykYir", "HJeOJlV3FS", "rkgq0aBAtS", "Hyg4isEH9H", "Hyg4isEH9H", "iclr_2020_BkgRe1SFDS", "iclr_2020_BkgRe1SFDS" ]
iclr_2020_BJgkbyHKDS
Invertible generative models for inverse problems: mitigating representation error and dataset bias
Trained generative models have shown remarkable performance as priors for inverse problems in imaging. For example, Generative Adversarial Network priors permit recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may be unable to represent any particular image because of architectural choices, mode collapse, and bias in the training dataset. In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors for inverse problems such as denoising, compressive sensing, and inpainting. Our formulation is an empirical risk minimization that does not directly optimize the likelihood of images, as one would expect. Instead we optimize the likelihood of the latent representation of images as a proxy, as this is empirically easier. For compressive sensing, our formulation can yield higher accuracy than sparsity priors across almost all undersampling ratios. For the same accuracy on test images, it can use 10-20x fewer measurements. We demonstrate that invertible priors can yield better reconstructions than sparsity priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images.
reject
This paper studies the empirical performance of invertible generative models for compressive sensing, denoising and inpainting. One issue in using generative models in this area has been that they hit an error floor in reconstruction due to mode collapse, etc., i.e. one cannot achieve zero error in reconstruction. The reviewers raised some concerns about the novelty of the approach and the thoroughness of the empirical studies. The authors' response suggests that they are not claiming novelty with respect to the approach but rather its use in compressive techniques. My own understanding is that this error floor is a major problem and removing its effect is a good contribution even without any novelty in the techniques. However, I do agree that a more thorough empirical study would be more convincing. While I cannot recommend acceptance given the scores, I do think this paper has potential and recommend the authors resubmit to a future venue after a thorough revision.
test
[ "SJeePN_AFB", "HyxURBHiiH", "rkgAISBisB", "rygi7BBjjS", "HylG0NrosH", "rkxWorWy5H", "Bye3icUIqB", "SJeomuT_cB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update: I have read the other reviews and the author response and have not changed my evaluation.\n\nRecent work has shown that GANs can be effective for use as priors in inverse problems for images such as compressed sensing, denoising, and inpainting. A drawback is that GANs may have the problem of inexact recon...
[ 6, -1, -1, -1, -1, 1, 3, 6 ]
[ 4, -1, -1, -1, -1, 3, 5, 3 ]
[ "iclr_2020_BJgkbyHKDS", "SJeePN_AFB", "rkxWorWy5H", "Bye3icUIqB", "SJeomuT_cB", "iclr_2020_BJgkbyHKDS", "iclr_2020_BJgkbyHKDS", "iclr_2020_BJgkbyHKDS" ]
iclr_2020_Byl1W1rtvH
Recurrent Hierarchical Topic-Guided Neural Language Models
To simultaneously capture syntax and semantics from a text corpus, we propose a new larger-context language model that extracts recurrent hierarchical semantic structure via a dynamic deep topic model to guide natural language generation. Moving beyond a conventional language model that ignores long-range word dependencies and sentence order, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence topic dependencies. For inference, we develop a hybrid of stochastic-gradient MCMC and recurrent autoencoding variational Bayes. Experimental results on a variety of real-world text corpora demonstrate that the proposed model not only outperforms state-of-the-art larger-context language models, but also learns interpretable recurrent multilayer topics and generates diverse sentences and paragraphs that are syntactically correct and semantically coherent.
reject
This paper was a very difficult case. All three original reviewers of the paper had never published in the area, and all of them advocated for acceptance of the paper. I, on the other hand, am an expert in the area who has published many papers, and I thought that while the paper is well-written and experimental evaluation is not incorrect, the method was perhaps less relevant given current state-of-the-art models. In addition, the somewhat non-standard evaluation was perhaps causing this fact to be masked. I asked the original reviewers to consider my comments multiple times both during the rebuttal period and after, and unfortunately none of them replied. Because of this, I elicited two additional reviews from people I knew were experts in the field. The reviews are below. I sent the PDF to the reviewers directly, and asked them to not look at the existing reviews (or my comments) when doing their review in order to make sure that they were making a fair assessment. Long story short, Reviewer 4 essentially agreed with my concerns and pointed out a few additional clarity issues. Reviewer 5 pointed out a number of clarity issues and was also concerned with the fact that d_j has access to all other sentences (including those following the current sentence). I know that at the end of Section 2 it is noted that at test time d_j only refers to previous sentences, but if so there is also a training-testing disconnect in model training, and it seems that this would hurt the model results. Based on this, I have decided to favor the opinions of three experts (me and the two additional reviewers) over the opinions of the original three reviewers, and not recommend the paper for acceptance at this time. 
In order to improve the paper I would suggest the following (1) an acknowledgement of standard methods to incorporate context by processing sequences consisting of multiple sentences simultaneously, (2) a more thorough comparison with state-of-the-art models that consider cross-sentential context on standard datasets such as WikiText or PTB. I would encourage the authors to consider this as they revise their paper. Finally, I would like to apologize to the authors that they did not get a chance to reply to the second set of reviews. As I noted above, I did try to make my best effort to encourage discussion during the rebuttal period.
train
[ "HygW6Tz7ar", "SJxPxG6G6H", "B1e7g2pjjS", "B1xyV5aior", "SyezH_ajoH", "HJlLML6ooB", "H1xrSldCKr", "rkxufFDxqS", "BJxNohr45S", "HJgqifHA5r", "B1x_ZuxnqH", "ByleVGe25S", "BJeimxqd9S", "H1xzIs-2PH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "public" ]
[ "[Additional review]\nThis paper proposes a technique to incorporate document-level topic model information into language models. \n\nWhile the underlying idea is interesting, my biggest issue is with the misleading assertions at the very beginning of the paper. In the second paragraph of Section 1, the paper claim...
[ 1, 1, -1, -1, -1, -1, 8, 8, 8, -1, -1, -1, -1, -1 ]
[ 3, 5, -1, -1, -1, -1, 3, 3, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Byl1W1rtvH", "iclr_2020_Byl1W1rtvH", "HJgqifHA5r", "H1xrSldCKr", "rkxufFDxqS", "BJxNohr45S", "iclr_2020_Byl1W1rtvH", "iclr_2020_Byl1W1rtvH", "iclr_2020_Byl1W1rtvH", "ByleVGe25S", "H1xzIs-2PH", "BJeimxqd9S", "iclr_2020_Byl1W1rtvH", "iclr_2020_Byl1W1rtvH" ]
iclr_2020_B1xeZJHKPB
Aggregating explanation methods for neural networks stabilizes explanations
Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation. Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. The aggregation is more robust and aligns better with the neural network than any single explanation method. Second, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and is not reliant on the alignment of neural networks and human decision processes.
reject
This paper describes a new method for explaining the predictions of a CNN on a particular image. The method is based on aggregating the explanations of several methods. They also describe a new method of evaluating explanation methods which avoids manual evaluation of the explanations. However, the most critical reviewer questions the contribution of the proposed method, which is simple. Simple isn't always a bad thing, but I think here the reviewer has a point. The new method for evaluating explanation methods is interesting, but the sample images given are also very simple -- how does the method work when the image is cluttered? How about when the prediction is uncertain or wrong?
train
[ "Bkejrz1otS", "Hkev67lhsS", "S1eXG9ujFr", "HklB3JnsjH", "H1lAol5Eor", "HyxN6xcVjH", "Hke7ve94jS", "Bkxoi19EsB", "HJeX4JqEoS", "Bye0ht1B9B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper presents a study on explanation methods, proposing an interesting way to aggregate their results and providing empirical evidence that aggregation can improve the quality of the explanations.\n\nThe paper considered only methods using CNN for classifying images, leaving other applications for future inve...
[ 8, -1, 3, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, 5, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_B1xeZJHKPB", "HyxN6xcVjH", "iclr_2020_B1xeZJHKPB", "H1lAol5Eor", "Hke7ve94jS", "Bkejrz1otS", "S1eXG9ujFr", "Bye0ht1B9B", "iclr_2020_B1xeZJHKPB", "iclr_2020_B1xeZJHKPB" ]
iclr_2020_rkglZyHtvH
Refining the variational posterior through iterative optimization
Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks. A key challenge of variational inference is to approximate the posterior over model parameters with a distribution that is simpler and tractable yet sufficiently expressive. In this work, we propose a method for training highly flexible variational distributions by starting with a coarse approximation and iteratively refining it. Each refinement step makes cheap, local adjustments and only requires optimization of simple variational families. We demonstrate theoretically that our method always improves a bound on the approximation (the Evidence Lower BOund) and observe this empirically across a variety of benchmark tasks. In experiments, our method consistently outperforms recent variational inference methods for deep learning in terms of log-likelihood and the ELBO. We see that the gains are further amplified on larger scale models, significantly outperforming standard VI and deep ensembles on residual networks on CIFAR10.
reject
In this paper a method for refining the variational approximation is proposed. The reviewers liked the contribution, but a number of reservations, such as missing references, dropped the paper below the acceptance threshold. The authors are encouraged to revise the paper and submit it to the next conference. Reject.
train
[ "B1e_yO12jr", "BylUth6tqS", "rkgkfz39sr", "r1gWlfvuor", "HJxI_WBOjS", "HJgWtgS_oB", "rJeLMyHdjB", "SyxEr0E_ir", "BkgJiljxjB", "ryeMUJ66tS", "r1gLsnqAKS", "HygKbYcl5S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "Thanks for your response.\n\n1. On the guarantee of improvement. Thanks for adding the formal proof, which is helpful and addresses my doubts. The discussion on the improvement over ELBO_init is still a bit hard to follow, due to the different notations involved and the order in which things are presented. I would...
[ -1, 6, -1, -1, -1, -1, -1, -1, 3, 6, 6, -1 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, 3, 4, -1 ]
[ "HJgWtgS_oB", "iclr_2020_rkglZyHtvH", "HJxI_WBOjS", "iclr_2020_rkglZyHtvH", "BylUth6tqS", "ryeMUJ66tS", "BkgJiljxjB", "r1gLsnqAKS", "iclr_2020_rkglZyHtvH", "iclr_2020_rkglZyHtvH", "iclr_2020_rkglZyHtvH", "iclr_2020_rkglZyHtvH" ]
iclr_2020_S1eWbkSFPS
GRAPHS, ENTITIES, AND STEP MIXTURE
Graph neural networks have shown promising results for representing and analyzing diverse graph-structured data such as social, citation, and protein interaction networks. Existing approaches commonly suffer from the oversmoothing issue, regardless of whether policies are edge-based or node-based for neighborhood aggregation. Most methods also focus on transductive scenarios for fixed graphs, leading to poor generalization performance on unseen graphs. To address these issues, we propose a new graph neural network model that considers both edge-based neighborhood relationships and node-based entity features, i.e., Graph Entities with Step Mixture via random walk (GESM). GESM employs a mixture of various steps through random walk to alleviate the oversmoothing problem, and attention to use node information explicitly. These two mechanisms allow for a weighted neighborhood aggregation which considers the properties of entities and relations. Through extensive experiments, we show that the proposed GESM achieves state-of-the-art or comparable performance on four benchmark graph datasets comprising transductive and inductive learning tasks. Furthermore, we empirically demonstrate the significance of considering global information. The source code will be publicly available in the near future.
reject
Two reviewers are concerned about this paper while the other one is slightly positive. A reject is recommended.
train
[ "SJlsm7i6YH", "Hyeja6OUoH", "BygBly6Ijr", "Skegat3LoH", "rJgQEo2IiS", "SJxIP0_UjH", "HJxkOirdjr", "SyxfZ23UiB", "SJg4oGBstr", "S1eQBPadqH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents two models, namely GSM and GESM, to tackle the problem of transductive and inductive node classification. GSM is operating on asymmetric transition matrices and works by stacking propagation layers of different locality, where the final prediction is based on all propagation steps (JK concatena...
[ 3, -1, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2020_S1eWbkSFPS", "S1eQBPadqH", "SJg4oGBstr", "SJlsm7i6YH", "SJlsm7i6YH", "S1eQBPadqH", "iclr_2020_S1eWbkSFPS", "SJlsm7i6YH", "iclr_2020_S1eWbkSFPS", "iclr_2020_S1eWbkSFPS" ]
iclr_2020_Syx7WyBtwB
Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
For an explanation of a deep learning model to be effective, it must provide both insight into a model and suggest a corresponding action in order to achieve some objective. Too often, the litany of proposed explainable deep learning methods stop at the first step, providing practitioners with insight into a model, but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods in order to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets.
reject
The paper contains interesting ideas for giving simple explanations to a NN; however, the reviewers do not feel the contribution is sufficiently novel to merit acceptance.
train
[ "rkeCgD1Gir", "HyeYjLkMor", "rJgWqUyGsB", "S1l8xL1GjB", "SyxRnHJMiH", "Hkgqk1x6Fr", "rklgSCRptr", "BJxDW5Jl5B", "Bkg-Tf8j_B", "SkgINhwquH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We would like to thank all reviewers for their time and effort. We have responded to their concerns below, and made the following changes to the manuscript as a result:\n\n- We have added references to Zaidan 2007 and Strout 2019\n\n- Per the comment from Joseph Janizek (author of the expected gradients paper), we...
[ -1, -1, -1, -1, -1, 6, 3, 3, -1, -1 ]
[ -1, -1, -1, -1, -1, 4, 5, 3, -1, -1 ]
[ "iclr_2020_Syx7WyBtwB", "rJgWqUyGsB", "Hkgqk1x6Fr", "rklgSCRptr", "BJxDW5Jl5B", "iclr_2020_Syx7WyBtwB", "iclr_2020_Syx7WyBtwB", "iclr_2020_Syx7WyBtwB", "SkgINhwquH", "iclr_2020_Syx7WyBtwB" ]
iclr_2020_ByeVWkBYPH
Neural Networks for Principal Component Analysis: A New Loss Function Provably Yields Ordered Exact Eigenvectors
In this paper, we propose a new loss function for performing principal component analysis (PCA) using linear autoencoders (LAEs). Optimizing the standard L2 loss results in a decoder matrix that spans the principal subspace of the sample covariance of the data, but fails to identify the exact eigenvectors. This downside originates from an invariance that cancels out in the global map. Here, we prove that our loss function eliminates this issue, i.e., the decoder converges to the exact ordered unnormalized eigenvectors of the sample covariance matrix. For this new loss, we establish that all local minima are global optima and also show that computing the new loss (and its gradients) has the same order of complexity as the classical loss. We report numerical results on both synthetic simulations and a real-data PCA experiment on MNIST (i.e., a 60,000 x 784 matrix), demonstrating our approach to be practically applicable and to rectify previous LAEs' downsides.
reject
Quoting from R3: "This paper proposes and analyzes a new loss function for linear autoencoders (LAEs) whose minima directly recover the principal components of the data. The core idea is to simultaneously solve a set of MSE LAE problems with tied weights and increasingly stringent masks on the encoder/decoder matrices." With two weak acceptance recommendations and a recommendation for rejection, this paper is borderline in terms of its scores. The approach and idea are interesting. The main shortcoming of the paper, as highlighted by the reviewers, is that the approach and theoretical analysis are not properly motivated to solve an actual problem faced in real-world data. The approach does not provide a better algorithm for recovering the eigenvectors of the data, nor is it proposed as part of a learning framework to solve a real-world problem. Experiments are shown on synthetic data and MNIST. As a stand-alone theoretical result, it leaves open questions as to the proposed utility.
train
[ "S1gCqzfYsH", "HkeAmzftor", "r1g7nlftiS", "HJebJxzKiH", "HkxnO6hitS", "SylQ9IDCtB", "Byl_DpEy9S" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This review has been extremely useful---responding to it has broadened our understanding of our submission and enabled us to identify several connections that were heretofore less clear to us. \n\nWe would love to hear your feedback on the following discussions inspired by your review, and we will be more than hap...
[ -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 1, 3, 3 ]
[ "Byl_DpEy9S", "Byl_DpEy9S", "SylQ9IDCtB", "HkxnO6hitS", "iclr_2020_ByeVWkBYPH", "iclr_2020_ByeVWkBYPH", "iclr_2020_ByeVWkBYPH" ]
iclr_2020_HylNWkHtvB
Domain-Independent Dominance of Adaptive Methods
From a simplified analysis of adaptive methods, we derive AvaGrad, a new optimizer which outperforms SGD on vision tasks when its adaptability is properly tuned. We observe that the power of our method is partially explained by a decoupling of learning rate and adaptability, greatly simplifying hyperparameter search. In light of this observation, we demonstrate that, against conventional wisdom, Adam can also outperform SGD on vision tasks, as long as the coupling between its learning rate and adaptability is taken into account. In practice, AvaGrad matches the best results, as measured by generalization accuracy, delivered by any existing optimizer (SGD or adaptive) across image classification (CIFAR, ImageNet) and character-level language modelling (Penn Treebank) tasks. This latter observation, alongside AvaGrad's decoupling of hyperparameters, could make it the preferred optimizer for deep learning, replacing both SGD and Adam.
reject
This paper proposes an adaptive gradient method for optimization in deep learning called AvaGrad. The authors argue that AvaGrad greatly simplifies hyperparameter search (over e.g. ADAM) and demonstrate competitive performance on benchmark image and text problems. In thorough reviews, a thorough author response, and discussion by the reviewers (which are all appreciated), a few concerns about the work came to light and were debated. One reviewer was compelled by the author response to raise their recommendation to weak accept. However, none of the reviewers felt strongly enough to champion the paper for acceptance, and even the reviewer assigning the highest score had reservations. A major issue of debate was the treatment of hyperparameters, i.e. that the authors tuned hyperparameters on a smaller problem and then assumed these would extrapolate to larger problems. In a largely empirical paper this does seem to be a significant concern. The space of adaptive optimizers for deep learning is a crowded one, and thus the empirical (or theoretical) burden of proof of superiority is high. The authors state regarding a concurrent submission: "when hyperparameters are properly tuned, echoing our results on this matter"; however, it seems that the reviewers disagree that the hyperparameters are indeed properly tuned in this paper. It's due to these remaining reservations that the recommendation is to reject.
test
[ "rklCY7VJ9r", "HkgBFoJCKS", "BJggfpXoiH", "SJe0hjnFoH", "HJeDIo3Kir", "SJlv1snKjS", "H1xb4q2toB", "H1gL5F3Yir", "SyltgthtiH", "H1g0TuhKir", "r1lrEWi3YB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors present a new adaptive gradient method AvaGrad. The authors claim the proposed method is less sensitive to its hyperparameters, compared to previous algorithms, and this is due to decoupling the learning rate and the damping parameter.\n\nOverall, the paper is well written, and is on an ...
[ 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_HylNWkHtvB", "iclr_2020_HylNWkHtvB", "H1gL5F3Yir", "iclr_2020_HylNWkHtvB", "rklCY7VJ9r", "rklCY7VJ9r", "HkgBFoJCKS", "HkgBFoJCKS", "r1lrEWi3YB", "r1lrEWi3YB", "iclr_2020_HylNWkHtvB" ]
iclr_2020_BJlrZyrKDB
Statistically Consistent Saliency Estimation
The use of deep learning for a wide range of data problems has increased the need for understanding and diagnosing these models, and deep learning interpretation techniques have become an essential tool for data analysts. Although numerous model interpretation methods have been proposed in recent years, most of these procedures are based on heuristics with little or no theoretical guarantees. In this work, we propose a statistical framework for saliency estimation for black box computer vision models. We build a model-agnostic estimation procedure that is statistically consistent and passes the saliency checks of Adebayo et al. (2018). Our method requires solving a linear program, whose solution can be efficiently computed in polynomial time. Through our theoretical analysis, we establish an upper bound on the number of model evaluations needed to recover the region of importance with high probability, and build a new perturbation scheme for estimation of local gradients that is shown to be more efficient than the commonly used random perturbation schemes. Validity of the new method is demonstrated through sensitivity analysis.
reject
This submission proposes a statistically consistent saliency estimation method for visual model explainability. Strengths: -The method is novel, interesting, and passes some recently proposed sanity checks for these methods. Weaknesses: -The evaluation was flawed in several aspects. -The readability needed improvement. After the author feedback period remaining issues were: -A discussion of two points is missing: (i) why are these models so sensitive to the resolution of the saliency map? How does the performance of LEG change with the resolution (e.g. does it degrade for higher resolution?)? (ii) Figure 6 suggests that SHAP performs best at identifying "pixels that are crucial for the predictions". However, the authors use Figure 7 to argue that LEG is better at identifying salient "pixels that are more likely to be relevant for the prediction". These two observations are contradictory and should be resolved. -The evaluation is still missing some key details for interpreting the results. For example, how representative are the 3 images chosen in Figure 7? Also, in section 5.1 the authors don't describe how many images are included in their sanity check analysis or how those images were chosen. -The new discussion section is not actually a discussion section but a conclusion/summary section. Because of these issues, AC believes that the work is theoretically interesting but has not been sufficiently validated experimentally and does not give the reader sufficient insight into how it works and how it compares to other methods. Note also that the submission is also now more than 9 pages long, which requires that it be held to a higher standard of acceptance. Reviewers largely agreed with the stated shortcomings but were divided on their significance. AC shares the recommendation to reject.
train
[ "rkgorCQj5B", "Bkxs5ykUcB", "B1eTWxW2FB", "B1xxQlE2or", "r1g_1TGNjB", "Bylrty7Njr", "ByxPoy74iH", "B1e-3t6VjH", "r1gYNWXVsr", "HyeyFAMViS", "S1xcj6rAKH", "SyekXJ1AYH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\nThis paper proposes an attribution method, linearly estimated gradient (LEG) for deep networks in the image setting.\nThe paper also introduces a variant of the estimator called LEG-TV, which includes a TV penalty, and provides a \ntheorem on the convergence rate of the estimator. The paper finds that the...
[ 8, 8, 3, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, 5, 1, -1, -1, -1, -1, -1, -1, -1, 1, 5 ]
[ "iclr_2020_BJlrZyrKDB", "iclr_2020_BJlrZyrKDB", "iclr_2020_BJlrZyrKDB", "iclr_2020_BJlrZyrKDB", "S1xcj6rAKH", "rkgorCQj5B", "rkgorCQj5B", "B1eTWxW2FB", "Bkxs5ykUcB", "SyekXJ1AYH", "iclr_2020_BJlrZyrKDB", "iclr_2020_BJlrZyrKDB" ]
iclr_2020_H1gHb1rFwr
Extreme Values are Accurate and Robust in Deep Networks
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures, making CNNs non-robust to adversarial perturbations over textures, whereas traditional robust visual features like SIFT (scale-invariant feature transforms) are designed to be robust across a substantial range of affine distortion, addition of noise, etc., mimicking the nature of human perception. This paper aims to leverage the good properties of SIFT to renovate CNN architectures towards better accuracy and robustness. We borrow the scale-space extreme value idea from SIFT and propose EVPNet (extreme value preserving network), which contains three novel components to model the extreme values: (1) parametric differences of Gaussian (DoG) to extract extrema, (2) truncated ReLU to suppress non-stable extrema, and (3) a projected normalization layer (PNL) to mimic PCA-SIFT-like feature normalization. Experiments demonstrate that EVPNets can achieve similar or better accuracy than conventional CNNs, while achieving much better robustness against a set of adversarial attacks (FGSM, PGD, etc.), even without adversarial training.
reject
This manuscript proposed biologically-inspired modifications to convolutional neural networks including differences of Gaussians convolutional filter, a truncated ReLU, and a modified projected normalization layer. The authors' results indicate that the modifications improve performance as well as improved robustness to adversarial attacks. The reviewers and AC agree that the problem studied is timely and interesting, and closely related to a variety of recent work on robust model architectures. However, this manuscript also received quite divergent reviews, resulting from differences in opinion about the novelty and importance of the results. In reviews and discussion, the reviewers noted issues with clarity of the presentation and sufficient justification of the approach and results. In the opinion of the AC, the manuscript in its current state is borderline and could be improved with more convincing empirical justification.
train
[ "ryevdCnYoB", "BJlcCanKjH", "H1lWGT3YiH", "SyeEkVTKjH", "r1lcfqfsYS", "Sylz5pxycr", "rklYN27c9H" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank the reviewer for the helpful feedback and suggestions. \n\n1. ## Details on How pDOG replace DoG ##\nIn DoG, the operation is shown in Eq-2,\n $D(x,y,\\sigma) = G(x, y, \\sigma) \\otimes I_1 – G(x,y, \\sigma) \\otimes I_0$.\nWhere $\\sigma$ is pre-designed Gaussian kernel size, $\\otimes$ m...
[ -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, 5, 5, 3 ]
[ "r1lcfqfsYS", "Sylz5pxycr", "rklYN27c9H", "iclr_2020_H1gHb1rFwr", "iclr_2020_H1gHb1rFwr", "iclr_2020_H1gHb1rFwr", "iclr_2020_H1gHb1rFwr" ]
iclr_2020_BkgHWkrtPB
Where is the Information in a Deep Network?
Whatever information a deep neural network has gleaned from past data is encoded in its weights. How this information affects the response of the network to future data is largely an open question. In fact, even how to define and measure information in a network entails some subtleties. We measure information in the weights of a deep neural network as the optimal trade-off between accuracy of the network and complexity of the weights relative to a prior. Depending on the prior, the definition reduces to known information measures such as Shannon Mutual Information and Fisher Information, but in general it affords added flexibility that enables us to relate it to generalization, via the PAC-Bayes bound, and to invariance. For the latter, we introduce a notion of effective information in the activations, which are deterministic functions of future inputs. We relate this to the Information in the Weights, and use this result to show that models of low (information) complexity not only generalize better, but are bound to learn invariant representations of future inputs. These relations hinge not only on the architecture of the model, but also on how it is trained.
reject
This paper is full of ideas. However, a logical argument is only as strong as its weakest link, and I believe the current paper has some weak links. For example, the attempt to tie the behavior of SGD to free energy minimization relies on unrealistic approximations. Second, the bounds based on limiting flat priors become trivial. The authors' in-depth response to my own review was much appreciated, especially given its last-minute appearance. Unfortunately, I was not convinced by the arguments. In part, the authors argue that the logical argument they are making is not sensitive to certain issues that I raised, but this only highlights for me that the argument being made is not very precise. I can imagine a version of this work with sharper claims, built on clearly stated assumptions/conjectures about SGD's dynamics, RATHER THAN being framed as the consequences of clearly inaccurate approximations. The behavior of diffusions can be presented as evidence that the assumptions/conjectures (that cannot be proven at the moment, but which are needed to complete the logical argument) are reasonable. However, I am also not convinced that it is trivial to do this, and so the community must have a chance to review a major revision.
train
[ "BJgvM-q3jr", "Sylav-5hoH", "r1x4pAt2iB", "HJgCRGf3sr", "rker010oiH", "BJgSc6ajsB", "H1gk-6ajjS", "HJlrFsTijr", "rJl_CN_DKr", "Hygat6E1qH", "Sye3oGRmcH" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ ">> 2. Curvature assumption in Proof of Prop 3.4\n\nThe fact that the quadratic approximation is valid does not imply that the curvature is constant. It simply means that higher-order terms are negligible, which can happen while the curvature changes along the path. Proposition 3.4 gives the optimal value of the i...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 1, 4 ]
[ "r1x4pAt2iB", "BJgvM-q3jr", "HJgCRGf3sr", "iclr_2020_BkgHWkrtPB", "rJl_CN_DKr", "Hygat6E1qH", "HJlrFsTijr", "Sye3oGRmcH", "iclr_2020_BkgHWkrtPB", "iclr_2020_BkgHWkrtPB", "iclr_2020_BkgHWkrtPB" ]
iclr_2020_ryg8WJSKPr
ConQUR: Mitigating Delusional Bias in Deep Q-Learning
Delusional bias is a fundamental source of error in approximate Q-learning. To date, the only techniques that explicitly address delusion require comprehensive search using tabular value estimates. In this paper, we develop efficient methods to mitigate delusional bias by training Q-approximators with labels that are "consistent" with the underlying greedy policy class. We introduce a simple penalization scheme that encourages Q-labels used across training batches to remain (jointly) consistent with the expressible policy class. We also propose a search framework that allows multiple Q-approximators to be generated and tracked, thus mitigating the effect of premature (implicit) policy commitments. Experimental results demonstrate that these methods can improve the performance of Q-learning in a variety of Atari games, sometimes dramatically.
reject
While there was some support for the ideas presented, the majority of reviewers felt that this submission is not ready for publication at ICLR in its present form. Concerns raised included the need for better motivation of the practicality of the approach, versus its computational cost. The need for improved evaluations was also raised.
train
[ "ryl0605msH", "SJlzn09QsS", "S1lVB0cmir", "SkxHeCc7ir", "H1gFcoFBYH", "BkxgvOhitH", "BklDpulB9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the constructive feedback and for the detailed questions regarding our experiments. Some brief responses to each of your numbered points in turn.\n\n1. [WHY ORDER OF MAGNITUDE CHANGE] The key difference between (1) the consistency-penalty experiment and (2) the full ConQUR experiments is that the for...
[ -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, 5, 3, 5 ]
[ "H1gFcoFBYH", "H1gFcoFBYH", "BkxgvOhitH", "BklDpulB9r", "iclr_2020_ryg8WJSKPr", "iclr_2020_ryg8WJSKPr", "iclr_2020_ryg8WJSKPr" ]
iclr_2020_rkgIW1HKPB
Unsupervised Representation Learning by Predicting Random Distances
Deep neural networks have gained tremendous success in a broad range of machine learning tasks due to their remarkable capability to learn semantic-rich features from high-dimensional data. However, they often require large-scale labelled data to successfully learn such features, which significantly hinders their adaptation to unsupervised learning tasks, such as anomaly detection and clustering, and limits their applications in critical domains where obtaining massive labelled data is prohibitively expensive. To enable downstream unsupervised learning in those domains, in this work we propose to learn features without using any labelled data by training neural networks to predict data distances in a randomly projected space. Random mapping is a highly efficient yet theoretically proven approach to obtain approximately preserved distances. To predict these random distances well, the representation learner is optimised to learn class structures that are implicitly embedded in the randomly projected space. Experimental results on 19 real-world datasets show our learned representations substantially outperform state-of-the-art competing methods in both anomaly detection and clustering tasks.
reject
The reviewers agree that this is an interesting paper but that it required major modifications. After the rebuttal, the paper is much improved but unfortunately not yet above the bar. We encourage the authors to iterate on this work again.
train
[ "BJl5zFAisS", "Hkxfxi0jjH", "SyejtPCosB", "BJlMz_AooH", "rkxisntMor", "SJxM3ExoFS", "SyeWNBsjFr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your positive and constructive comments, which helps substantially refine our paper. Your concerns are addressed as follows.\n\n1. The discussion of two optional losses. In the refined paper, we have created the Section 2.2. to focus on the discussion of the two optional losses, and Sections 1 and 2.1 h...
[ -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, 3, 3, 4 ]
[ "SJxM3ExoFS", "iclr_2020_rkgIW1HKPB", "rkxisntMor", "SyeWNBsjFr", "iclr_2020_rkgIW1HKPB", "iclr_2020_rkgIW1HKPB", "iclr_2020_rkgIW1HKPB" ]
iclr_2020_rJgDb1SFwB
MGP-AttTCN: An Interpretable Machine Learning Model for the Prediction of Sepsis
With a mortality rate of 5.4 million lives worldwide every year and a healthcare cost of more than 16 billion dollars in the USA alone, sepsis is one of the leading causes of hospital mortality and an increasing concern in the ageing western world. Recently, medical and technological advances have helped re-define the illness criteria of this disease, which is otherwise poorly understood by the medical community. Together with the rise of widely accessible Electronic Health Records, the advances in data mining and complex nonlinear algorithms are a promising avenue for the early detection of sepsis. This work contributes to the research effort in the field of automated sepsis detection with an open-access labelling of the medical MIMIC-III data set. Moreover, we propose MGP-AttTCN: a joint multitask Gaussian Process and attention-based deep learning model to predict the occurrence of sepsis early and in an interpretable manner. We show that our model outperforms the current state-of-the-art and present evidence that different labelling heuristics lead to discrepancies in task difficulty.
reject
The problem of introducing interpretability into sepsis prediction frameworks is one that I find a very important contribution, and I personally like the ideas presented in this paper. However, two reviewers, who have experience at the boundary of ML and HC, flag this paper as currently not focusing enough on the technical novelty, nor explaining the HC application well enough, to be appreciated by the ICLR audience. As such, my recommendation is to edit the exposition so that it is more appropriate for a general ML audience, or to submit it to an ML for HC meeting. Great work, and I hope it finds the right audience/focus soon.
train
[ "BylDbvpdtB", "rJgYwrPKoH", "BygirHwtir", "rylwMBDKsr", "r1ge0ZnWcS", "H1gqUrzD5H" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I've read the rebuttal and I'd like to keep my score as is. My main concern is the questionable role of attention in making the model more interpretable (which is the main contribution of the paper).\n\n###########################\n\nThe paper proposes a new model for automated sepsis detection using multitask GP ...
[ 3, -1, -1, -1, 1, 8 ]
[ 3, -1, -1, -1, 4, 3 ]
[ "iclr_2020_rJgDb1SFwB", "BylDbvpdtB", "r1ge0ZnWcS", "H1gqUrzD5H", "iclr_2020_rJgDb1SFwB", "iclr_2020_rJgDb1SFwB" ]
iclr_2020_ryxPbkrtvr
BOSH: An Efficient Meta Algorithm for Decision-based Attacks
Adversarial example generation has become a viable method for evaluating the robustness of a machine learning model. In this paper, we consider hard-label black-box attacks (a.k.a. decision-based attacks), a challenging setting in which adversarial examples are generated based on only a series of black-box hard-label queries. This type of attack can be used against discrete and complex models, such as Gradient Boosting Decision Trees (GBDT) and detection-based defense models. Existing decision-based attacks based on iterative local updates often get stuck in a local minimum and fail to generate the optimal adversarial example with the smallest distortion. To remedy this issue, we propose an efficient meta algorithm called BOSH-attack, which tremendously improves existing algorithms through Bayesian Optimization (BO) and Successive Halving (SH). In particular, instead of traversing a single solution path when searching for an adversarial example, we maintain a pool of solution paths to explore important regions. We show empirically that the proposed algorithm converges to a better solution than existing approaches, while requiring 10x fewer queries than applying multiple random initializations.
reject
This paper proposes BOSH-attack, a meta-algorithm for decision-based attacks, where a model that can be accessed only via label queries for a given input is attacked by a minimal perturbation to the input that changes the predicted label. BOSH improves over existing local update algorithms by leveraging Bayesian Optimization (BO) and Successive Halving (SH). The paper makes valuable contributions, but various improvements, as detailed in the review comments, can be made to further strengthen the manuscript.
train
[ "rkxncuZQYH", "rylxbjI3sr", "HygDXsL2jS", "HJlHc5UnsS", "HylNib0yqB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors study the adversarial example generation problem, in the difficult case where the attacked model is a black box. Since the model is unknown, the approaches based on the minimization of a loss function with a gradient based optimizer do not apply. The current alternatives, known as decisi...
[ 3, -1, -1, -1, 3 ]
[ 3, -1, -1, -1, 3 ]
[ "iclr_2020_ryxPbkrtvr", "HylNib0yqB", "HylNib0yqB", "rkxncuZQYH", "iclr_2020_ryxPbkrtvr" ]
iclr_2020_BygKZkBtDH
Balancing Cost and Benefit with Tied-Multi Transformers
This paper proposes a novel procedure for training multiple Transformers with tied parameters, which compresses multiple models into one and enables the dynamic choice of the number of encoder and decoder layers during decoding. In sequence-to-sequence modeling, typically, the output of the last layer of the N-layer encoder is fed to the M-layer decoder, and the output of the last decoder layer is used to compute the loss. Instead, our method computes a single loss consisting of NxM losses, where each loss is computed from the output of one of the M decoder layers connected to one of the N encoder layers. A single model trained by our method subsumes multiple models with different numbers of encoder and decoder layers, and can be used for decoding with fewer than the maximum number of encoder and decoder layers. We then propose a mechanism to choose a priori the number of encoder and decoder layers for faster decoding, and also explore recurrent stacking of layers and knowledge distillation to enable further parameter reduction. In a case study of neural machine translation, we present a cost-benefit analysis of the proposed approaches and empirically show that they greatly reduce decoding costs while preserving translation quality.
reject
The paper proposes a method for training multiple Transformers with tied parameters, enabling a dynamic choice of the number of encoder and decoder layers. The method is evaluated in neural machine translation and shown to reduce decoding costs without compromising translation quality. The reviewers generally agreed that the proposed method is interesting, but raised issues regarding the significance of the claimed benefits and the overall presentation quality of the paper. Based on a consensus reached in a post-rebuttal discussion with the reviewers, I recommend rejecting this paper.
train
[ "Byx3UPgOiB", "S1ewpvg_sr", "HJl57DgdjH", "Syl23DhcFr", "B1liTs2TYS", "H1gErESpKr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank you for your review and for taking the time to read our paper thoroughly.\n\nOur responses to your questions are as follows:\n\n1. Thank you for your suggestion regarding the reorganization of the paper. The basic model, which includes the multi-softmax functions, is the main point of the paper. Dynamic l...
[ -1, -1, -1, 1, 6, 1 ]
[ -1, -1, -1, 5, 5, 5 ]
[ "H1gErESpKr", "Syl23DhcFr", "B1liTs2TYS", "iclr_2020_BygKZkBtDH", "iclr_2020_BygKZkBtDH", "iclr_2020_BygKZkBtDH" ]
iclr_2020_rJxqZkSFDB
Searching to Exploit Memorization Effect in Learning from Corrupted Labels
Sample-selection approaches, which attempt to pick up clean instances from the training data set, have become one promising direction for robust learning from corrupted labels. These methods all build on the memorization effect, which means deep networks learn easy patterns first and then gradually over-fit the training data set. In this paper, we show that properly selecting instances so that the training process benefits the most from the memorization effect is a hard problem. Specifically, memorization can heavily depend on many factors, e.g., the data set and network architecture. Nonetheless, there still exist general patterns of how memorization occurs. These facts motivate us to exploit memorization by automated machine learning (AutoML) techniques. First, we design an expressive but compact search space based on the observed general patterns. Then, we propose a natural gradient-based search algorithm to efficiently search through the space. Finally, extensive experiments on both synthetic data sets and benchmark data sets demonstrate that the proposed method is not only much more efficient than existing AutoML algorithms but also achieves much better performance than state-of-the-art approaches for learning from corrupted labels.
reject
This paper develops a method for sample selection that exploits the memorization effect. While the paper has been substantially improved from its original form, the paper still does not meet the quality bar of ICLR in terms of presentation of the results and experimental validation. The paper will benefit from a revision and resubmission to another venue.
train
[ "HJx3cO08jB", "HkgpJ90UiS", "rJeIh5CUiB", "HylWH90Uor", "rkeYaKRIiB", "ryl61yYAtr", "Skg9z1HZcH", "BkxT0H5DcB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comments. Please note that you are \"ICLR 2020 Conference Paper1554 AnonReviewer1\".\n\nQ1. It introduces too many basic concepts in autoML\n\nThanks for the suggestion. In the revised version, we have changed the outline of Section 2.2 and removed unnecessary concepts, e.g., supernet and one-shot....
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 3, 1, 1 ]
[ "ryl61yYAtr", "rkeYaKRIiB", "iclr_2020_rJxqZkSFDB", "BkxT0H5DcB", "Skg9z1HZcH", "iclr_2020_rJxqZkSFDB", "iclr_2020_rJxqZkSFDB", "iclr_2020_rJxqZkSFDB" ]
iclr_2020_SklibJBFDB
Evaluating Semantic Representations of Source Code
Learned representations of source code enable various software developer tools, e.g., to detect bugs or to predict program properties. At the core of code representations often are word embeddings of identifier names in source code, because identifiers account for the majority of source code vocabulary and convey important semantic information. Unfortunately, there currently is no generally accepted way of evaluating the quality of word embeddings of identifiers, and current evaluations are biased toward specific downstream tasks. This paper presents IdBench, the first benchmark for evaluating to what extent word embeddings of identifiers represent semantic relatedness and similarity. The benchmark is based on thousands of ratings gathered by surveying 500 software developers. We use IdBench to evaluate state-of-the-art embedding techniques proposed for natural language, an embedding technique specifically designed for source code, and lexical string distance functions, as these are often used in current developer tools. Our results show that the effectiveness of embeddings varies significantly across different embedding techniques and that the best available embeddings successfully represent semantic relatedness. On the downside, no existing embedding provides a satisfactory representation of semantic similarities, e.g., because embeddings consider identifiers with opposing meanings as similar, which may lead to fatal mistakes in downstream developer tools. IdBench provides a gold standard to guide the development of novel embeddings that address the current limitations.
reject
This paper presents a dataset to evaluate the quality of embeddings learnt for source code. The dataset consists of three different subtasks: relatedness, similarity, and contextual similarity. The main contribution of the paper is the construction of these datasets, which should be useful to the community. However, there are valid concerns raised about the size of the datasets (which is pretty small) and the baselines used to evaluate the embeddings -- there should be a baseline using a contextual embedding model like BERT, which could have been fine-tuned on the source code data. If these comments are addressed, the paper can be a good contribution in an NLP conference. As of now, I recommend rejection.
train
[ "SyljNtrNoB", "BJxZMKBEiS", "B1xgSdrVoH", "HJeSIqF6KS", "Skxrit96Kr", "rygm5dnr9S" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks a lot for your insightful review! We are happy to see that the motivation for our work and the contributions of our paper have been made clear.", "Thanks for your review. Please let us address your three concerns:\n\n1) Importance of identifier embeddings:\nThe first four paragraphs of the paper try to an...
[ -1, -1, -1, 6, 3, 1 ]
[ -1, -1, -1, 4, 3, 4 ]
[ "HJeSIqF6KS", "Skxrit96Kr", "rygm5dnr9S", "iclr_2020_SklibJBFDB", "iclr_2020_SklibJBFDB", "iclr_2020_SklibJBFDB" ]
iclr_2020_B1eibJrtwr
Abstractive Dialog Summarization with Semantic Scaffolds
The demand for abstractive dialog summaries is growing in real-world applications. For example, customer service centers or hospitals would like to summarize customer service interactions and doctor-patient interactions. However, few researchers have explored abstractive summarization on dialogs due to the lack of suitable datasets. We propose an abstractive dialog summarization dataset based on MultiWOZ. If we directly apply previous state-of-the-art document summarization methods on dialogs, there are two significant drawbacks: the informative entities such as restaurant names are difficult to preserve, and the contents from different dialog domains are sometimes mismatched. To address these two drawbacks, we propose Scaffold Pointer Network (SPNet) to utilize the existing annotation on speaker role, semantic slot and dialog domain. SPNet incorporates these semantic scaffolds for dialog summarization. Since ROUGE cannot capture the two drawbacks mentioned, we also propose a new evaluation metric that considers critical informative entities in the text. On MultiWOZ, our proposed SPNet outperforms state-of-the-art abstractive summarization methods on all the automatic and human evaluation metrics.
reject
This paper proposes an approach for abstractive summarization of multi-domain dialogs, called SPNet, that incrementally builds on previous approaches such as pointer-generator networks. SPNet also separately includes speaker role, slot and domain labels, and is evaluated against a new metric, Critical Information Completeness (CIC), to tackle issues with ROUGE. The reviewers raised a set of issues, including the meaningfulness of the task, the incremental nature of the work and lack of novelty, and consistency issues in the write-up. Unfortunately, the authors did not respond to the reviewer comments. I suggest rejecting the paper.
train
[ "S1e6F8NOtB", "HJWCJKttH", "BkeK_KTlcH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper \"Abstractive Dialog Summarization with Semantic Scaffolds\" presents a new architecture that the authors claim is more suited for summarizing dialogues. The dataset for summarization was synthesized from an existing conversation dataset. \n\nThe new architecture is a minor variation of an existing point...
[ 1, 1, 3 ]
[ 4, 3, 5 ]
[ "iclr_2020_B1eibJrtwr", "iclr_2020_B1eibJrtwr", "iclr_2020_B1eibJrtwr" ]
iclr_2020_Skxn-JSYwr
EXPLOITING SEMANTIC COHERENCE TO IMPROVE PREDICTION IN SATELLITE SCENE IMAGE ANALYSIS: APPLICATION TO DISEASE DENSITY ESTIMATION
High intra-class diversity and inter-class similarity are characteristics of remote sensing scene image data sets that currently pose significant difficulty for deep learning algorithms on classification tasks. To improve accuracy, post-classification methods have been proposed for smoothing the results of model predictions. However, those approaches require an additional neural network to perform the smoothing operation, which adds overhead to the task. We propose an approach that learns deep features directly over neighboring scene images without requiring a cleanup model. Our approach utilizes a siamese network to improve the discriminative power of convolutional neural networks on a pair of neighboring scene images. It then exploits the semantic coherence between this pair to enrich the feature vector of the image for which we want to predict a label. Empirical results show that this approach provides a viable alternative to existing methods. For example, our model improved prediction accuracy by 1 percentage point and dropped the mean squared error value by 0.02 over the baseline, on a disease density estimation task. These performance gains are comparable with results from existing post-classification methods, moreover without the implementation overheads.
reject
This paper proposed a solution to the problem of disease density estimation using satellite scene images. The method combines a classification and regression task. The reviewers were unanimous in their recommendation that the submission not be accepted to ICLR. The main concern was a lack of methodological novelty. The authors responded to reviewer comments and indicated a list of improvements that still remain to be done, suggesting that the paper should at least go through another review cycle.
train
[ "B1gzVVJsjS", "rklgjG6Zjr", "HylZ2Rw6FH", "ByewhdO65r" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary of reviewer concerns:\n\n1. Our claim of having developed an approach for improving prediction accuracy for satellite scene image analysis that has greater efficiency than post-classification approaches is not validated with experiments (reviewer #2, #5).\n2. We should compare performance of our model agai...
[ -1, 1, 3, 1 ]
[ -1, 3, 4, 4 ]
[ "HylZ2Rw6FH", "iclr_2020_Skxn-JSYwr", "iclr_2020_Skxn-JSYwr", "iclr_2020_Skxn-JSYwr" ]
iclr_2020_SyxTZ1HYwB
TWO-STEP UNCERTAINTY NETWORK FOR TASK-DRIVEN SENSOR PLACEMENT
Optimal sensor placement achieves the minimal cost of sensors while obtaining the prespecified objectives. In this work, we propose a framework for sensor placement to maximize the information gain, called Two-step Uncertainty Network (TUN). TUN encodes an arbitrary number of measurements, models the conditional distribution of high-dimensional data, and estimates the task-specific information gain at unobserved locations. Experiments on synthetic data show that TUN consistently outperforms the random sampling strategy and the Gaussian Process-based strategy.
reject
This paper proposes a sensor placement strategy based on maximizing the information gain. Instead of using Gaussian processes, the authors apply neural nets as function approximators. A limited empirical evaluation is performed to assess the performance of the proposed strategy. The reviewers raised several major issues, including a lack of novelty, a lack of clarity, and missing critical details in the exposition. The authors did not address any of the raised concerns in the rebuttal. I hence recommend rejection of this paper.
train
[ "BJgf-t36YB", "BJeSdnb1qr" ]
[ "official_reviewer", "official_reviewer" ]
[ "This paper describes a sensor placement strategy based on information gain on an unknown quantity of interest, which already exists in the active learning literature. As is well-known in the literature, this is equivalent to minimizing the expected remaining entropy. What the authors have done differently is to co...
[ 1, 1 ]
[ 5, 3 ]
[ "iclr_2020_SyxTZ1HYwB", "iclr_2020_SyxTZ1HYwB" ]
iclr_2020_Syxp-1HtvB
Semantic Hierarchy Emerges in the Deep Generative Representations for Scene Synthesis
Despite the success of Generative Adversarial Networks (GANs) in image synthesis, there is still a lack of understanding of what networks learn inside the deep generative representations and how photo-realistic images can be composed from random noise. In this work, we show that a highly structured semantic hierarchy emerges from the generative representations as the variation factors for synthesizing scenes. By probing the layer-wise representations with a broad set of visual concepts at different abstraction levels, we are able to quantify the causality between the activations and the semantics occurring in the output image. Such a quantification identifies the human-understandable variation factors learned by GANs to compose scenes. The qualitative and quantitative results suggest that the generative representations learned by GANs are specialized to synthesize different hierarchical semantics: the early layers tend to determine the spatial layout and configuration, the middle layers control the categorical objects, and the later layers finally render the scene attributes as well as the color scheme. Identifying such a set of manipulatable latent semantics facilitates semantic scene manipulation.
reject
The paper proposes to study what information is encoded in different layers of StyleGAN. The authors do so by training classifiers for different layers of latent codes and investigating whether changing the latent code changes the generated output in the expected fashion. The paper received borderline reviews with two weak accepts and one weak reject. Initially, the reviewers were more negative (with one reject, one weak reject, and one weak accept). After the rebuttal, the authors addressed most of the reviewer questions/concerns. Overall, the reviewers thought the results were interesting and appreciated the care the authors took in their investigations. The main concern of the reviewers is that the analysis is limited to only StyleGAN. It would be more interesting and informative if the authors applied their methodology to different GANs. Then they could analyze whether the methodology and findings hold for other types of GANs as well. R1 notes that, given the wide interest in StyleGAN-like models, the work may be of interest to the community despite the limited investigation. The reviewers also point out the writing can be improved to be more precise. The AC agrees that the paper is mostly well written and well presented. However, there are limitations in what is achieved in the paper and it would be of limited interest to the community. The AC recommends that the authors consider improving their work, potentially broadening their investigation to other GAN architectures, and resubmit to an appropriate venue.
train
[ "BylIovgxcr", "B1gSrKSEoB", "SkgTE_BVsS", "B1xx4YHNir", "rJljzdBNsr", "BylzFBrNor", "rJegBk1SqB", "r1gTx_WOcH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Updates after author response:\nI'd like to thank the authors for their detailed responses. Some of my primary concerns were regarding the presentation, and I feel they have been mostly addressed with the changes to the introduction and abstract (I'd still recommend using 'layerwise latent code' instead of 'layerw...
[ 6, -1, -1, -1, -1, -1, 3, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_Syxp-1HtvB", "BylIovgxcr", "rJegBk1SqB", "BylIovgxcr", "rJegBk1SqB", "r1gTx_WOcH", "iclr_2020_Syxp-1HtvB", "iclr_2020_Syxp-1HtvB" ]
iclr_2020_rkgAb1Btvr
Fourier networks for uncertainty estimates and out-of-distribution detection
A simple method for obtaining uncertainty estimates for Neural Network classifiers (e.g. for out-of-distribution detection) is to use an ensemble of independently trained networks and average the softmax outputs. While this method works, its results are still very far from human performance on standard data sets. We investigate how this method works and observe three fundamental limitations: "Unreasonable" extrapolation, "unreasonable" agreement between the networks in an ensemble, and the filtering out of features that distinguish the training distribution from some out-of-distribution inputs, but do not contribute to the classification. To mitigate these problems we suggest "large" initializations in the first layers and changing the activation function to sin(x) in the last hidden layer. We show that this combines the out-of-distribution behavior from nearest neighbor methods with the generalization capabilities of neural networks, and achieves greatly improved out-of-distribution detection on standard data sets (MNIST/fashionMNIST/notMNIST, SVHN/CIFAR10).
reject
This paper presents a new method for detecting out-of-distribution (OOD) samples. A reviewer pointed out that the paper discovers an interesting finding and the addressed problem is important. On the other hand, other reviewers pointed out that the theoretical/empirical justifications are limited. In particular, I think that the experimental support for why the proposed method is superior to existing ones is limited. I encourage the authors to consider more scenarios of OOD detection (e.g., datasets and architectures) and more baselines, as the problem of measuring the confidence of neural networks or detecting outliers has a rich literature. This would provide a more comprehensive understanding of the proposed method. Hence, I recommend rejection.
test
[ "BkxqXb7TFr", "H1xbytdCFH", "BklRXdoFsr", "BkxULksYoS", "SyguRR9FiH", "H1g8Iq1Z5S", "BygwY6oiwr", "HJeo4VSsDB", "rJxZt2o5wr", "HkxSZjucwB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public", "author", "public" ]
[ "** Updates after rebuttal **\n\nI thank the authors for the response, though I am still skeptical about the evaluation of the method, which might be a result of heavy tuning and overfit to the chosen test sets. The proposed approach also requires more theoretical justification.\n\n---------------------------------...
[ 3, 6, -1, -1, -1, 1, -1, -1, -1, -1 ]
[ 1, 4, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "iclr_2020_rkgAb1Btvr", "iclr_2020_rkgAb1Btvr", "H1xbytdCFH", "H1g8Iq1Z5S", "BkxqXb7TFr", "iclr_2020_rkgAb1Btvr", "HJeo4VSsDB", "rJxZt2o5wr", "HkxSZjucwB", "iclr_2020_rkgAb1Btvr" ]
iclr_2020_ryxC-kBYDS
Gaussian Conditional Random Fields for Classification
In this paper, a Gaussian conditional random field model for structured binary classification (GCRFBC) is proposed. The model is applicable to classification problems with undirected graphs, intractable for standard classification CRFs. The model representation of GCRFBC is extended by latent variables which yield some appealing properties. Thanks to the GCRF latent structure, the model becomes tractable, efficient, and open to improvements previously applied to GCRF regression. Two different forms of the algorithm are presented: GCRFBCb (GCRFBC - Bayesian) and GCRFBCnb (GCRFBC - non-Bayesian). The extended method of local variational approximation of the sigmoid function is used for solving empirical Bayes in the GCRFBCb variant, whereas the MAP value of latent variables is the basis for learning and inference in the GCRFBCnb variant. The inference in GCRFBCb is solved by Newton-Cotes formulas for one-dimensional integration. Both models are evaluated on synthetic data and real-world data. It was shown that both models achieve better prediction performance than relevant baselines. Advantages and disadvantages of the proposed models are discussed.
reject
Main content: Blind review #2 summarizes it well: The authors provide a method to modify GRFs to be used for classification. The idea is simple and easy to get through, and the writing is clean. The method boils down to using a latent variable that acts as a "pseudo-regressor" that is passed through a sigmoid for classification. The authors then discuss learning and inference in the proposed model, and propose two different variants that differ on scalability and a bit on performance as well. The idea of using the \xi transformation for the lower bound of the sigmoid was interesting to me -- since I have not seen it before, it's possible it's commonly used in the field, and hopefully the other reviewers can talk more about the novelty here. The empirical results are very promising, which is the main reason I vote for weak acceptance. I think the paper has value, albeit I would say it's a bit weak on novelty, and I am not 100% convinced about this conference being the right fit for this paper. The authors augment MRFs for classification and evaluate and present the results well. -- Discussion: As blind review #1 points out: Even from the experiments (including the new traffic one), it is unclear how much better the method is, both because we don't know if the improvements are statistically significant and because in many of the results, unstructured models like RF or logistic regression are very competitive, casting some doubt on whether these datasets were well suited for structured prediction. -- This paper is a desk reject: as review #2 points out, anonymity was broken by the inclusion of a code link that reveals the authorship, which is true, as a simple search on the GitHub user "andrijaster" immediately brings us to https://arxiv.org/pdf/1902.00045.pdf which is a draft of this submission showing all author names.
train
[ "H1evo4FcjS", "B1glOEKcjH", "HkxMCAOqjH", "ryl7tCdqoS", "r1gpwXPsYB", "B1lu-0I2Fr", "HkxXn-jJ9B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks to all the reviewers for their helpful and constructive feedback. We have uploaded a new paper revision to address the comments and feedback:\n\n1. Added discussion on advantages and disadvantages of GCRFBC model (Introduction).\n2. Additional references and discussion concerning relevant references connect...
[ -1, -1, -1, -1, 1, 6, 6 ]
[ -1, -1, -1, -1, 4, 1, 3 ]
[ "iclr_2020_ryxC-kBYDS", "r1gpwXPsYB", "B1lu-0I2Fr", "HkxXn-jJ9B", "iclr_2020_ryxC-kBYDS", "iclr_2020_ryxC-kBYDS", "iclr_2020_ryxC-kBYDS" ]
iclr_2020_SklgfkSFPH
On PAC-Bayes Bounds for Deep Neural Networks using the Loss Curvature
We investigate whether it's possible to tighten PAC-Bayes bounds for deep neural networks by utilizing the Hessian of the training loss at the minimum. For the case of Gaussian priors and posteriors we introduce a Hessian-based method to obtain tighter PAC-Bayes bounds that relies on closed form solutions of layerwise subproblems. We thus avoid commonly used variational inference techniques which can be difficult to implement and time consuming for modern deep architectures. We conduct a theoretical analysis that links the random initialization, minimum, and curvature at the minimum of a deep neural network to limits on what is provable about generalization through PAC-Bayes. Through careful experiments we validate our theoretical predictions and analyze the influence of the prior mean, prior covariance, posterior mean and posterior covariance on obtaining tighter bounds.
reject
The paper computes an "approximate" generalization bound based on loss curvature. Several expert reviewers found a long list of issues, including missing related work and a sloppy mix of formal statements and heuristics, without proper accounting of what could be gleaned from so many heuristic steps. Ultimately, the paper needs to be rewritten and re-reviewed.
train
[ "HylamSZnor", "BkxxJuKtiB", "HygxQ1qFiS", "rJehjrPusS", "SkgRRv3lor", "SJl3nwnljr", "SkgoVP3lor", "Bkg_WwneoS", "BkgdJN2xor", "SJgRwH_loB", "HklGiy_gsr", "rygvRwUgjH", "BygyoQSKYB", "BkeaQvJptS", "SygqmIu4qS", "SyxIC7Le5r", "HkgKHTkCKS" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "The authors have changed significantly the abstract, introduction and contributions sections of the submission, addressing a number of reviewer concerns. In the revised text we highlight that we make a number of approximations to the original PAC-Bayes objective and derive some formal results for this approximatio...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, -1, -1 ]
[ "rygvRwUgjH", "rJehjrPusS", "BkxxJuKtiB", "BkgdJN2xor", "BygyoQSKYB", "BygyoQSKYB", "iclr_2020_SklgfkSFPH", "iclr_2020_SklgfkSFPH", "BygyoQSKYB", "BkeaQvJptS", "SygqmIu4qS", "iclr_2020_SklgfkSFPH", "iclr_2020_SklgfkSFPH", "iclr_2020_SklgfkSFPH", "iclr_2020_SklgfkSFPH", "HkgKHTkCKS", ...
iclr_2020_Hkxbz1HKvr
Learning Key Steps to Attack Deep Reinforcement Learning Agents
Deep reinforcement learning agents are known to be vulnerable to adversarial attacks. In particular, recent studies have shown that attacking a few key steps is effective for decreasing the agent's cumulative reward. However, all existing attacking methods find those key steps with human-designed heuristics, and it is not clear how more effective key steps can be identified. This paper introduces a novel reinforcement learning framework that learns more effective key steps through interacting with the agent. The proposed framework does not require any human heuristics nor knowledge, and can be flexibly coupled with any white-box or black-box adversarial attack scenarios. Experiments on benchmark Atari games across different scenarios demonstrate that the proposed framework is superior to existing methods for identifying more effective key steps.
reject
This paper considers adversarial attacks in deep reinforcement learning, and specifically focuses on the problem of identifying key steps to attack. The paper poses learning these key steps as an RL problem with a cost for the attacker choosing to attack. The reviewers agreed that this was an interesting problem setup, and the ability to learn these attacks without heuristics is promising. The main concern, which the reviewers felt was not adequately addressed in the rebuttals, was that the results need to be more than just competitive with heuristic approaches. The fact that the attack ratio cannot be reliably changed, even with varying $\lambda$, still presents a major hurdle in the evaluation of the proposed method. For the aforementioned reasons, I recommend rejecting this paper.
train
[ "SJehO3OJ9r", "BJxeMwmioH", "S1lULw7jir", "HygBOdmsjr", "Syet6D7ijH", "HkeJyj8wtB", "SJgW8qeTYB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to learn the ‘key-steps’ at which to to apply an adversarial attack on a reinforcement learning agent. The framing of this problem is a Lagrangian relaxation of a constrained minimization problem which takes the form of an RL problem itself, where the attacking agent’s reward is the negative re...
[ 3, -1, -1, -1, -1, 3, 1 ]
[ 3, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2020_Hkxbz1HKvr", "iclr_2020_Hkxbz1HKvr", "SJehO3OJ9r", "HkeJyj8wtB", "SJgW8qeTYB", "iclr_2020_Hkxbz1HKvr", "iclr_2020_Hkxbz1HKvr" ]
iclr_2020_Skl-fyHKPH
A Mean-Field Theory for Kernel Alignment with Random Features in Generative Adverserial Networks
We propose a novel supervised learning method to optimize the kernel in maximum mean discrepancy generative adversarial networks (MMD GANs). Specifically, we characterize a distributionally robust optimization problem to compute a good distribution for the random feature model of Rahimi and Recht to approximate a good kernel function. Because the distributional optimization is infinite dimensional, we consider a Monte-Carlo sample average approximation (SAA) to obtain a more tractable finite dimensional optimization problem. We subsequently leverage a particle stochastic gradient descent (SGD) method to solve the resulting finite dimensional optimization problem. Based on a mean-field analysis, we then prove that the empirical distribution of the interacting particle system at each iteration of the SGD follows the path of the gradient descent flow on the Wasserstein manifold. We also establish the non-asymptotic consistency of the finite sample estimator. Our empirical evaluation on a synthetic data set as well as the MNIST and CIFAR-10 benchmark data sets indicates that our proposed MMD GAN model with kernel learning indeed attains higher inception scores as well as Fréchet inception distances and generates better images compared to the generative moment matching network (GMMN) and MMD GAN with untrained kernels.
reject
This paper was assessed by three reviewers who scored it as 6/1/6. The main criticisms included somewhat weak experiments due to the manual tuning of the bandwidth, the use of old (and perhaps mostly solved/not challenging) datasets such as MNIST and CIFAR-10, and the lack of ablation studies. The other issue voiced in the reviews is that the proposed method is very close to an MMD-GAN with a kernel plus random features. Taking into account all positives and negatives, we regret to conclude that this submission falls short of the quality required by ICLR 2020 and thus cannot be accepted at this time.
train
[ "SJx5gjF2FB", "BylvawhsiB", "BkxQkynosr", "B1xUXK3isr", "B1lbCgh6KB", "Syx3pUyf5S" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to learn a kernel for training MMD-GAN by optimizing over the probability distribution that defines the kernel by means of random features. This is unlike the usual setting of MMD-GAN where the kernel is parametrized by composing a fixed top-kernel with a discriminator network that is optimized...
[ 1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, 3, 4 ]
[ "iclr_2020_Skl-fyHKPH", "SJx5gjF2FB", "Syx3pUyf5S", "B1lbCgh6KB", "iclr_2020_Skl-fyHKPH", "iclr_2020_Skl-fyHKPH" ]
iclr_2020_rJgffkSFPS
Multi-objective Neural Architecture Search via Predictive Network Performance Optimization
Neural Architecture Search (NAS) has shown great potential in finding better neural network designs than human design. Sample-based NAS is the most fundamental method, aiming at exploring the search space and evaluating the most promising architectures. However, few works have focused on improving the sampling efficiency for multi-objective NAS. Inspired by the graph structure of a neural network, we propose BOGCN-NAS, a NAS algorithm using Bayesian Optimization with a Graph Convolutional Network (GCN) predictor. Specifically, we apply a GCN as a surrogate model to adaptively discover and incorporate node structure to approximate the performance of an architecture. For NAS-oriented tasks, we also design a weighted loss focusing on architectures with high performance. Our method further considers an efficient multi-objective search which can be flexibly injected into any sample-based NAS pipeline to efficiently find the best speed/accuracy trade-off. Extensive experiments are conducted to verify the effectiveness of our method over many competing methods, e.g. 128.4x more efficient than Random Search and 7.8x more efficient than the previous SOTA LaNAS for finding the best architecture on the largest NAS dataset, NAS-Bench-101.
reject
This paper proposes to use Graph Convolutional Networks (GCNs) in Bayesian optimization for neural architecture search. While the paper title includes multi-objective, this component appears to only be a posthoc evaluation of the Pareto front of networks evaluated using a single-objective search -- this could be performed for any method that evaluates more than one network. Performance on NAS-Bench-101 appears to be very good. In the private discussion of reviewers and AC, several issues were raised, including whether the approach is compared fairly to LaNAS and whether the GCN will predict well for large search spaces. Also, unfortunately, no code is provided, making it unclear whether the work is reproducible. The reviewers unanimously agreed on a weak rejection score. I concur with this assessment and therefore recommend rejection.
test
[ "HklMcrr0YB", "B1xCIu0jjr", "SJxkCG0siS", "SylorXRjor", "HkexsmCsoH", "SylLqFmiYB", "S1eGfMLzcH", "BJxK3oT85B", "r1l5Hf5ptH", "H1xAiuEB5H", "ByxRlhQrcr", "Hyxav_e0FB", "BJlWsqrpKS", "HklXcUF7dr", "SkeABfSpYr", "rJgzqjHn_S", "rkgJFBgZuS" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "public", "author", "author", "public", "public", "public" ]
[ "This paper proposed BOGCN-NAS that encodes current architecture with Graph convolutional network (GCN) and uses the feature extracted from GCN as the input to perform a Bayesian regression (predicting bias and variance, See Eqn. 5-6). They use Bayesian Optimization to pick the most promising next model with Expect...
[ 3, -1, -1, -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_rJgffkSFPS", "HklMcrr0YB", "S1eGfMLzcH", "S1eGfMLzcH", "SylLqFmiYB", "iclr_2020_rJgffkSFPS", "iclr_2020_rJgffkSFPS", "H1xAiuEB5H", "SkeABfSpYr", "ByxRlhQrcr", "Hyxav_e0FB", "r1l5Hf5ptH", "rJgzqjHn_S", "rkgJFBgZuS", "iclr_2020_rJgffkSFPS", "iclr_2020_rJgffkSFPS", "iclr_2020...
iclr_2020_SJeQGJrKwH
DS-VIC: Unsupervised Discovery of Decision States for Transfer in RL
We learn to identify decision states, namely the parsimonious set of states where decisions meaningfully affect the future states an agent can reach in an environment. We utilize the VIC framework, which maximizes an agent’s `empowerment’, i.e., the ability to reliably reach a diverse set of states -- and formulate a sandwich bound on the empowerment objective that allows identification of decision states. Unlike previous work, our decision states are discovered without extrinsic rewards -- simply by interacting with the world. Our results show that our decision states are: 1) often interpretable, and 2) lead to better exploration on downstream goal-driven tasks in partially observable environments.
reject
This work is interesting because its aim is to push the work in intrinsic motivation towards crisp definitions, and thus reads like an algorithmic paper rather than yet another reward heuristic and system building paper. There is some nice theory here, integration with options, and clear connections to existing work. However, the paper is not ready for publication. There were several issues that could not be resolved in the reviewers' minds (even after the author response and extensive discussion). The primary issues were: (1) There was significant confusion around the beta sensitivity---figs 6, 7, 8 appear misleading or at least contradictory to the message of the paper. (2) The need for x,y env states. (3) Several reviewers found the decision states unintuitive and the focus of the quantitative analysis confusing, given that the authors' primary focus is transfer performance. (4) All reviewers found the experiments lacking. Overall, the results generally don't support the claims of the paper, and there are too many missing details and odd empirical choices. Again, there was extensive discussion because all agreed this is an interesting line of work. Taking the reviewers' excellent suggestions on board will almost certainly result in an excellent paper. Keep going!
val
[ "SJxf_1_9nr", "SkxAp2npYr", "B1lexwJ2jB", "BkxEtLJnoH", "rygx6BJ2sS", "Skgk7BkhjB", "Skxb3Ek3iH", "BJlvPXJhsH", "BJgGuIGTKH", "rkebgjk0Fr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a mechanism for identifying decision states, even on previously unseen tasks. Decision states are states from which the option taken has high mutual information with the final state of that option, but low mutual information with the action at a time-step, given the current state. An intrinsic ...
[ 3, 3, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 1, 5, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_SJeQGJrKwH", "iclr_2020_SJeQGJrKwH", "BkxEtLJnoH", "rkebgjk0Fr", "SkxAp2npYr", "Skxb3Ek3iH", "BJgGuIGTKH", "iclr_2020_SJeQGJrKwH", "iclr_2020_SJeQGJrKwH", "iclr_2020_SJeQGJrKwH" ]
iclr_2020_r1e4MkSFDr
Continuous Convolutional Neural Network for Nonuniform Time Series
Convolutional neural network (CNN) for time series data implicitly assumes that the data are uniformly sampled, whereas many event-based and multi-modal data are nonuniform or have heterogeneous sampling rates. Directly applying regular CNN to nonuniform time series is ungrounded, because it is unable to recognize and extract common patterns from the nonuniform input signals. Converting the nonuniform time series to uniform ones by interpolation preserves the pattern extraction capability of CNN, but the interpolation kernels are often preset and may be unsuitable for the data or tasks. In this paper, we propose the Continuous CNN (CCNN), which estimates the inherent continuous inputs by interpolation, and performs continuous convolution on the continuous input. The interpolation and convolution kernels are learned in an end-to-end manner, and are able to learn useful patterns despite the nonuniform sampling rate. Besides, CCNN is a strict generalization of CNN. Results of several experiments verify that CCNN achieves a better performance on nonuniform data, and learns meaningful continuous kernels.
reject
This paper presents a continuous CNN model that can handle nonuniform time series data. It learns the interpolation kernel and convolutional architectures in an end-to-end manner, which is shown to achieve higher performance compared to naïve baselines. All reviewers scored Weak Reject and there was no strong opinion to support the paper during discussion. Although I felt some of the reviewers' comments are missing the points, I generally agree that the novelty of the method is rather straightforward and incremental, and that the experimental evaluation is not convincing enough. Particularly, comparison with more recent state-of-the-art point process methods should be included. For example, [1-3] claim better performance than RMTPP. Considering that the contribution of the paper is more on the empirical side and CCNN is not the only solution for handling nonuniform time series data, I think this point should be properly addressed and discussed. Based on these reasons, I'd like to recommend rejection. [1] Xiao et al., Modeling the Intensity Function of Point Process via Recurrent Neural Networks, AAAI 2017. [2] Li et al., Learning Temporal Point Processes via Reinforcement Learning, NIPS 2018. [3] Turkmen et al., FastPoint: Scalable Deep Point Processes, ECML-PKDD 2019.
test
[ "rJxnj7B3iH", "HylfyXS2or", "BJxjNMrhjB", "r1lPKXh6KB", "BkeIvRLRYH", "SylsLane9H" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your feedback! Below is our response to your concern.\n\n1. Experimental v.s. theoretical \nIt’s hard to agree with the claim of the reviewer that a paper is experimental. If the reviewer means particularly this paper is purely theoretical, I would apologize for not emphasizing enough to let the revi...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 3, 3, 4 ]
[ "r1lPKXh6KB", "BkeIvRLRYH", "SylsLane9H", "iclr_2020_r1e4MkSFDr", "iclr_2020_r1e4MkSFDr", "iclr_2020_r1e4MkSFDr" ]
iclr_2020_rkxVz1HKwB
Certifiably Robust Interpretation in Deep Learning
Deep learning interpretation is essential to explain the reasoning behind model predictions. Understanding the robustness of interpretation methods is important especially in sensitive domains such as medical applications since interpretation results are often used in downstream tasks. Although gradient-based saliency maps are popular methods for deep learning interpretation, recent works show that they can be vulnerable to adversarial attacks. In this paper, we address this problem and provide a certifiable defense method for deep learning interpretation. We show that a sparsified version of the popular SmoothGrad method, which computes the average saliency maps over random perturbations of the input, is certifiably robust against adversarial perturbations. We obtain this result by extending recent bounds for certifiably robust smooth classifiers to the interpretation setting. Experiments on ImageNet samples validate our theory.
reject
This paper discusses new methods to perform adversarial attacks on salience maps. In its current form, this paper has unfortunately not convinced several of the reviewers/commenters of the motivation behind proposing such a method. I tend to share the same opinion. I would encourage the authors to re-think the motivation of the work, and if there are indeed solid use cases, to express them explicitly in the next version of the paper.
test
[ "Hkxsru03FB", "rJe_xgpssr", "BkeotR3osr", "H1e7_anior", "HkeO62hooB", "Hyenpjhior", "H1guSchjjS", "r1epdz-RKr", "SJlg24cQcr", "S1ezL4_05H", "H1e5BxVC5r", "HkgezgVA5H" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "This paper introduces an extension of Cohen et al. (2019)’s result that allows one to derive robustness certificates for interpretation methods, as well as a bound on the top-K overlap of saliency methods. These results motivate the introduction of Sparsified SmoothGrad and a relaxation of this method that has dif...
[ 3, -1, -1, -1, -1, -1, -1, 1, 3, -1, -1, -1 ]
[ 1, -1, -1, -1, -1, -1, -1, 5, 3, -1, -1, -1 ]
[ "iclr_2020_rkxVz1HKwB", "Hyenpjhior", "HkgezgVA5H", "H1e5BxVC5r", "Hkxsru03FB", "r1epdz-RKr", "SJlg24cQcr", "iclr_2020_rkxVz1HKwB", "iclr_2020_rkxVz1HKwB", "H1e5BxVC5r", "HkgezgVA5H", "iclr_2020_rkxVz1HKwB" ]
iclr_2020_HJlHzJBFwB
Accelerating Monte Carlo Bayesian Inference via Approximating Predictive Uncertainty over the Simplex
Estimating the predictive uncertainty of a Bayesian learning model is critical in various decision-making problems, e.g., reinforcement learning, detecting adversarial attacks, and self-driving cars. As the model posterior is almost always intractable, most efforts were made on finding an accurate approximation to the true posterior. Even though a decent estimation of the model posterior is obtained, another approximation is required to compute the predictive distribution over the desired output. A common accurate solution is to use Monte Carlo (MC) integration. However, it needs to maintain a large number of samples, evaluate the model repeatedly and average multiple model outputs. In many real-world cases, this is computationally prohibitive. In this work, assuming that the exact posterior or a decent approximation is obtained, we propose a generic framework to approximate the output probability distribution induced by the model posterior with a parameterized model and in an amortized fashion. The aim is to approximate the true uncertainty of a specific Bayesian model, meanwhile alleviating the heavy workload of MC integration at testing time. The proposed method is universally applicable to Bayesian classification models that allow for posterior sampling. Theoretically, we show that the idea of amortization incurs no additional costs on approximation performance. Empirical results validate the strong practical performance of our approach.
reject
This paper proposes to speed up Bayesian deep learning at test time by training a student network to approximate the BNN's output distribution. The idea is certainly a reasonable thing to try, and the writing is mostly good (though as some reviewers point out, certain sections might not be necessary). The idea is fairly obvious, though, so the question is whether the experimental results are impressive enough by themselves to justify acceptance. The method is able to get close to the performance achieved by Monte Carlo estimators with much lower cost, although there is a nontrivial drop in accuracy. This is probably worth paying if it achieves 500x computation reduction as claimed in the paper, though the practical gains are probably much smaller since Monte Carlo methods are rarely used with 500 samples. Overall, this seems a bit below the bar for ICLR.
train
[ "SJx1BMq5FB", "BkxPM6usjr", "S1xACUAQir", "S1x-L40Xjr", "BylIHICmor", "HylknWCmiB", "rylxzAp7jS", "Skg2gUHcFr", "SJgNjT9W9B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank the authors for your detailed rebuttal. I agree with the authors that the proposed method acts as a useful tool for \"real-time evaluation of induced predictive uncertainty\", and the experiments also validate that the method indeed achieves comparable performance with smaller computations. But for now, I am...
[ 3, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_HJlHzJBFwB", "iclr_2020_HJlHzJBFwB", "BylIHICmor", "HylknWCmiB", "Skg2gUHcFr", "SJx1BMq5FB", "SJgNjT9W9B", "iclr_2020_HJlHzJBFwB", "iclr_2020_HJlHzJBFwB" ]
iclr_2020_S1xSzyrYDB
Cyclic Graph Dynamic Multilayer Perceptron for Periodic Signals
We propose a feature extraction for periodic signals. Virtually every mechanized transportation vehicle, power generation, industrial machine, and robotic system contains rotating shafts. It is possible to collect data about periodicity by measuring a shaft’s rotation. However, it is difficult to perfectly control the collection timing of the measurements. Imprecise timing creates phase shifts in the resulting data. Although a phase shift does not materially affect the measurement of any given data point collected, it does alter the order in which all of the points are collected. It is difficult for classical methods, like multi-layer perceptron, to identify or quantify these alterations because they depend on the order of the input vectors’ components. This paper proposes a robust method for extracting features from phase shift data by adding a graph structure to each data point and constructing a suitable machine learning architecture for graph data with cyclic permutation. Simulation and experimental results illustrate its effectiveness.
reject
The reviewers all appreciated the area explored by this work but there was a consensus that it lacked a thorough presentation of existing works, as well as relevant baselines. I encourage the authors to better position their work with respect to the existing literature for what should be a stronger submission for a future conference.
train
[ "BkliLUPDqS", "S1g1uWIO9r", "HklFYLqOqr", "SkeTY92ocH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a novel architecture for extracting features for periodic signals that is sample efficient and has superior performance than previous approaches. The proposed method is based on a graph architecture that takes into account the ordering of the vertices, contrary to standard GNNs. In order to ext...
[ 3, 6, 3, 3 ]
[ 3, 1, 1, 1 ]
[ "iclr_2020_S1xSzyrYDB", "iclr_2020_S1xSzyrYDB", "iclr_2020_S1xSzyrYDB", "iclr_2020_S1xSzyrYDB" ]
iclr_2020_rJeIGkBKPS
Improving Confident-Classifiers For Out-of-distribution Detection
Discriminatively trained neural classifiers can be trusted only when the input data comes from the training distribution (in-distribution). Therefore, detecting out-of-distribution (OOD) samples is very important to avoid classification errors. In the context of OOD detection for image classification, one of the recent approaches proposes training a classifier called “confident-classifier” by minimizing the standard cross-entropy loss on in-distribution samples and minimizing the KL divergence between the predictive distribution of OOD samples in the low-density “boundary” of in-distribution and the uniform distribution (maximizing the entropy of the outputs). Thus, the samples could be detected as OOD if they have low confidence or high entropy. In this paper, we analyze this setting both theoretically and experimentally. We also propose a novel algorithm to generate the “boundary” OOD samples to train a classifier with an explicit “reject” class for OOD samples. We compare our approach against several recent classifier-based OOD detectors including the confident-classifiers on MNIST and Fashion-MNIST datasets. Overall, the proposed approach consistently performs better than others across most of the experiments.
reject
The paper improves the previous method for detecting out-of-distribution (OOD) samples. Some theoretical analysis/motivation is interesting, as pointed out by a reviewer. I think the paper is well written overall and has some potential. However, as all reviewers pointed out, I think the experimental results are well below the borderline for acceptance (considering the ICLR audience), i.e., the authors should consider non-MNIST-like and more realistic datasets. This indicates a limitation on the scalability of the proposed method. Hence, I recommend rejection.
train
[ "S1xdZMcnKS", "SJlnCEcaFS", "rklCnE7IsB", "HJeuOdXIiS", "Sylbmv78jB", "HJeHSCR-5r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Comments on rebuttal\n\nI don’t think that the authors made a valid argument to address my concerns about theoretical justification and experiments. As I mentioned in the review, the assumption and statements in the paper are not clear to me. Moreover, I think the authors should evaluate their methods on more real...
[ 3, 3, -1, -1, -1, 6 ]
[ 5, 5, -1, -1, -1, 4 ]
[ "iclr_2020_rJeIGkBKPS", "iclr_2020_rJeIGkBKPS", "HJeHSCR-5r", "S1xdZMcnKS", "SJlnCEcaFS", "iclr_2020_rJeIGkBKPS" ]
iclr_2020_H1eUz1rKPr
Representation Learning with Multisets
We study the problem of learning permutation invariant representations that can capture containment relations. We propose training a model on a novel task: predicting the size of the symmetric difference between pairs of multisets, sets which may contain multiple copies of the same object. With motivation from fuzzy set theory, we formulate both multiset representations and how to predict symmetric difference sizes given these representations. We model multiset elements as vectors on the standard simplex and multisets as the summations of such vectors, and we predict symmetric difference as the l1-distance between multiset representations. We demonstrate that our representations more effectively predict the sizes of symmetric differences than DeepSets-based approaches with unconstrained object representations. Furthermore, we demonstrate that the model learns meaningful representations, mapping objects of different classes to different standard basis vectors.
reject
While the reviewers appreciated the problem to learn a multiset representation, two reviewers found the technical contribution to be minor, as well as limited experiments. The rebuttal and revision addressed concerns about the motivation of the approach, but the experimental issues remain. The paper would likely substantially improve with additional experiments.
train
[ "Byei_1i2oB", "HyldCNs2sH", "BJlr1ejnor", "B1x15XL7Yr", "r1gycv53Kr", "SklYt2LaYH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thoughtful comments!\n\nWe have revised our work to address your concerns as much as possible.\n\nTo your first point, we agree that our definitions of multiset operations could be better motivated. We have added a formalization of the problem we are trying to solve and of multisets themselves, ...
[ -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, 1, 4, 3 ]
[ "r1gycv53Kr", "SklYt2LaYH", "B1x15XL7Yr", "iclr_2020_H1eUz1rKPr", "iclr_2020_H1eUz1rKPr", "iclr_2020_H1eUz1rKPr" ]
iclr_2020_HklvMJSYPB
Adaptive Adversarial Imitation Learning
We present the ADaptive Adversarial Imitation Learning (ADAIL) algorithm for learning adaptive policies that can be transferred between environments of varying dynamics, by imitating a small number of demonstrations collected from a single source domain. This problem is important in robotic learning because in real world scenarios 1) reward functions are hard to obtain, 2) learned policies from one domain are difficult to deploy in another due to varying source to target domain statistics, 3) collecting expert demonstrations in multiple environments where the dynamics are known and controlled is often infeasible. We address these constraints by building upon recent advances in adversarial imitation learning; we condition our policy on a learned dynamics embedding and we employ a domain-adversarial loss to learn a dynamics-invariant discriminator. The effectiveness of our method is demonstrated on simulated control tasks with varying environment dynamics and the learned adaptive agent outperforms several recent baselines.
reject
This paper extends adversarial imitation learning to an adaptive setting where environment dynamics change frequently. The authors propose a novel approach with pragmatic design choices to address the challenges that arise in this setting. Several questions and requests for clarification were addressed during the reviewing phase. The paper remains borderline after the rebuttal. Remaining concerns include the size of the algorithmic or conceptual contribution of the paper.
train
[ "BkeFzAE2jH", "HJgnzoNosB", "SJg7htEioB", "ByeGcKNsor", "SkewZK4jiB", "rkgA3d4oiB", "SygreDEosB", "BkgSu9lhYS", "SkxqREopYS", "BJlqXqoptB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I think that the presentation has been improved noticeably. I need to take a closer look on the current submission to make up my mind on whether to argue for acceptance or not. Until then, I'll keep the initial rating.", "We would like to thank the reviewers for their time and valuable feedback. We have made the...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "SkewZK4jiB", "iclr_2020_HklvMJSYPB", "ByeGcKNsor", "BJlqXqoptB", "rkgA3d4oiB", "SkxqREopYS", "BkgSu9lhYS", "iclr_2020_HklvMJSYPB", "iclr_2020_HklvMJSYPB", "iclr_2020_HklvMJSYPB" ]
iclr_2020_BkeDGJBKvB
Multitask Soft Option Learning
We present Multitask Soft Option Learning (MSOL), a hierarchical multi-task framework based on Planning-as-Inference. MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior. The learned soft-options are temporally extended, allowing a higher-level master policy to train faster on new tasks by making decisions with lower frequency. Additionally, MSOL allows fine-tuning of soft-options for new tasks without unlearning previously useful behavior, and avoids problems with local minima in multitask training. We demonstrate empirically that MSOL significantly outperforms both hierarchical and flat transfer-learning baselines in challenging multi-task environments.
reject
Apologies for only receiving two reviews. R2 gave a WR and R3 gave an A. Given the lack of 3rd review and split nature of the scores, the AC has closely scrutinized the paper/reviews/comments/rebuttal. Thoughts: - Paper is on interesting topic. - AC agrees with R2's concern about the evaluation not using more complex environments like Mujoco. Without evaluation on a standard benchmark, it is difficult to know objectively if the approach works. - AC agrees with authors that the DISTRAL approach forms a strong baseline. - Nevertheless, the experiments aren't super compelling either. - AC has some concerns about scaling issues w.r.t. model size & #tasks. The paper is very borderline, but the AC sides with R2's concerns and unfortunately feels the paper cannot be accepted without a stronger evaluation. With this, it would make a compelling paper.
train
[ "Bkgbl8_niH", "BJeiYdShiB", "BJx8fVtqiH", "HJxXf8mOjB", "HkgLRS7dir", "S1xF8IXHtH", "H1li36yCYS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. \n\n>Nonetheless, while we agree in principle with your observation that additional high-dimensional experiments would further strengthen our claims, we observe that practically, the amount of compute required to be able to do is beyond our abilities.\n\nWould it be possible to comment...
[ -1, -1, -1, -1, -1, 8, 3 ]
[ -1, -1, -1, -1, -1, 3, 4 ]
[ "BJeiYdShiB", "BJx8fVtqiH", "HkgLRS7dir", "S1xF8IXHtH", "H1li36yCYS", "iclr_2020_BkeDGJBKvB", "iclr_2020_BkeDGJBKvB" ]
iclr_2020_BJe_z1HFPr
Resizable Neural Networks
In this paper, we present a deep convolutional neural network (CNN) which performs arbitrary resize operations on intermediate feature map resolution at stage-level. Motivated by the weight sharing mechanism in neural architecture search, where a super-network is trained and sub-networks inherit the weights from the super-network, we present a novel CNN approach. We construct a spatial super-network which consists of multiple sub-networks, where each sub-network is a single-scale network that obtains a unique spatial configuration; the convolutional layers are shared across all sub-networks. Such networks, named Resizable Neural Networks, are equivalent to training infinitely many single-scale networks, but incur no extra computational cost. Moreover, we present a training algorithm such that all sub-networks achieve better performance than their individually trained counterparts. On large-scale ImageNet classification, we demonstrate its effectiveness on various modern network architectures such as MobileNet, ShuffleNet, and ResNet. To go even further, we present three variants of resizable networks: 1) Resizable as Architecture Search (Resizable-NAS). On ImageNet, Resizable-NAS ResNet-50 attains 0.4% higher accuracy while being 44% smaller than the baseline model. 2) Resizable as Data Augmentation (Resizable-Aug). When we use resizable networks as a data augmentation technique, they obtain superior performance on ImageNet classification, outperforming AutoAugment by 1.2% with ResNet-50. 3) Adaptive Resizable Network (Resizable-Adapt). We introduce adaptive resizable networks as dynamic networks, which further improve performance with less computational cost via data-dependent inference.
reject
This paper offers likely novel schemes for image resizing. The performance improvement is clear. Unfortunately, two reviewers find substantial clarity issues in the manuscript after revision, and the AC concurs that this is still an issue. The paper is borderline but, given the number of higher-ranked papers in the pool, unfortunately cannot be accepted.
val
[ "BygNawPTYS", "HylWQ0t2ir", "ByxPE3YniH", "SJeSE9YnjB", "Syl6fYR_OH", "HJeQ86Ug9r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes Resizable Neural Networks, which trains networks with different resolution scalings at the same time with shared weights. It serves as data-augmentation and improves accuracy over base networks. Additionally, the same technique can perform an architecture search. Experimental results show signi...
[ 3, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, 4, 5 ]
[ "iclr_2020_BJe_z1HFPr", "Syl6fYR_OH", "BygNawPTYS", "HJeQ86Ug9r", "iclr_2020_BJe_z1HFPr", "iclr_2020_BJe_z1HFPr" ]
iclr_2020_BkgOM1rKvr
The Surprising Behavior Of Graph Neural Networks
We highlight a lack of understanding of the behaviour of Graph Neural Networks (GNNs) in various topological contexts. We present 4 experimental studies which counter-intuitively demonstrate that the performance of GNNs is weakly dependent on the topology, sensitive to structural noise and the modality (attributes or edges) of information, and degraded by strong coupling between nodal attributes and structure. We draw on the empirical results to recommend reporting of topological context in GNN evaluation and propose a simple (attribute-structure) decoupling method to improve GNN performance.
reject
The paper empirically investigates the behaviour of graph neural networks, as a function of topology, structural noise, and coupling between nodal attributes and structure. While the paper is interesting, reviewers in general felt that the presentation lacked clarity and aspects of the experiments were hard to interpret. The authors are encouraged to continue with this work, accounting for reviewer comments in subsequent versions.
train
[ "HJl0Hm52jS", "Hke2YVqnir", "BylKNV53jH", "H1xRWXq3sr", "ryeDNdsitr", "rkxfWpoaYH", "H1lbDwNCtS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your discerning review. We hope to have addressed most of your concerns:\n-We have added an explanation of different aspects of a complex networks in the dataset’s section\n-We have incorporated more comprehensive figure captions\n-The discussion and conclusion sections present the improvement our ex...
[ -1, -1, -1, -1, 6, 3, 1 ]
[ -1, -1, -1, -1, 3, 1, 4 ]
[ "ryeDNdsitr", "iclr_2020_BkgOM1rKvr", "rkxfWpoaYH", "H1lbDwNCtS", "iclr_2020_BkgOM1rKvr", "iclr_2020_BkgOM1rKvr", "iclr_2020_BkgOM1rKvr" ]
iclr_2020_H1e5GJBtDr
Axial Attention in Multidimensional Transformers
Self-attention effectively captures large receptive fields with high information bandwidth, but its computational resource requirements grow quadratically with the number of points over which attention is performed. For data arranged as large multidimensional tensors, such as images and videos, the quadratic growth makes self-attention prohibitively expensive. These tensors often have thousands of positions that one wishes to capture and proposed attentional alternatives either limit the resulting receptive field or require custom subroutines. We propose Axial Attention, a simple generalization of self-attention that naturally aligns with the multiple dimensions of the tensors in both the encoding and the decoding settings. The Axial Transformer uses axial self-attention layers and a shift operation to efficiently build large and full receptive fields. Notably the proposed structure of the layers allows for the vast majority of the context to be computed in parallel during decoding without introducing any independence assumptions. This semi-parallel structure goes a long way to making decoding from even a very large Axial Transformer broadly applicable. We demonstrate state-of-the-art results for the Axial Transformer on the ImageNet-32 and ImageNet-64 image benchmarks as well as on the BAIR Robotic Pushing video benchmark. We open source the implementation of Axial Transformers.
reject
This paper proposes a self-attention-based autoregressive model called Axial Transformers for images and other data organized as high-dimensional tensors. Axial Attention is applied within each axis of the data to accelerate processing. The authors claim that the main idea behind Axial Attention is widely applicable and can be used in many core vision tasks, such as detection and classification. However, the revision fails to provide more applications of Axial Attention. Overall, the idea behind this paper is interesting but more convincing experimental results are needed.
train
[ "HkgnTM9hir", "Bkp7J9Mor", "S1l61fO7jS", "SyeFnbO7jB", "r1lycyCMsH", "H1ezBTaziH", "SkgThTazjS", "S1lPeozCYH", "ryeuXv839B", "BylznUt3qr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers, thank you for your comments. We have uploaded a revised version of our paper incorporating your feedback. Specifically:\n\n- We are now more explicit about the scope of our paper and its intended contribution. Our work is about autoregressive modeling for images and other data organized as multidim...
[ -1, 1, -1, -1, -1, -1, -1, 6, 1, 3 ]
[ -1, 5, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "iclr_2020_H1e5GJBtDr", "iclr_2020_H1e5GJBtDr", "SyeFnbO7jB", "H1ezBTaziH", "ryeuXv839B", "Bkp7J9Mor", "BylznUt3qr", "iclr_2020_H1e5GJBtDr", "iclr_2020_H1e5GJBtDr", "iclr_2020_H1e5GJBtDr" ]
iclr_2020_r1lczkHKPr
Off-policy Multi-step Q-learning
In the past few years, off-policy reinforcement learning methods have shown promising results in their application for robot control. Deep Q-learning, however, still suffers from poor data-efficiency which is limiting with regard to real-world applications. We follow the idea of multi-step TD-learning to enhance data-efficiency while remaining off-policy by proposing two novel Temporal-Difference formulations: (1) Truncated Q-functions which represent the return for the first n steps of a policy rollout and (2) Shifted Q-functions, acting as the farsighted return after this truncated rollout. We prove that the combination of these short- and long-term predictions is a representation of the full return, leading to the Composite Q-learning algorithm. We show the efficacy of Composite Q-learning in the tabular case and compare our approach in the function-approximation setting with TD3, Model-based Value Expansion and TD3(Delta), which we introduce as an off-policy variant of TD(Delta). We show on three simulated robot tasks that Composite TD3 outperforms TD3 as well as state-of-the-art off-policy multi-step approaches in terms of data-efficiency.
reject
The authors propose TD updates for Truncated Q-functions and Shifted Q-functions, reflecting short- and long-term predictions, respectively. They show that these can be combined to form an estimate of the full return, leading to a Composite Q-learning algorithm. They claim to demonstrate improved data-efficiency in the tabular setting and on three simulated robot tasks. All of the reviewers found the ideas in the paper interesting; however, based on the issues raised by Reviewer 3, everyone agreed that substantial revisions to the paper are necessary to properly incorporate the new results. As a result, I am recommending rejection for this submission at this time. I encourage the authors to incorporate the feedback from the reviewers, and believe that after that is done, the paper will be a strong submission.
train
[ "HkgVopXatr", "BJlkunYoKB", "HyxxejAhtS", "BJgr-K_hjH", "rJlDnDzooB", "BJlJWPPhjS", "S1xhsuI2jB", "HJlGOtH2ir", "B1x5KM73jB", "r1gEACM2iH", "r1gpMq6sjr", "SyxsEmLioS", "BygikvHjjr", "SylVwkXisH", "Byx7x8zojS", "HygzprGoiB", "HklysSGssr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes the Composite Q-learning algorithm, which combines the algorithmic ideas of using compositional TD methods to truncate the horizon of the return, as well as shift a return in time. They claim that this approach will improve the method's data efficiency relative to standard Q-learning. They demo...
[ 1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_r1lczkHKPr", "iclr_2020_r1lczkHKPr", "iclr_2020_r1lczkHKPr", "BJlJWPPhjS", "iclr_2020_r1lczkHKPr", "S1xhsuI2jB", "HJlGOtH2ir", "r1gEACM2iH", "r1gpMq6sjr", "SyxsEmLioS", "HygzprGoiB", "BygikvHjjr", "SylVwkXisH", "Byx7x8zojS", "HkgVopXatr", "HyxxejAhtS", "BJlkunYoKB" ]
iclr_2020_HJxnM1rFvr
HUBERT Untangles BERT to Improve Transfer across NLP Tasks
We introduce HUBERT which combines the structured-representational power of Tensor-Product Representations (TPRs) and BERT, a pre-trained bidirectional transformer language model. We validate the effectiveness of our model on the GLUE benchmark and HANS dataset. We also show that there is shared structure between different NLP datasets which HUBERT, but not BERT, is able to learn and leverage. Extensive transfer-learning experiments are conducted to confirm this proposition.
reject
The paper introduces additional layers on top of BERT-type models for disentangling semantic and positional information. The paper demonstrates (small) performance gains in transfer learning compared to a pure BERT baseline. Both reviewers and authors have engaged in a constructive discussion of the merits of the proposed method. Although the reviewers appreciate the ideas and parts of the paper, the consensus among the reviewers is that the evaluation of the method is not clear-cut enough to warrant publication. Rejection is therefore recommended. Given the good ideas presented in the paper and the promising results, the authors are encouraged to take the feedback into account and submit to the next ML conference.
train
[ "HkehmlJ2sr", "ryxJoJ13or", "rkla4k1hsB", "ByxT3ARijB", "B1l6scnPYr", "B1gqjnahKB", "rJgzyIYGqr", "BJgM--mYOS", "rJeL3yMZOS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "We would like to thank you for the comments and feedback.\n \nIn this work, we propose a new model combining the power of deep neural language models such as BERT with symbolic representations such as Tensor-Product Representations. To the best of our knowledge, this is the first work that examines implicit struct...
[ -1, -1, -1, -1, 1, 3, 3, -1, -1 ]
[ -1, -1, -1, -1, 3, 4, 5, -1, -1 ]
[ "B1l6scnPYr", "B1gqjnahKB", "rJgzyIYGqr", "iclr_2020_HJxnM1rFvr", "iclr_2020_HJxnM1rFvr", "iclr_2020_HJxnM1rFvr", "iclr_2020_HJxnM1rFvr", "rJeL3yMZOS", "iclr_2020_HJxnM1rFvr" ]
iclr_2020_rJgRMkrtDr
Learning Video Representations using Contrastive Bidirectional Transformer
This paper proposes a self-supervised learning approach for video features that results in significantly improved performance on downstream tasks (such as video classification, captioning and segmentation) compared to existing methods. Our method extends the BERT model for text sequences to the case of sequences of real-valued feature vectors, by replacing the softmax loss with noise contrastive estimation (NCE). We also show how to learn representations from sequences of visual features and sequences of words derived from ASR (automatic speech recognition), and show that such cross-modal training (when possible) helps even more.
reject
This paper studies self-supervised video representations with a multi-modal learning process that the authors then use for performance on a variety of tasks. The main contribution of the paper is a successful effort to incorporate BERT-like models into vision tasks. Reviewers acknowledged the extensive empirical evaluation and the good performance of the approach. However, they raised some concerns about the lack of clarity and the absence of analysis and interpretation of the results. The AC shares this view, and recommends rejection at this time, encouraging the authors to revise their work addressing these analysis and clarity questions.
train
[ "H1glc6Jsir", "BJxcPa1sjB", "B1g4yTJjsr", "ryg9-YehtB", "HJeDipXRtr", "Syl7oex0cB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your positive feedback.", "Thank you for your positive feedback.\n\nComparison to HowTo100M:\n\nAlthough we use the HowTo100M dataset for pre-training, there are key differences to (Miech, 2019c):\n1. Miech et al. improve text-video embedding by training on HowTo100M and show the gain by transferri...
[ -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, 4, 5, 3 ]
[ "HJeDipXRtr", "ryg9-YehtB", "Syl7oex0cB", "iclr_2020_rJgRMkrtDr", "iclr_2020_rJgRMkrtDr", "iclr_2020_rJgRMkrtDr" ]
iclr_2020_B1xRGkHYDS
A bi-diffusion based layer-wise sampling method for deep learning in large graphs
The Graph Convolutional Network (GCN) and its variants are powerful models for graph representation learning and have recently achieved great success in many graph-based applications. However, most of them target shallow models (e.g. 2 layers) on relatively small graphs. Although many acceleration methods have recently been developed for GCN training, it remains a severe challenge to scale GCN-like models to larger graphs and deeper layers due to the over-expansion of neighborhoods across layers. In this paper, to address the above challenge, we propose a novel layer-wise sampling strategy, which samples the nodes layer by layer conditionally based on the factors of the bi-directional diffusion between layers. In this way, we can restrict the time complexity to be linear in the number of layers, and construct a mini-batch of nodes with high local bi-directional influence (correlation). Further, we apply the self-attention mechanism to flexibly learn suitable weights for the sampled nodes, which allows the model to incorporate both the first-order and higher-order proximities during a single-layer propagation process without extra recursive propagation or skip connections. Extensive experiments on three large benchmark graphs demonstrate the effectiveness and efficiency of the proposed model.
reject
This paper addresses the challenge of time complexity in aggregating neighbourhood information in GCNs. As we aggregate information from larger hops (deeper neighbourhoods), the number of nodes can increase exponentially, thereby increasing time complexity. To overcome this, the authors propose a sampling method which samples nodes layer by layer based on bidirectional diffusion between layers. They demonstrate the effectiveness of their approach on 3 large benchmarks. While the ideas presented in the paper were interesting, the reviewers raised some concerns which I have summarised below: 1) Novelty: The reviewers felt that the techniques presented were not very novel and the approach is very similar to one existing work, as pointed out by R4. 2) Writing: The writing needs to be improved. The authors have already made an attempt towards this but it could be improved further. 3) Comparisons with baselines: R4 has raised some concerns about the settings/configurations used for the baseline methods. In particular, the results for the baseline methods are lower than those reported in the original papers. I have read the authors' rebuttal on this but I am not completely convinced by it. I would suggest that the authors address this issue in subsequent submissions. Based on the above reasons I recommend that the paper cannot be accepted.
train
[ "BJezS00aYH", "rkxhzovcor", "SkxlBGeDoH", "r1gaD7lPoS", "BkeqTflwor", "HyekiqqfiH", "H1eTO6VRYS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper was an interesting read. The idea of this paper is to challenge the use of Laplacian matrix in GCN. Indeed, typical GCNs use the same adjacency matrix across different layers. In particular, this typically leads in Euclidean case to learning isotropic filters (because the euclidean Laplacian is isotropi...
[ 6, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_B1xRGkHYDS", "r1gaD7lPoS", "BJezS00aYH", "HyekiqqfiH", "H1eTO6VRYS", "iclr_2020_B1xRGkHYDS", "iclr_2020_B1xRGkHYDS" ]
iclr_2020_rJlk71rYvH
Counterfactual Regularization for Model-Based Reinforcement Learning
In sequential tasks, planning-based agents have a number of advantages over model-free agents, including sample efficiency and interpretability. Recurrent action-conditional latent dynamics models trained from pixel-level observations have been shown to predict future observations conditioned on agent actions accurately enough for planning in some pixel-based control tasks. Typically, models of this type are trained to reconstruct sequences of ground-truth observations, given ground-truth actions. However, an action-conditional model can take input actions and states other than the ground truth, to generate predictions of unobserved counterfactual states. Because counterfactual state predictions are generated by differentiable networks, relationships among counterfactual states can be included in a training objective. We explore the possibilities of counterfactual regularization terms applicable during training of action-conditional sequence models. We evaluate their effect on pixel-level prediction accuracy and model-based agent performance, and we show that counterfactual regularization improves the performance of model-based agents in test-time environments that differ from training.
reject
I agree with the reviewers that this paper has serious limitations in the experimental evaluation.
train
[ "rkgu8hhqYr", "ryeNrBz3jB", "rylvNBMCFS", "Hylr1WGlcH" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents regularization techniques for model based reinforcement learning which attempt to build counterfactual reasoning into the model. In particular, they present auxiliary loss terms which can be used in \"what if\" scenarios where the actual state is unknown. Given certain assumptions, they show tha...
[ 3, -1, 3, 3 ]
[ 4, -1, 3, 3 ]
[ "iclr_2020_rJlk71rYvH", "iclr_2020_rJlk71rYvH", "iclr_2020_rJlk71rYvH", "iclr_2020_rJlk71rYvH" ]
iclr_2020_r1l1myStwr
Continuous Meta-Learning without Tasks
Meta-learning is a promising strategy for learning to efficiently learn within new tasks, using data gathered from a distribution of tasks. However, the meta-learning literature thus far has focused on the task segmented setting, where at train-time, offline data is assumed to be split according to the underlying task, and at test-time, the algorithms are optimized to learn in a single task. In this work, we enable the application of generic meta-learning algorithms to settings where this task segmentation is unavailable, such as continual online learning with a time-varying task. We present meta-learning via online changepoint analysis (MOCA), an approach which augments a meta-learning algorithm with a differentiable Bayesian changepoint detection scheme. The framework allows both training and testing directly on time series data without segmenting it into discrete tasks. We demonstrate the utility of this approach on a nonlinear meta-regression benchmark as well as two meta-image-classification benchmarks.
reject
In this paper, the authors view meta-learning under a general, less studied viewpoint, which does not make the typical assumption that task segmentation is provided. In this context, change-point analysis is used as a tool to complement meta-learning in this expanded domain. The extension of meta-learning to this more general and often more practical context is significant and the paper is generally well written. However, considering this particular (non)segmentation setting is not an entirely novel idea; for example, the reviewers have already pointed out [1] (which the authors agreed to discuss), but [2] is another relevant work. The authors are highly encouraged to incorporate results, or at least a discussion, with respect to at least [2]. It seems likely that inferring boundaries could be more powerful, but it is important to better motivate this for a final paper. Moreover, the paper could be strengthened by significantly expanding the discussion of the practical usefulness of the approach. R3 provides a suggestion towards this direction, that is, to explore the performance in a situation where task segmentation is truly unavailable. [1] Rahaf et al. "Task-Free Continual Learning". [2] Riemer et al. "Learning to learn without forgetting by maximizing transfer and minimizing interference".
train
[ "B1eczZEItS", "HkxCfJoOsS", "HkxnkyodjS", "S1xXa0quiB", "B1lUKCcOiS", "HyxY4QkRYr", "ByeuiSECKB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper considers the meta-learning in the task un-segmented setting and apply bayesian online change point detection with meta-learning. The task un-segmented is claimed to exist in real applications and the paper explains the idea in a clear way. \n\nMy major concerns and questions are the following:\n\n1) In ...
[ 3, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_r1l1myStwr", "iclr_2020_r1l1myStwr", "ByeuiSECKB", "HyxY4QkRYr", "B1eczZEItS", "iclr_2020_r1l1myStwr", "iclr_2020_r1l1myStwr" ]
iclr_2020_BkgeQ1BYwS
Implicit Generative Modeling for Efficient Exploration
Efficient exploration remains a challenging problem in reinforcement learning, especially for those tasks where rewards from environments are sparse. A commonly used approach for exploring such environments is to introduce some "intrinsic" reward. In this work, we focus on model uncertainty estimation as an intrinsic reward for efficient exploration. In particular, we introduce an implicit generative modeling approach to estimate a Bayesian uncertainty of the agent's belief of the environment dynamics. Each random draw from our generative model is a neural network that instantiates the dynamic function, hence multiple draws would approximate the posterior, and the variance in the future prediction based on this posterior is used as an intrinsic reward for exploration. We design a training algorithm for our generative model based on the amortized Stein Variational Gradient Descent. In experiments, we compare our implementation with state-of-the-art intrinsic reward-based exploration approaches, including two recent approaches based on an ensemble of dynamic models. In challenging exploration tasks, our implicit generative model consistently outperforms competing approaches regarding data efficiency in exploration.
reject
There is insufficient support to recommend accepting this paper. The authors provided detailed responses, but the reviewers unanimously kept their recommendation as reject. The novelty and significance of the main contribution was not made sufficiently clear, given the context of related work. Critically, the experimental evaluation was not considered to be convincing, lacking detailed explanation and justification, and a sufficiently thorough comparison to strong baselines, The submitted reviews should help the authors improve their paper.
val
[ "HJexYyOstB", "HkggAYgJcS", "rkeGci93oB", "B1ggPycusr", "rkluCCKdsH", "Bke0uCKuiS", "rJgBYahg5r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Update: I thank the authors for their response. I believe the paper has been improved by the additional baselines, number of seeds, clarifications to related work and qualitative analysis of the results. I have increased my score to 3 since I still have some concerns. I strongly believe the baselines should be tun...
[ 3, 3, -1, -1, -1, -1, 3 ]
[ 4, 1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_BkgeQ1BYwS", "iclr_2020_BkgeQ1BYwS", "iclr_2020_BkgeQ1BYwS", "HJexYyOstB", "HkggAYgJcS", "rJgBYahg5r", "iclr_2020_BkgeQ1BYwS" ]
iclr_2020_B1xgQkrYwS
On Iterative Neural Network Pruning, Reinitialization, and the Similarity of Masks
We examine how recently documented, fundamental phenomena in deep learning models subject to pruning are affected by changes in the pruning procedure. Specifically, we analyze differences in the connectivity structure and learning dynamics of pruned models found through a set of common iterative pruning techniques, to address questions of uniqueness of trainable, high-sparsity sub-networks, and their dependence on the chosen pruning method. In convolutional layers, we document the emergence of structure induced by magnitude-based unstructured pruning in conjunction with weight rewinding that resembles the effects of structured pruning. We also show empirical evidence that weight stability can be automatically achieved through apposite pruning techniques.
reject
This is an observational work with experiments comparing iterative pruning methods. I agree with the main concerns of all reviewers: (a) Experimental setups are too small-scale or use easy datasets, so it is hard to believe the findings would generalize to other settings, e.g., large-scale residual networks. This aspect is very important as this is an observational paper. (b) The main take-home contribution/message is weak considering the high standard of ICLR. Hence, I recommend rejection. I would encourage the authors to address the above concerns, as doing so could yield a valuable contribution.
train
[ "BJlW4p93iH", "B1llSFchjH", "S1etmY5hoB", "rkeO1YchjH", "r1e9ZDlfYH", "BJeL9xKqtB", "BJeo-bA4qB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear area chairs, reviewers, and readers, \n\nThank you all for taking the time to read our contribution. We have answered concerns and questions at the individual reviewer level.\n\nOn the whole, the authors would like to argue, from our point of view, that the point of science is not to always be directly and im...
[ -1, -1, -1, -1, 1, 3, 3 ]
[ -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2020_B1xgQkrYwS", "r1e9ZDlfYH", "BJeL9xKqtB", "BJeo-bA4qB", "iclr_2020_B1xgQkrYwS", "iclr_2020_B1xgQkrYwS", "iclr_2020_B1xgQkrYwS" ]
iclr_2020_SkgbmyHFDS
What Can Learned Intrinsic Rewards Capture?
Reinforcement learning agents can include different components, such as policies, value functions, state representations, and environment models. Any or all of these can be the loci of knowledge, i.e., structures where knowledge, whether given or learned, can be deposited and reused. Regardless of its composition, the objective of an agent is to behave so as to maximise the sum of suitable scalar functions of state: the rewards. As far as the learning algorithm is concerned, these rewards are typically given and immutable. In this paper we instead consider the proposition that the reward function itself may be a good locus of knowledge. This is consistent with a common use, in the literature, of hand-designed intrinsic rewards to improve the learning dynamics of an agent. We adopt a multi-lifetime setting of the Optimal Rewards Framework, and investigate how meta-learning can be used to find good reward functions in a data-driven way. To this end, we propose to meta-learn an intrinsic reward function that allows agents to maximise their extrinsic rewards accumulated until the end of their lifetimes. This long-term lifetime objective allows our learned intrinsic reward to generate systematic multi-episode exploratory behaviour. Through proof-of-concept experiments, we elucidate interesting forms of knowledge that may be captured by a suitably trained intrinsic reward, such as the usefulness of exploring uncertain states and rewards.
reject
The authors present a meta-learning-based approach to learning intrinsic rewards that improve RL performance across distributions of problems. This is essentially a more computationally efficient version of the approach suggested by Singh (2009/10). The reviewers agreed that the core idea was good, if a bit incremental, but were also concerned about the similarity to the Singh et al. work, the simplicity of the toy domains tested, and the comparison to relevant methods. The reviewers felt that the authors addressed their main concerns and significantly improved the paper; however, the similarity to Singh et al. remains, and thus the concerns about incrementalism. Thus, I recommend this paper for rejection at this time.
train
[ "BJxcM8b5FB", "Hkg-jyQioH", "SJltPW1ioB", "HJlH6qc0YS", "SyglkD0diB", "HJxsOotOiH", "SJgBqjLujS", "SJgRvi8djr", "HkgqriU_iS", "rygaes8ujB", "HkeUwwNKYS" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "\nThe paper proposes a meta-learning approach to learn reward functions for reinforcement learning agents. It defines an algorithm to optimize an intrinsic reward function for a distribution of tasks in order to maximise the agent’s lifetime rewards. The properties of this reward function and meta-learning algorit...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_SkgbmyHFDS", "SJltPW1ioB", "iclr_2020_SkgbmyHFDS", "iclr_2020_SkgbmyHFDS", "SJgBqjLujS", "SJgRvi8djr", "HJlH6qc0YS", "HkeUwwNKYS", "BJxcM8b5FB", "iclr_2020_SkgbmyHFDS", "iclr_2020_SkgbmyHFDS" ]
iclr_2020_HkxWXkStDB
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation
Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions. While architectural advances have led to improved accuracy, building robust models remains challenging, involving major changes in training procedure and datasets. Prior work has argued that there is an inherent trade-off between robustness and accuracy, as exemplified by standard data augmentation techniques such as Cutout, which improves clean accuracy but not robustness, and additive Gaussian noise, which improves robustness but hurts accuracy. We introduce Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image. Models trained with Patch Gaussian achieve state of the art on the CIFAR-10 and ImageNet Common Corruptions benchmarks while also maintaining accuracy on clean data. We find that this augmentation leads to reduced sensitivity to high frequency noise (similar to Gaussian) while retaining the ability to take advantage of relevant high frequency information in the image (similar to Cutout). We show it can be used in conjunction with other regularization methods and data augmentation policies such as AutoAugment. Finally, we find that the idea of restricting perturbations to patches can also be useful in the context of adversarial learning, yielding models without the loss in accuracy that is found with unconstrained adversarial training.
reject
The paper in its current form was just not well enough received by the reviewers to warrant an acceptance rating. It seems this work may have promise and the authors are encouraged to continue with this line of work.
train
[ "BJxQWxB5sr", "HyeLRDxYor", "BJgEIn_vjS", "HJlhgh_wjH", "rylqOtOvoB", "HylUyxwTKH", "H1xPl92pKB", "BkeUdm9QcB" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> these experiments only shows the empirical behavior of Patch Gaussian over baselines on some sample datasets\n\nWe stress that ImageNet and CIFAR are the most studied vision datasets, and for robustness they are the only datasets with standardized benchmarks (ImageNet-C and CIFAR-10-C). \n\n> I wish we can desig...
[ -1, -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, -1, 1, 1, 3 ]
[ "HyeLRDxYor", "BJgEIn_vjS", "HylUyxwTKH", "H1xPl92pKB", "BkeUdm9QcB", "iclr_2020_HkxWXkStDB", "iclr_2020_HkxWXkStDB", "iclr_2020_HkxWXkStDB" ]
iclr_2020_HkeZQJBKDB
Universal approximations of permutation invariant/equivariant functions by deep neural networks
In this paper, we develop a theory of the relationship between G-invariant/equivariant functions and deep neural networks for a finite group G. In particular, for a given G-invariant/equivariant function, we construct its universal approximator by a deep neural network whose layers are equipped with G-actions and whose affine transformations are G-equivariant/invariant. Using representation theory, we show that this approximator has exponentially fewer free parameters than usual models.
reject
The article studies universal approximation for the restricted class of equivariant functions, which can have a smaller number of free parameters. The reviewers found the topic important and also that the approach has merits. However, they pointed out that the article is very hard to read and that more intuitions, a clearer comparison with existing work, and connections to practice would be important. The responses did clarify some of the differences to previous works. However, there was no revision addressing the main concerns.
train
[ "rklrHsfJ5B", "BklO8pH9oB", "BkeHJTrqir", "HklCuhrcsS", "SJlwDP3aYS", "HJxLzccRYS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a universal approximation theorem for functions invariant and equivariant to finite group actions. It constructs the approximations using fully-connected deep neural networks with ReLU activations. It proves a bound on the number of parameters in the build equivariant model. The proof structure...
[ 3, -1, -1, -1, 3, 3 ]
[ 1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_HkeZQJBKDB", "SJlwDP3aYS", "HJxLzccRYS", "rklrHsfJ5B", "iclr_2020_HkeZQJBKDB", "iclr_2020_HkeZQJBKDB" ]
iclr_2020_BylQm1HKvB
CONTRIBUTION OF INTERNAL REFLECTION IN LANGUAGE EMERGENCE WITH AN UNDER-RESTRICTED SITUATION
Owing to language emergence, human beings have been able to understand the intentions of others, generate common concepts, and extend new concepts. Artificial intelligence researchers have not only predicted words and sentences statistically in machine learning, but have also created language systems through communication between machines. However, strong constraints are present in current studies (supervisor signals and rewards exist, or the concepts are fixed on only a point), thus hindering the emergence of real-world-like languages. In this study, we improved on the research of Batali (1998) and Choi et al. (2018) and attempted language emergence under low-constraint conditions resembling human language generation. We included the bias that exists in humans in the system as an "internal reflection function". Irrespective of this function, messages corresponding to the labels could be generated. However, through qualitative and quantitative analysis, we confirmed that the internal reflection function caused "overlearning" and a different structuring of message patterns. This result suggests that the internal reflection function performs effectively in creating a grounded language from raw images in an under-restricted situation such as human language generation.
reject
This paper is very different from most ICLR submissions, and appears to be addressing interesting themes. However the paper seems poorly written, and generally unclear. The motivation, task, method and evaluation are all unclear. I recommend that the authors add explicit definitions, equations, algorithm boxes, and more examples to make their paper clearer.
train
[ "ryl4RZvKiH", "ryl3mZvYiH", "SyemxWPFjr", "H1lU1caoFr", "B1gfgBHicr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank you for your detailed review and helpful comments. We address your concerns as follows.\n\n-There are problems with clarity, particularly after page 4 (Sections 3.2 and 3.3)\n\nThank you for your comment. Please see our 2nd and 4th explanations for Reviewer 1. In particular, we have supplemented the inter...
[ -1, -1, -1, 3, 3 ]
[ -1, -1, -1, 5, 1 ]
[ "B1gfgBHicr", "H1lU1caoFr", "H1lU1caoFr", "iclr_2020_BylQm1HKvB", "iclr_2020_BylQm1HKvB" ]
iclr_2020_HJemQJBKDr
Continual Density Ratio Estimation (CDRE): A new method for evaluating generative models in continual learning
We propose a new method, Continual Density Ratio Estimation (CDRE), which can estimate density ratios between a target distribution of real samples and a distribution of samples generated by a model while the model is changing over time and the data of the target distribution is not available after a certain time point. This method perfectly fits the setting of continual learning, in which one model is supposed to learn different tasks sequentially and the most crucial restriction is that the model has no or very limited access to the data of all learned tasks. Through CDRE, we can evaluate generative models in continual learning using f-divergences. To the best of our knowledge, there is no existing method that can evaluate generative models under the setting of continual learning without storing real samples from the target distribution.
reject
The paper seems technically correct and has some novelty, but the relevance of the paper is questionable. Considering the selectiveness of ICLR, I cannot recommend the paper for acceptance at this point. In more detail: the authors propose a technique for estimating density ratios between a target distribution of real samples and a distribution of samples generated by the model, without storing samples. The method seems to be technically well executed and verified. However, there were major concerns among multiple reviewers that the addressed problem does not seem relevant to the ICLR community. The question addressed seemed artificial, and it was not considered realistic (by R2 and also by R1 in the confidential discussion). R3 also expressed doubts about the usefulness of the method. Furthermore, some doubts were expressed regarding clarity (although opinions were mixed on that) and on the justification of the modification of the VAE objective to the continual setting.
val
[ "Skl_FzIaYS", "SkeJFie2jr", "Bkg4EhDsoS", "Hye2JPl5jB", "HJxn_QeciH", "HyexPrlciS", "Bye8oOS6Yr", "rJevdma6Fr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "######### Updated Review ###########\n\nI'd like to thank the author(s) for their rebuttal. However, I am still on the same boat with R1 and recommend rejection for this submission. \n\n\n################################\n\n\nThis submission seeks to evaluate generative models in a continual learning setup without...
[ 1, -1, -1, -1, -1, -1, 1, 6 ]
[ 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HJemQJBKDr", "Bkg4EhDsoS", "HyexPrlciS", "Skl_FzIaYS", "Bye8oOS6Yr", "rJevdma6Fr", "iclr_2020_HJemQJBKDr", "iclr_2020_HJemQJBKDr" ]
iclr_2020_SkxV7kHKvr
TWIN GRAPH CONVOLUTIONAL NETWORKS: GCN WITH DUAL GRAPH SUPPORT FOR SEMI-SUPERVISED LEARNING
Graph Neural Networks, as a combination of Graph Signal Processing and Deep Convolutional Networks, show great power in pattern recognition in non-Euclidean domains. In this paper, we propose a new method that deploys two pipelines based on the duality of a graph to improve accuracy. By exploring the primal graph and its dual graph, where nodes and edges can be treated as one another, we exploit the benefits of both vertex features and edge features. As a result, we arrive at a framework that has great potential in both semi-supervised and unsupervised learning.
reject
All three reviewers are consistently negative on this paper. Thus a reject is recommended.
test
[ "S1xQZ57pKr", "SJemyeYW5r", "BJgD85xqqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes two graph convolutional network for semi-supervised node classification. The model is composed of two GCNs, one on the primal graph and one on the dual graph. The paper is well written and easy to follow. However, the novelty and contribution are rather limited, and the performance improvement ...
[ 3, 1, 3 ]
[ 4, 3, 3 ]
[ "iclr_2020_SkxV7kHKvr", "iclr_2020_SkxV7kHKvr", "iclr_2020_SkxV7kHKvr" ]
iclr_2020_SJl47yBYPS
Towards Simplicity in Deep Reinforcement Learning: Streamlined Off-Policy Learning
The field of Deep Reinforcement Learning (DRL) has recently seen a surge in the popularity of maximum entropy reinforcement learning algorithms. Their popularity stems from the intuitive interpretation of the maximum entropy objective and their superior sample efficiency on standard benchmarks. In this paper, we seek to understand the primary contribution of the entropy term to the performance of maximum entropy algorithms. For the Mujoco benchmark, we demonstrate that the entropy term in Soft Actor Critic (SAC) principally addresses the bounded nature of the action spaces. With this insight, we propose a simple normalization scheme which allows a streamlined algorithm without entropy maximization to match the performance of SAC. Our experimental results demonstrate a need to revisit the benefits of entropy regularization in DRL. We also propose a simple non-uniform sampling method for selecting transitions from the replay buffer during training. We further show that the streamlined algorithm with the simple non-uniform sampling scheme outperforms SAC and achieves state-of-the-art performance on challenging continuous control tasks.
reject
The paper studies the role of entropy in maximum entropy RL, particularly in soft actor-critic, and proposes an action normalization scheme that leads to a new algorithm, called Streamlined Off-Policy (SOP), that does not maximize entropy, but retains or exceeds the performance of SAC. Independently from SOP, the paper also introduces Emphasizing Recent Experience (ERE) that samples minibatches from the replay buffer by prioritizing the most recent samples. After rounds of discussion and a revised version with added experiments, the reviewers viewed ERE as the main contribution, while had doubts regarding the claimed benefits of SOP. However, the paper is currently structured around SOP, and the effectiveness of ERE, which can be applied to any off-policy algorithm, is not properly studied. Therefore, I recommend rejection, but encourage the authors to revisit the work with an emphasis on ERE.
train
[ "H1x1YN36Kr", "B1gpjwjOOS", "HyeO-W-0YB", "B1xrT5V2oB", "BJgCauVnsB", "SJeIf-ejjH", "Hkl7dBE2jr", "Bye7Tbeiir", "H1xldZejoH", "HygLsdD_oH", "HkgCz0IdjS", "rygGVGFzoS", "Hkg8xaOMiH", "Bylzx9uGiH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "\n# Summary\nThe paper identifies a problem with TD3 related to action clipping. The authors notice that SAC alleviates this problem by means of entropy regularization. Given the insight that action clipping is crucial, the authors propose an alternative approach to avoid action clipping in TD3, which is empirical...
[ 6, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_SJl47yBYPS", "iclr_2020_SJl47yBYPS", "iclr_2020_SJl47yBYPS", "HygLsdD_oH", "B1gpjwjOOS", "HygLsdD_oH", "iclr_2020_SJl47yBYPS", "B1gpjwjOOS", "HkgCz0IdjS", "rygGVGFzoS", "Bylzx9uGiH", "HyeO-W-0YB", "B1gpjwjOOS", "H1x1YN36Kr" ]
iclr_2020_SJeUm1HtDH
Swoosh! Rattle! Thump! - Actions that Sound
Truly intelligent agents need to capture the interplay of all their senses to build a rich physical understanding of their world. In robotics, we have seen tremendous progress in using visual and tactile perception; however we have often ignored a key sense: sound. This is primarily due to lack of data that captures the interplay of action and sound. In this work, we perform the first large-scale study of the interactions between sound and robotic action. To do this, we create the largest available sound-action-vision dataset with 15,000 interactions on 60 objects using our robotic platform Tilt-Bot. By tilting objects and allowing them to crash into the walls of a robotic tray, we collect rich four-channel audio information. Using this data, we explore the synergies between sound and action, and present three key insights. First, sound is indicative of fine-grained object class information, e.g., sound can differentiate a metal screwdriver from a metal wrench. Second, sound also contains information about the causal effects of an action, i.e. given the sound produced, we can predict what action was applied on the object. Finally, object representations derived from audio embeddings are indicative of implicit physical properties. We demonstrate that on previously unseen objects, audio embeddings generated through interactions can predict forward models 24% better than passive visual embeddings.
reject
This paper investigates using sound to improve classification, motion prediction, and representation learning, all from data generated by a real robot. All the reviewers were intrigued by the work. The paper provides experiments on real robots (never a small task), a data-set for the community, and a sequence of illustrative experiments. Because the paper combines existing techniques, its main contribution is the empirical demonstration of the utility of using sound. Overall, it was not quite enough for the reviewers. The main issues were: (1) motion prediction is perhaps expected given the physical setup, (2) lack of comparison with other approaches, (3) lack of diversity in the demonstrations (10 objects, one domain). The authors added two new experiments with a different setup, further demonstrating their claims. In addition, the authors highlighted that the novelty of this task means there are no clear baselines (to which r3 agreed). The new experiments are briefly described in the response (with visuals on a website), but the authors did not update the paper. The new experiments could potentially significantly strengthen the paper. However, the terse description in the response and the supplied visuals made it difficult for the reviewers to judge their contribution. Overall, this is certainly a very interesting direction. The results on real-world data demonstrate promise, even if they are not in the benchmarking style the community is used to.
test
[ "r1lpIChYir", "r1gYVC3tiS", "Syl6-0htjr", "HJxX0ahtiB", "rJlDUEniFB", "Skxm-jETFr", "Hyxux2VH9r" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for finding our idea interesting and appreciating our new direction! \n\nNovelty/Technical Novelty: Kindly refer to the discussion in global comments on novelty.\n\nExperimental Results: Kindly refer to the discussion in global comments on comparison with SOTA and new experiments on Robotic Manipulation ...
[ -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, 5, 4, 5 ]
[ "Skxm-jETFr", "rJlDUEniFB", "Hyxux2VH9r", "iclr_2020_SJeUm1HtDH", "iclr_2020_SJeUm1HtDH", "iclr_2020_SJeUm1HtDH", "iclr_2020_SJeUm1HtDH" ]
iclr_2020_SyxDXJStPS
Reparameterized Variational Divergence Minimization for Stable Imitation
State-of-the-art results in imitation learning are currently held by adversarial methods that iteratively estimate the divergence between student and expert policies and then minimize this divergence to bring the imitation policy closer to expert behavior. Analogous techniques for imitation learning from observations alone (without expert action labels), however, have not enjoyed the same ubiquitous successes. Recent work in adversarial methods for generative models has shown that the measure used to judge the discrepancy between real and synthetic samples is an algorithmic design choice, and that different choices can result in significant differences in model performance. Choices including Wasserstein distance and various f-divergences have already been explored in the adversarial networks literature, while more recently the latter class has been investigated for imitation learning. Unfortunately, we find that in practice this existing imitation-learning framework for using f-divergences suffers from numerical instabilities stemming from the combination of function approximation and policy-gradient reinforcement learning. In this work, we alleviate these challenges and offer a reparameterization of adversarial imitation learning as f-divergence minimization before further extending the framework to handle the problem of imitation from observations only. Empirically, we demonstrate that our design choices for coupling imitation learning and f-divergences are critical to recovering successful imitation policies. Moreover, we find that with the appropriate choice of f-divergence, we can obtain imitation-from-observation algorithms that outperform baseline approaches and more closely match expert performance in continuous-control tasks with low-dimensional observation spaces. With high-dimensional observations, we still observe a significant gap with and without action labels, offering an interesting avenue for future work.
reject
The submission performs an empirical analysis of f-VIM (Ke, 2019), a method for imitation learning by f-divergence minimization. The paper especially focuses on a state-only formulation akin to GAILfO (Torabi et al., 2018b). The main contributions are: 1) The paper identifies numerical problems with the output activations of f-VIM and suggests a scheme to choose them such that the resulting rewards are bounded. 2) A regularizer that was proposed by Mescheder et al. (2018) for GANs is tested in the adversarial imitation learning setting. 3) In order to handle state-only demonstrations, the technique of GAILfO is applied to f-VIM (then denoted f-VIMO), which feeds state-nextState pairs instead of state-action pairs to the discriminator. The reviewers found the submitted paper hard to follow, which suggests a revision might make the authors' contributions more apparent in later submissions of this work.
train
[ "HyxFuo-vYB", "HJgm_OE2sr", "rygFMXN2sH", "r1eIsMLssr", "BkxBpGwGsr", "SyeL3GPGiS", "B1xhyGwzjB", "BJlj0WDfiS", "r1epoxDGiB", "SJg7BxvMjH", "BkgezmwptS", "ryg7NuiTtH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "* Summary:\nThe paper proposes an IL method based on the f-divergence. Specifically, the paper extends f-VIM (Ke et al., 2019), which uses the f-divergence for IL, by using a sigmoid function for discriminator output’s activation function. This choice of activation function yields an alternative objective function...
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_SyxDXJStPS", "B1xhyGwzjB", "BkxBpGwGsr", "HyxFuo-vYB", "SyeL3GPGiS", "HyxFuo-vYB", "BJlj0WDfiS", "BkgezmwptS", "ryg7NuiTtH", "iclr_2020_SyxDXJStPS", "iclr_2020_SyxDXJStPS", "iclr_2020_SyxDXJStPS" ]
iclr_2020_HJePXkHtvS
Deep Generative Classifier for Out-of-distribution Sample Detection
The capability of reliably detecting out-of-distribution samples is one of the key factors in deploying a good classifier, as the test distribution rarely matches the training distribution exactly in real-world applications. In this work, we propose a deep generative classifier which is effective at detecting out-of-distribution samples as well as classifying in-distribution samples, by integrating the concept of Gaussian discriminant analysis into deep neural networks. Unlike the discriminative (or softmax) classifier that only focuses on the decision boundary partitioning its latent space into multiple regions, our generative classifier aims to explicitly model class-conditional distributions as separable Gaussian distributions. Thereby, we can define the confidence score by the distance between a test sample and the center of each distribution. Our empirical evaluation on multi-class image and tabular data demonstrates that the generative classifier achieves the best performance in distinguishing out-of-distribution samples, and that it also generalizes well to various types of deep neural networks.
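The distance-based confidence score described in the abstract above can be sketched in a few lines: score a test point by its (negative) minimum Mahalanobis distance to the class-conditional Gaussian centers. This is a minimal illustration assuming a shared covariance across classes, as in standard Gaussian discriminant analysis; the function name and setup are ours, not the authors' exact formulation.

```python
import numpy as np

def mahalanobis_confidence(z, class_means, shared_cov):
    """OOD confidence score: negative minimum Mahalanobis distance from
    a latent feature z to the class-conditional Gaussian centers.
    In-distribution samples lie close to some center, so they receive
    higher (less negative) scores than out-of-distribution samples."""
    prec = np.linalg.inv(shared_cov)
    dists = [float((z - mu) @ prec @ (z - mu)) for mu in class_means]
    return -min(dists)
```

Thresholding this score separates in-distribution from out-of-distribution inputs, while the arg-min over classes doubles as the classification decision.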
reject
The paper presents a training method for deep neural networks to detect out-of-distribution samples from the perspective of Gaussian discriminant analysis. Reviewers and AC agree that some of the ideas appear in previous work (although that work does not focus on training), and the additional ideas in the paper are not especially novel. Furthermore, the experimental results are weak; e.g., comparisons with other deep generative classifiers are desirable, as the paper focuses on training such deep models. Hence, I recommend rejection.
train
[ "Sygh7IBZ5r", "rkg7luoWcS", "Byl13TdZsS", "HJgDWCr-oH", "BJxYqil-iH", "SJgAzmZUYr", "SJx0sj7NOS", "Syl8zfSZ_S" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "This paper proposes a metric learning-based generative model for detecting the out-of-distribution examples. A new objective function is proposed to model class-dependent class-distribution into a Gaussian analysis models. For the proposed objective, the illustration of derived KL divergence under the Gaussian di...
[ 3, 6, -1, -1, -1, 3, -1, -1 ]
[ 4, 1, -1, -1, -1, 5, -1, -1 ]
[ "iclr_2020_HJePXkHtvS", "iclr_2020_HJePXkHtvS", "rkg7luoWcS", "SJgAzmZUYr", "Sygh7IBZ5r", "iclr_2020_HJePXkHtvS", "Syl8zfSZ_S", "iclr_2020_HJePXkHtvS" ]
iclr_2020_SJx_QJHYDB
Finding Winning Tickets with Limited (or No) Supervision
The lottery ticket hypothesis argues that neural networks contain sparse subnetworks, which, if appropriately initialized (the winning tickets), are capable of matching the accuracy of the full network when trained in isolation. Empirically made in different contexts, such an observation opens interesting questions about the dynamics of neural network optimization and the importance of their initializations. However, the properties of winning tickets are not well understood, especially the importance of supervision in the generating process. In this paper, we aim to answer the following open questions: can we find winning tickets with few data samples or few labels? can we even obtain good tickets without supervision? Perhaps surprisingly, we provide a positive answer to both, by generating winning tickets with limited access to data, or with self-supervision---thus without using manual annotations---and then demonstrating the transferability of the tickets to challenging classification tasks such as ImageNet.
reject
The paper studies finding winning tickets with limited supervision. The authors consider a variety of different settings. An interesting contribution is to show that findings on small datasets may be misleading. That said, all three reviewers agree that novelty is limited, and some found inconsistencies and passages that were hard to read. Based on this, it seems the paper doesn't quite meet the ICLR bar in its current form.
train
[ "BkxyHdzhsr", "rJgexFfhir", "BJxzhLG3jr", "SJl5O8GnjS", "H1gzfIG2or", "SkxRhOSTFB", "SJeuh5z19B", "B1gdk_E5cH", "ryxRdy869S" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1) Following the reviewer's comment, we report in Appendix E tables the exact accuracies in each setting. We report mean and standard errors for our experiments which we run with 3 (ImageNet and Places) or 6 (CIFAR) different seeds. We thank the reviewer for this recommendation and for helping us improving the cla...
[ -1, -1, -1, -1, -1, 1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, 5, 4, 1, 1 ]
[ "SJeuh5z19B", "SkxRhOSTFB", "B1gdk_E5cH", "ryxRdy869S", "iclr_2020_SJx_QJHYDB", "iclr_2020_SJx_QJHYDB", "iclr_2020_SJx_QJHYDB", "iclr_2020_SJx_QJHYDB", "iclr_2020_SJx_QJHYDB" ]
iclr_2020_HJeFmkBtvB
Annealed Denoising score matching: learning Energy based model in high-dimensional spaces
Energy-based models output unnormalized log-probability values given data samples. Such an estimation is essential in a variety of application problems such as sample generation, denoising, sample restoration, outlier detection, Bayesian reasoning, and many more. However, standard maximum likelihood training is computationally expensive due to the requirement of sampling the model distribution. Score matching potentially alleviates this problem, and denoising score matching (Vincent, 2011) is a particularly convenient version. However, previous attempts failed to produce models capable of high-quality sample synthesis. We believe that this is because they only performed denoising score matching over a single noise scale. To overcome this limitation, here we instead learn an energy function using all noise scales. When sampled using annealed Langevin dynamics and a single-step denoising jump, our model produces high-quality samples comparable to state-of-the-art techniques such as GANs, in addition to assigning likelihoods to test data comparable to previous likelihood models. Our model sets a new sample-quality baseline among likelihood-based models. We further demonstrate that our model learns the sample distribution and generalizes well on an image inpainting task.
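The single-scale denoising score matching objective (Vincent, 2011) that the abstract extends to all noise scales can be written down compactly: perturb the data with Gaussian noise and regress the model's score at the perturbed point onto the score of the Gaussian smoothing kernel. This is an illustrative sketch with a generic score function, not the authors' annealed training loop.

```python
import numpy as np

def dsm_loss(score_fn, x, sigma, rng):
    """Denoising score matching at a single noise scale sigma:
    the model's score at the perturbed sample x_tilde should match
    the kernel's score, -(x_tilde - x) / sigma**2."""
    noise = rng.standard_normal(x.shape) * sigma
    x_tilde = x + noise
    target = -noise / sigma**2
    diff = score_fn(x_tilde) - target
    return float(np.mean(np.sum(diff**2, axis=-1)))
```

Annealed variants average this loss over a decreasing sequence of sigma values, which is the multi-scale extension the abstract argues is essential for high-quality synthesis.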
reject
This paper presents a variant of the Noise Conditional Score Network (NCSN) which does score matching using a single Gaussian scale mixture noise model. Unlike the NCSN, it learns a single energy-based model, and therefore can be compared directly to other models in terms of compression. I've read the paper, and the methods, exposition, and experiments all seem solid. Numerically, the score is slightly below the cutoff; reviewers generally think the paper is well-executed, but lacking in novelty and quality of results relative to Song & Ermon (2019).
train
[ "rkej7Q86KS", "SJeizFPSjS", "S1erAdDHoB", "rJe2xFDrjB", "H1l69OwBiS", "rygT2g4otH", "BkesvLyAFH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "########Updated Review ###########\n\nI would like to thank the author(s) for their reply, which I have carefully read and it partly addresses my original concerns. Still, as agreed by all three reviewers, this paper might not be a significant step up compared with [1]. I am raising my point to weak reject to refl...
[ 3, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_HJeFmkBtvB", "rygT2g4otH", "BkesvLyAFH", "rkej7Q86KS", "iclr_2020_HJeFmkBtvB", "iclr_2020_HJeFmkBtvB", "iclr_2020_HJeFmkBtvB" ]
iclr_2020_HygYmJBKwH
YaoGAN: Learning Worst-case Competitive Algorithms from Self-generated Inputs
We tackle the challenge of using machine learning to find algorithms with strong worst-case guarantees for online combinatorial optimization problems. Whereas the previous approach along this direction (Kong et al., 2018) relies on significant domain expertise to provide hard distributions over input instances at training, we ask whether this can be accomplished from first principles, i.e., without any human-provided data beyond specifying the objective of the optimization problem. To answer this question, we draw insights from classic results in game theory, analysis of algorithms, and online learning to introduce a novel framework. At the high level, similar to a generative adversarial network (GAN), our framework has two components whose respective goals are to learn the optimal algorithm as well as a set of input instances that captures the essential difficulty of the given optimization problem. The two components are trained against each other and evolved simultaneously. We test our ideas on the ski rental problem and the fractional AdWords problem. For these well-studied problems, our preliminary results demonstrate that the framework is capable of finding algorithms as well as difficult input instances that are consistent with known optimal results. We believe our new framework points to a promising direction which can facilitate the research of algorithm design by leveraging ML to improve the state of the art both in theory and in practice.
reject
The authors propose an intriguing way to designing competitive online algorithms. However, the state of the paper and the provided evidence of the success of the proposed methodology is too preliminary to merit acceptance.
train
[ "rkeZqrk0tr", "H1lxDJomsr", "BygZ2yjmir", "SylS_OoXsH", "BkgUZcsmsS", "SyghCKsmor", "BkgNVosXiB", "H1xP2ojXjB", "H1egloiXiS", "Skgxo_i6_S", "SkxLGy2ftH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update to the Review after the rebuttal from the Authors:\nAfter carefully reviewing the responses by the authors especially on my concerns about the significance of solving an instance of a given problem and the improvement in the exposition of the ideas I would like to amend my earlier decision and recommend to ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_HygYmJBKwH", "iclr_2020_HygYmJBKwH", "H1lxDJomsr", "rkeZqrk0tr", "SylS_OoXsH", "SylS_OoXsH", "SkxLGy2ftH", "Skgxo_i6_S", "SkxLGy2ftH", "iclr_2020_HygYmJBKwH", "iclr_2020_HygYmJBKwH" ]
iclr_2020_HkgqmyrYDH
WORD SEQUENCE PREDICTION FOR AMHARIC LANGUAGE
Word prediction is guessing what word comes next, based on some current information, and it is the main focus of this study. Even though Amharic is used by a large population, no significant work has been done on the topic. In this study, an Amharic word sequence prediction model is developed using machine learning. We used statistical methods based on a Hidden Markov Model, incorporating detailed part-of-speech tags and user profiling or adaptation. One of the motivations for this research is to overcome the challenges posed by inflected languages. Word sequence prediction is a challenging task for inflected languages (Gustavii & Pettersson, 2003; Seyyed & Assi, 2005). These languages are morphologically rich and have enormous numbers of word forms, i.e., a word can take many different forms. As Amharic is morphologically rich, it shares this problem (Tessema, 2014). This problem makes word prediction much more difficult and results in poor performance. Previous research used a dictionary approach with no consideration of context information. For this reason, storing all forms in a dictionary will not solve the problem as it does in English and other less inflected languages. Therefore, we introduced two models, tags-and-words and linear interpolation, that use part-of-speech tag information in addition to word n-grams in order to maximize the likelihood of syntactic appropriateness of the suggestions. The statistics included in the systems vary from single-word frequencies to part-of-speech tag n-grams. We describe a combined statistical and lexical word prediction system and develop Amharic bigram and trigram language models for training purposes. The overall study followed the Design Science Research Methodology (DSRM).
reject
This paper presents a language model for Amharic using HMMs and incorporating POS tags. The paper is very short and lacks essential parts such as describing the exact model and the experimental design and results. The reviewers all rejected this paper, and there was no author rebuttal. This paper is clearly not appropriate for publication at ICLR.
train
[ "BJeDrU0bKH", "HJxiqVt3tr", "rygH6_Mh5r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I would not like to sound offensive, but this paper is clearly below the standards of the conference, and outside any academic orthodoxy for the matter:\n- It is only 3 pages long including references, and does not even follow the conference template.\n- It has only 3 sections (\"introduction\", \"methodology\" an...
[ 1, 1, 1 ]
[ 5, 4, 3 ]
[ "iclr_2020_HkgqmyrYDH", "iclr_2020_HkgqmyrYDH", "iclr_2020_HkgqmyrYDH" ]
iclr_2020_Bkx5XyrtPS
Depth creates no more spurious local minima in linear networks
We show that for any convex differentiable loss, a deep linear network has no spurious local minima as long as this is true for the two-layer case. This reduction greatly simplifies the study of the existence of spurious local minima in deep linear networks. When applied to the quadratic loss, our result immediately implies the powerful result by Kawaguchi (2016). Further, with the recent work by Zhou & Liang (2018), we can remove all the assumptions in (Kawaguchi, 2016). This property holds for more general “multi-tower” linear networks too. Our proof builds on the work in (Laurent & von Brecht, 2018) and develops a new perturbation argument to show that any spurious local minimum must have full rank, a structural property which can be useful more generally.
reject
The paper shows that the question of whether linear deep networks have spurious local minima under benign conditions on the loss function can be reduced to the two-layer case. This paper is motivated by and builds upon works that prove the result for specific cases. Reviewers found the techniques used to prove the result not very novel in light of existing techniques. Novelty of technique is of particular importance to this area because these results have little practical value in linear networks on their own; the goal is to extend these techniques to the more interesting non-linear case.
train
[ "Bkg1_DOvsS", "B1lWtYBXor", "rkg90tx-oS", "rkg1Svl-jr", "HJlLXukZoS", "B1gk7EgVYr", "ryxmC2kpFB", "SylpH0iMqH" ]
[ "author", "public", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the reference!", "Hi Authors,\nThank you for your interesting paper. I wanted to bring to your attention that your insights into spurious local minima is related to our paper which shows, both theoretically and empirically, that highly suboptimal local minima do exist in the loss landscape of nonline...
[ -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "B1lWtYBXor", "iclr_2020_Bkx5XyrtPS", "ryxmC2kpFB", "B1gk7EgVYr", "SylpH0iMqH", "iclr_2020_Bkx5XyrtPS", "iclr_2020_Bkx5XyrtPS", "iclr_2020_Bkx5XyrtPS" ]
iclr_2020_Syx9Q1rYvH
Mutual Information Maximization for Robust Plannable Representations
Extending the capabilities of robotics to real-world complex, unstructured environments requires developing better perception systems while maintaining low sample complexity. When dealing with high-dimensional state spaces, current methods are either model-free, or model-based with reconstruction-based objectives. The sample inefficiency of the former constitutes a major barrier to applying them to the real world. While the latter present low sample complexity, they learn latent spaces that need to reconstruct every single detail of the scene. Real-world environments are unstructured and cluttered with objects. Capturing all this variability in the latent representation harms its applicability to downstream tasks. In this work, we present mutual information maximization for robust plannable representations (MIRO), an information-theoretic representation learning objective for model-based reinforcement learning. Our objective optimizes for a latent space that maximizes the mutual information with future observations and emphasizes the relevant aspects of the dynamics, which allows it to capture all the information needed for planning. We show that our approach learns a latent representation that, in cluttered scenes, focuses on the task-relevant features and ignores the irrelevant aspects. At the same time, state-of-the-art methods with reconstruction objectives are unable to learn in such environments.
reject
The manuscript concerns a mutual information maximization objective for dynamics model learning, with the aim of using this representation for planning / skill learning. The central claim is that this objective promotes robustness to visual distractors, compared with reconstruction-based objectives. The proposed method is evaluated on DeepMind Control Suite tasks from rendered pixel observations, modified to include simple visual distractors. Reviewers concurred that the problem under consideration is important, and (for the most part) that the presentation was clear, though one reviewer disagreed, remarking that the method is only introduced on the 5th page. A central sticking point was whether the method would reliably give rise to representations that ignore distractors and preferentially encode task information. (I would note that a very similar phenomenon to the behaviour they describe has been empirically demonstrated before in Warde-Farley et al., 2018, also on DM Control Suite tasks, where the most predictable/controllable elements of a scene are reliably imitated by a goal-conditioned policy trained against an MI-based reward.) The distractors evaluated were criticized as unrealistically stochastic, and it was noted that fully deterministic distractors may confound the procedure; while a revised version of the manuscript experimented with *less* random distractors, these distractors were still unpredictable at the scale of more than a few frames. While the manuscript has improved considerably in several ways based on reviewer feedback, reviewers remain unconvinced by the empirical investigation, particularly the choice of distractors. I therefore recommend rejection at this time, while encouraging the authors to incorporate criticisms to strengthen a resubmission.
train
[ "B1x4s2u0Fr", "Bkg6m1X2sr", "B1lHIkXnoB", "HklgKJQ3jr", "r1eaHySCFr", "Hyl7bVL0tr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\n########### Post-rebuttal summary ############\nThe proposed method relies on the fact that the distractors have highly unpredictable movements such that a mutual information objective between frames of a sequence learns to ignore them. The experimental evaluations are performed with distractors that randomly ch...
[ 3, -1, -1, -1, 1, 6 ]
[ 4, -1, -1, -1, 3, 5 ]
[ "iclr_2020_Syx9Q1rYvH", "r1eaHySCFr", "Hyl7bVL0tr", "B1x4s2u0Fr", "iclr_2020_Syx9Q1rYvH", "iclr_2020_Syx9Q1rYvH" ]
iclr_2020_H1lTQ1rFvS
R2D2: Reuse & Reduce via Dynamic Weight Diffusion for Training Efficient NLP Models
We propose R2D2 layers, a new neural block for training efficient NLP models. Our proposed method is characterized by a dynamic weight diffusion mechanism which learns to reuse and reduce parameters in the conventional transformation layer, commonly found in popular Transformer/LSTM models. Our method is inspired by recent Quaternion methods which share parameters via the Hamilton product. It can be interpreted as a neural, learned approximation of the Hamilton product, which imbues our method with increased flexibility and expressiveness, i.e., we are no longer restricted by the 4D nature of Quaternion weight sharing. We conduct extensive experiments in the NLP domain, showing that R2D2 (i) enables parameter savings of 2 to 16 times with minimal degradation of performance and (ii) outperforms other parameter-saving alternatives such as low-rank factorization and Quaternion methods.
reject
This paper proposes a very interesting alternative to feed-forward network layers, based on Quaternion methods and Hamilton products, which has the benefit of reducing the number of parameters in the neural network (more than 50% smaller) without sacrificing performance. The authors conducted extensive experiments on language tasks (NMT and NLI, among others) using Transformers and LSTMs. The paper appears to be clearly presented and to have extensive results on a variety of tasks. However, all reviewers pointed out a lack of in-depth analysis, and thus insight, into why this approach works, as well as questions on the specific effects of regularization. These concerns were not addressed in the rebuttal period, the authors instead deferring them to future work. My assessment is that, with further analysis, ablation studies, and comparison to alternative methods for reducing model size (quantization, etc.), this paper has the potential to be quite impactful, and I look forward to future versions of this work. As it currently stands, however, I don’t believe it’s suitable for publication at ICLR.
train
[ "BygXzJ0siH", "BJlV6jaoiB", "r1gTNqaosS", "SJx3GtTwtH", "B1xr0FcpFS", "BJer5FcCYH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer,\n\nThanks for the insightful review! We are happy to hear that you liked our paper!\n\nPertaining to the dynamics of the model, we believe that our method is a more expressive, parameterized adaptation of the Hamilton product - which already brings about benefits from latent inter component interact...
[ -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, 5, 4, 5 ]
[ "B1xr0FcpFS", "SJx3GtTwtH", "BJer5FcCYH", "iclr_2020_H1lTQ1rFvS", "iclr_2020_H1lTQ1rFvS", "iclr_2020_H1lTQ1rFvS" ]
iclr_2020_r1la7krKPS
Measuring Calibration in Deep Learning
Overconfidence and underconfidence in machine learning classifiers are measured by calibration: the degree to which the probabilities predicted for each class match the accuracy of the classifier on that prediction. We propose two new measures for calibration, the Static Calibration Error (SCE) and Adaptive Calibration Error (ACE). These measures take into account every prediction made by a model, in contrast to the popular Expected Calibration Error.
reject
The authors propose two measures of calibration that don't simply rely on the top prediction. The reviewers gave a lot of useful feedback. Unfortunately, the authors didn't respond.
test
[ "SyxmYff5Fr", "rJe58YUoYr", "r1escKFm5S", "Skl4_SQaDr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "After an interesting review of calibration methods, the paper describes two new methods for assessing calibration. The first method, SCE, is an extension of the usual ECE to the multi class setting. The second method, ACE, is a slight variation where bins are computed adaptively.\n\nThe paper is interesting and re...
[ 6, 3, 1, -1 ]
[ 3, 4, 3, -1 ]
[ "iclr_2020_r1la7krKPS", "iclr_2020_r1la7krKPS", "iclr_2020_r1la7krKPS", "iclr_2020_r1la7krKPS" ]
iclr_2020_BylyV1BtDB
FR-GAN: Fair and Robust Training
We consider the problem of fair and robust model training in the presence of data poisoning. Ensuring fairness usually involves a tradeoff against accuracy, so if the data poisoning is mistakenly viewed as additional bias to be fixed, accuracy is sacrificed even further. We demonstrate that this phenomenon indeed holds for state-of-the-art model fairness techniques. We then propose FR-GAN, which holistically performs fair and robust model training using generative adversarial networks (GANs). We first use a generator that attempts to classify examples as accurately as possible. In addition, we deploy two discriminators: (1) a fairness discriminator that predicts the sensitive attribute from classification results and (2) a robustness discriminator that distinguishes examples and predictions from a clean validation set. Our framework respects all the prominent fairness measures: disparate impact, equalized odds, and equal opportunity. Also, FR-GAN optimizes fairness without requiring knowledge of the prior statistics of the sensitive attributes. In our experiments, FR-GAN shows almost no decrease in fairness and accuracy in the presence of data poisoning, unlike other state-of-the-art fairness methods, which are vulnerable. In addition, FR-GAN can be adjusted via its parameters to maintain reasonable accuracy and fairness even if the validation set is small or unavailable.
reject
This manuscript proposes an approach for fair and robust training of predictive models -- both properties are enforced using adversarial methods, i.e., an adversarial loss for fairness and an adversarial loss for robustness. The resulting model is evaluated empirically and shown to improve fairness and robustness performance. The reviewers and AC agree that the problem studied is timely and interesting, as there is limited work on joint fairness and robustness. However, the reviewers were unconvinced about the novelty and clarity of the conceptual and empirical results. In reviews and discussion, the reviewers also noted insufficient motivation for the approach.
val
[ "HklcxeOYsB", "r1gaUy_KjS", "SJlInCvKir", "ryg38nhhtS", "SJe-mj1AFH", "rklyVs7S5B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the insightful comments.\n\nQ3-1. (1) motivation\n\nA3-1. \nWe believe that given that real data would increasingly become both biased and poisoned (this is what we expect in the big data era - see the next paragraph for details), our main contribution of providing an integrated solution for fair and ro...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 1, 4, 4 ]
[ "ryg38nhhtS", "SJe-mj1AFH", "rklyVs7S5B", "iclr_2020_BylyV1BtDB", "iclr_2020_BylyV1BtDB", "iclr_2020_BylyV1BtDB" ]
iclr_2020_S1xJ4JHFvS
Acutum: When Generalization Meets Adaptability
In spite of its slow convergence, stochastic gradient descent (SGD) is still the most practical optimization method due to its outstanding generalization ability and simplicity. On the other hand, adaptive methods have attracted much attention from the optimization and machine learning communities, both for their leverage of life-long information and for their deep and fundamental mathematical theory. Taking the best of both worlds is one of the most exciting and challenging questions in the field of optimization for machine learning. In this paper, we take a small step towards this ultimate goal. We revisit existing adaptive methods from a novel point of view, which reveals a fresh understanding of momentum. Our new intuition empowers us to remove the second moments in Adam without loss of performance. Based on this view, we propose a new method, named acute adaptive momentum (Acutum). To the best of our knowledge, Acutum is the first adaptive gradient method without second moments. Experimentally, we demonstrate that our method converges faster than Adam/AMSGrad and generalizes as well as SGD with momentum. We also provide a convergence analysis of our proposed method to complement our intuition.
reject
The paper addresses an important problem of finding a good trade-off between generalization and convergence speed of stochastic gradient methods for training deep nets. However, there is a consensus among the reviewers, even after rebuttals provided by the authors, that the contribution is somewhat limited and the paper may require additional work before it is ready to be published.
test
[ "HJelXDOTFS", "S1gjWbZ2iB", "Byg5DzhcsS", "SkxJST6zsS", "r1xK5nTGsr", "BJeulyAMjr", "rJgbavlCYr", "BkgxL3dCtH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes new stochastic optimization methods to achieve a fast convergence of adaptive SGD and preserve the generalization ability of SGD. The idea is to let the search direction opposite to the gradient at the current batch of examples and a bit orthogonal to previous batch of examples. The algorithm i...
[ 3, -1, -1, -1, -1, -1, 1, 6 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_S1xJ4JHFvS", "Byg5DzhcsS", "SkxJST6zsS", "HJelXDOTFS", "rJgbavlCYr", "BkgxL3dCtH", "iclr_2020_S1xJ4JHFvS", "iclr_2020_S1xJ4JHFvS" ]
iclr_2020_rJel41BtDH
Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning
Semi-supervised learning, i.e. jointly learning from labeled and unlabeled samples, is an active research topic due to its key role in relaxing human annotation constraints. In the context of image classification, recent advances in learning from unlabeled samples are mainly focused on consistency regularization methods that encourage invariant predictions for different perturbations of unlabeled samples. We, conversely, propose to learn from unlabeled data by generating soft pseudo-labels using the network predictions. We show that naive pseudo-labeling overfits to incorrect pseudo-labels due to the so-called confirmation bias, and demonstrate that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it. The proposed approach achieves state-of-the-art results on CIFAR-10/100 and Mini-ImageNet despite being much simpler than other state-of-the-art methods. These results demonstrate that pseudo-labeling can outperform consistency regularization methods, whereas the opposite was assumed in previous work. Code will be made available.
reject
The paper focuses on semi-supervised learning and presents a pseudo-labeling-based approach with i) mixup (Zhang et al. 2018); ii) keeping $k$ labelled examples in each minibatch. The paper is clear and well-written; it presents a simple and empirically effective idea. Reviewers appreciate the nice proof of concept on the two-moons dataset and the fact that the approach is validated with different architectures. Some details would need to be clarified, e.g. about the dropout control. A main contribution of the paper is to show that pseudo-labelling plus the combination of mixup and certainty (keeping $k$ labelled examples in each minibatch) can outperform the state of the art based on consistency regularization methods, while being simpler and computationally much less demanding. While the paper does a good job of showing that "it works", the reader however misses some discussion about "why it works". It is most interesting that performance does not improve with $k$ (Table 1). An in-depth analysis of the trade-off between uncertainty (through mixup and the entropy of the pseudo-labels) and certainty, and how it impacts performance, would be appreciated. You might consider monitoring how this trade-off evolves along learning; I suspect that evolving $k$ along the epochs might make sense; the question is to find a simple way to control this hyper-parameter online. The area chair encourages the authors to continue this very promising path of research, and dig a little bit deeper, considering the question of optimizing the trade-off between certainty and uncertainty along the training trajectory.
train
[ "SJlg3L55tH", "Hye47n6KjS", "SkeyKpaFoS", "BJgwVTaYoS", "HJlApoptjr", "Syeyoi6KsS", "Syla9DLY_r", "B1g97ORFtB", "ByeJgrROYH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "public" ]
[ "Summary: This paper focuses on the semi-supervised learning problem, and proposes a way to improve previous pseudo-labeling methods. In pseudo-labeling, there is an issue called confirmation bias, which accumulates the early errors of wrong pseudo labels. By adding some simple tricks such as adding mixup augment...
[ 3, -1, -1, -1, -1, -1, 8, 3, -1 ]
[ 3, -1, -1, -1, -1, -1, 4, 4, -1 ]
[ "iclr_2020_rJel41BtDH", "B1g97ORFtB", "Syla9DLY_r", "Syla9DLY_r", "SJlg3L55tH", "SJlg3L55tH", "iclr_2020_rJel41BtDH", "iclr_2020_rJel41BtDH", "iclr_2020_rJel41BtDH" ]
iclr_2020_ryefE1SYDr
LIA: Latently Invertible Autoencoder with Adversarial Learning
Deep generative models such as Variational AutoEncoder (VAE) and Generative Adversarial Network (GAN) play an increasingly important role in machine learning and computer vision. However, there are two fundamental issues hindering their real-world applications: the difficulty of conducting variational inference in VAE and the functional absence of encoding real-world samples in GAN. In this paper, we propose a novel algorithm named Latently Invertible Autoencoder (LIA) to address the above two issues in one framework. An invertible network and its inverse mapping are symmetrically embedded in the latent space of VAE. Thus the partial encoder first transforms the input into feature vectors and then the distribution of these feature vectors is reshaped to fit a prior by the invertible network. The decoder proceeds in the reverse order of the encoder's composite mappings. A two-stage stochasticity-free training scheme is designed to train LIA via adversarial learning, in the sense that the decoder of LIA is first trained as a standard GAN with the invertible network and then the partial encoder is learned from an autoencoder by detaching the invertible network from LIA. Experiments conducted on the FFHQ face dataset and three LSUN datasets validate the effectiveness of LIA for inference and generation.
reject
A nice idea: the latent prior is replaced by a GAN. There is general agreement among all four reviewers to reject the submission, based on an insufficiently thorough description of the approach and on concerns that it may not be novel.
train
[ "SJeLsABqFH", "BJxZZTx5ir", "B1l9exHBjH", "ByxHOkHBir", "BylZiANHor", "BkgxA2NrjB", "BkxbdqGb5B", "rJxSeEOj5S", "ByljTrZ0qB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "LIA: Latently Invertible Autoencoder Review\n\nThis paper proposes a novel generative autoencoder, and a two-stage scheme for training it. A typical VAE is trained with a variational approximation: during training latents are sampled from mu(x) + sigma(x) * N(0,1), mu and sigma are regularized with KL div to match...
[ 3, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ 5, -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "iclr_2020_ryefE1SYDr", "BkgxA2NrjB", "ByljTrZ0qB", "BylZiANHor", "SJeLsABqFH", "rJxSeEOj5S", "iclr_2020_ryefE1SYDr", "iclr_2020_ryefE1SYDr", "iclr_2020_ryefE1SYDr" ]
iclr_2020_BJx7N1SKvB
A Random Matrix Perspective on Mixtures of Nonlinearities in High Dimensions
One of the distinguishing characteristics of modern deep learning systems is that they typically employ neural network architectures that utilize enormous numbers of parameters, often in the millions and sometimes even in the billions. While this paradigm has inspired significant research on the properties of large networks, relatively little work has been devoted to the fact that these networks are often used to model large complex datasets, which may themselves contain millions or even billions of constraints. In this work, we focus on this high-dimensional regime in which both the dataset size and the number of features tend to infinity. We analyze the performance of a simple regression model trained on the random features F=f(WX+B) for a random weight matrix W and random bias vector B, obtaining an exact formula for the asymptotic training error on a noisy autoencoding task. The role of the bias can be understood as parameterizing a distribution over activation functions, and our analysis actually extends to general such distributions, even those not expressible with a traditional additive bias. Intriguingly, we find that a mixture of nonlinearities can outperform the best single nonlinearity on the noisy autoencoding task, suggesting that mixtures of nonlinearities might be useful for approximate kernel methods or neural network architecture design.
reject
In this work, the authors focus on the high-dimensional regime in which both the dataset size and the number of features tend to infinity. They analyze the performance of a simple regression model trained on the random features and revealed several interesting and important observations. Unfortunately, the reviewers could not reach a consensus as to whether this paper had sufficient novelty to merit acceptance at this time. Incorporating their feedback would move the paper closer towards the acceptance threshold.
train
[ "HyxOtaQviB", "rkxjQeEwiH", "Byg3S1NPjH", "BylmyJNwjB", "rylKZCXDiS", "Bkl2KwYTtH", "rklsQVQCtS", "SJgdT5JY5S" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are grateful to all reviewers for their constructive feedback and for the time they took to review our work. We have uploaded a new version of the paper and added detailed responses to their comments below. In particular, we address some technical questions about our paper, and explain the overall merits of our...
[ -1, -1, -1, -1, -1, 8, 3, 6 ]
[ -1, -1, -1, -1, -1, 3, 1, 1 ]
[ "iclr_2020_BJx7N1SKvB", "SJgdT5JY5S", "rklsQVQCtS", "rklsQVQCtS", "Bkl2KwYTtH", "iclr_2020_BJx7N1SKvB", "iclr_2020_BJx7N1SKvB", "iclr_2020_BJx7N1SKvB" ]
iclr_2020_r1l7E1HFPH
Multi-step Greedy Policies in Model-Free Deep Reinforcement Learning
Multi-step greedy policies have been extensively used in model-based Reinforcement Learning (RL), in cases when a model of the environment is available (e.g., in the game of Go). In this work, we explore the benefits of multi-step greedy policies in model-free RL when employed in the framework of multi-step Dynamic Programming (DP): multi-step Policy and Value Iteration. These algorithms iteratively solve short-horizon decision problems and converge to the optimal solution of the original one. By using model-free algorithms as solvers of the short-horizon problems, we derive fully model-free algorithms which are instances of the multi-step DP framework. As model-free algorithms are prone to instabilities w.r.t. the decision-problem horizon, this simple approach can help mitigate these instabilities and results in improved model-free algorithms. We test this approach and show results on both discrete and continuous control problems.
reject
This paper extends recent multi-step dynamic programming algorithms to reinforcement learning with function approximation. In particular, the paper extends h-step optimal Bellman operators (and associated k-PI and k-VI algorithms) to deep reinforcement learning. The paper describes new extensions to the DQN and TRPO algorithms. This approach is claimed to reduce the instability of model-free algorithms, and the approach is tested on Atari and Mujoco domains. The reviewers noticed several limitations of the work. The reviewers found little theoretical contribution in this work, and they were unsatisfied with the empirical contributions. The reviewers were unconvinced of the strength and clarity of the empirical results on the Atari and Mujoco domains with the deep network architectures. The reviewers suggested that simpler domains with a simpler function approximation scheme could enable more thorough experiments and more conclusive results. The claim in the abstract of addressing the instabilities was also not adequately studied in the paper. This paper is not ready for publication. The primary contribution of this work is the empirical evaluation, and the evaluation is not sufficiently clear for the reviewers.
train
[ "rJlPyxFhor", "S1gcu3dnjH", "HJe7nfqXiH", "B1l5akg2jB", "SJeiUS9miB", "r1eXcBqXsr", "BylDpXqQoH", "SJguBE9QsH", "BklUIEo6_r", "BJeblVM6tr", "H1geTaI6KS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Finally, about my rating of the paper, I have decided to keep it the same: weak accept. However, there is still some work to do on the paper, so I would not mind if the current iteration of the paper was rejected. \n\nAs reviewer #1 mentioned, the empirical evaluations in the main body of the paper are hard to rea...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "S1gcu3dnjH", "B1l5akg2jB", "H1geTaI6KS", "HJe7nfqXiH", "BklUIEo6_r", "BklUIEo6_r", "BJeblVM6tr", "BJeblVM6tr", "iclr_2020_r1l7E1HFPH", "iclr_2020_r1l7E1HFPH", "iclr_2020_r1l7E1HFPH" ]
iclr_2020_H1lXVJStwB
Dynamic Instance Hardness
We introduce dynamic instance hardness (DIH) to facilitate the training of machine learning models. DIH is a property of each training sample and is computed as the running mean of the sample's instantaneous hardness as measured over the training history. We use DIH to evaluate how well a model retains knowledge about each training sample over time. We find that for deep neural nets (DNNs), the DIH of a sample in relatively early training stages reflects its DIH in later stages and as a result, DIH can be effectively used to reduce the set of training samples in future epochs. Specifically, during each epoch, only samples with high DIH are trained (since they are historically hard) while samples with low DIH can be safely ignored. DIH is updated each epoch only for the selected samples, so it does not require additional computation. Hence, using DIH during training leads to an appreciable speedup. Also, since the model is focused on the historically more challenging samples, resultant models are more accurate. The above, when formulated as an algorithm, can be seen as a form of curriculum learning, so we call our framework DIH curriculum learning (or DIHCL). The advantages of DIHCL, compared to other curriculum learning approaches, are: (1) DIHCL does not require additional inference steps over the data not selected by DIHCL in each epoch, (2) the dynamic instance hardness, compared to static instance hardness (e.g., instantaneous loss), is more stable as it integrates information over the entire training history up to the present time. Making certain mathematical assumptions, we formulate the problem of DIHCL as finding a curriculum that maximizes a multi-set function f(⋅), and derive an approximation bound for a DIH-produced curriculum relative to the optimal curriculum. 
Empirically, DIHCL-trained DNNs significantly outperform random mini-batch SGD and other recently developed curriculum learning methods in terms of efficiency, early-stage convergence, and final performance, and this is shown in training several state-of-the-art DNNs on 11 modern datasets.
reject
All three reviewers, even after the rebuttal, agreed that the paper did not meet the bar for acceptance. A common complaint was that lack of clarity was a major problem. Unfortunately, the paper cannot be accepted in its current form. The authors are encouraged to improve the presentation of their approach and resubmit to a new venue.
val
[ "Bkx48v0Jqr", "S1xASJsjsS", "BkgYi09siB", "rJeU91sssS", "ryeC-rzhYr", "Syg2KlC0Fr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "*Revision after author response*\n\nI thank the authors for the comments on my questions. \n\nUnfortunately, I do not feel that these comments addressed my main concerns. For all my experimental analysis questions, the authors promised some analyses for future versions, but I was hoping to see at least a minor pre...
[ 3, -1, -1, -1, 1, 3 ]
[ 4, -1, -1, -1, 3, 3 ]
[ "iclr_2020_H1lXVJStwB", "Syg2KlC0Fr", "Bkx48v0Jqr", "ryeC-rzhYr", "iclr_2020_H1lXVJStwB", "iclr_2020_H1lXVJStwB" ]
iclr_2020_BJe4V1HFPr
Disentangling Style and Content in Anime Illustrations
Existing methods for AI-generated artworks still struggle with generating high-quality stylized content, where high-level semantics are preserved, or with separating fine-grained styles from various artists. We propose a novel Generative Adversarial Disentanglement Network which can disentangle two complementary factors of variation when only one of them is labelled in general, and fully decompose complex anime illustrations into style and content in particular. Training such a model is challenging, since given a style, various content data may exist but not the other way round. Our approach is divided into two stages, one that encodes an input image into a style-independent content representation, and one based on a dual-conditional generator. We demonstrate the ability to generate high-fidelity anime portraits with a fixed content and a large variety of styles from over a thousand artists, and vice versa, using a single end-to-end network, with applications in style transfer. We show this unique capability as well as superior output compared to the current state of the art.
reject
This paper proposes a two-stage adversarial training approach for learning a disentangled representation of style and content of anime images. Unlike the previous style transfer work, here style is defined as the identity of a particular anime artist, rather than a set of uninterpretable style features. This allows the trained network to generate new anime images which have a particular content and are drawn in the style of a particular artist. While the approach works well, the reviewers voiced concerns about the method (overly complicated and somewhat incremental) and the quality of the experimental section (lack of good baselines and quantitative comparisons at least in terms of the disentanglement quality). It was also mentioned that releasing the code and the dataset would strengthen the appeal of the paper. While the authors have addressed some of the reviewers’ concerns, unfortunately it was not enough to persuade the reviewers to change their marks. Hence, I have to recommend a rejection.
test
[ "rygSqFE2oH", "HylCthV3sH", "HJgsz9V2or", "rkgoormqiB", "Hkl2BrX9ir", "HygXXr75iH", "SkeldOzRtS", "r1g81xakcH", "rklNwrXSqB" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have updated our submission, mainly to address reviewer 2 and 3's concern about the lack of quantitative evaluations. In particular, we augmented the experiments on the NIST dataset in appendix B with evaluations of the effectiveness of the disentangling encoder, in the new section B.4, exploiting the fact that...
[ -1, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2020_BJe4V1HFPr", "rklNwrXSqB", "SkeldOzRtS", "SkeldOzRtS", "r1g81xakcH", "rklNwrXSqB", "iclr_2020_BJe4V1HFPr", "iclr_2020_BJe4V1HFPr", "iclr_2020_BJe4V1HFPr" ]
iclr_2020_BylB4kBtwB
Retrieving Signals in the Frequency Domain with Deep Complex Extractors
Recent advances have made it possible to create deep complex-valued neural networks. Despite this progress, the potential power of fully complex intermediate computations and representations has not yet been explored for many challenging learning problems. Building on recent advances, we propose a novel mechanism for extracting signals in the frequency domain. As a case study, we perform audio source separation in the Fourier domain. Our extraction mechanism can be regarded as a local ensembling method that combines a complex-valued convolutional version of Feature-Wise Linear Modulation (FiLM) and a signal averaging operation. We also introduce a new explicit amplitude- and phase-aware loss, which is scale and time invariant, taking into account the complex-valued components of the spectrogram. Using the Wall Street Journal dataset, we compare our phase-aware loss to several others that operate in the time and frequency domains, and demonstrate the effectiveness of our proposed signal extraction method and loss. When operating in the complex-valued frequency domain, our deep complex-valued network substantially outperforms its real-valued counterparts even with half the depth and a third of the parameters. Our proposed mechanism significantly improves the performance of deep complex-valued networks, and we demonstrate the usefulness of its regularizing effect.
reject
The paper discusses audio source separation with complex-valued NNs. The approach is sound and may advance an area of research, but the experimental section is very weak and needs to be improved to merit publication.
test
[ "SkeBp6aOoS", "S1g39b0OiH", "S1letkAuoH", "HyeTfJ0djB", "H1eiOTauiB", "r1lh86adoS", "BklD_3adoH", "HygJE9ZXYr", "B1gkXWp6tr", "rygaxccJcB" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the useful feedback and appreciate the encouraging comments.\n\nTo address quickly existing state of the art methods, we have added a Table 4 to our paper, summarizing models in the literature and including ConvTasNet.\n\nOne important clarification that Table 4 provides is that the vario...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 1, 1, 3 ]
[ "rygaxccJcB", "S1letkAuoH", "HyeTfJ0djB", "HygJE9ZXYr", "r1lh86adoS", "BklD_3adoH", "B1gkXWp6tr", "iclr_2020_BylB4kBtwB", "iclr_2020_BylB4kBtwB", "iclr_2020_BylB4kBtwB" ]
iclr_2020_Skl8EkSFDr
Self-Supervised GAN Compression
Deep learning's success has led to larger and larger models to handle more and more complex tasks; trained models can contain millions of parameters. These large models are compute- and memory-intensive, which makes it a challenge to deploy them with minimized latency, throughput, and storage requirements. Some model compression methods have been successfully applied to image classification and detection or language models, but there has been very little work on compressing generative adversarial networks (GANs) performing complex tasks. In this paper, we show that a standard model compression technique, weight pruning, cannot be applied to GANs using existing methods. We then develop a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator. We show that this framework maintains compelling performance up to high degrees of sparsity, generalizes well to new tasks and models, and enables meaningful comparisons between different pruning granularities.
reject
The paper develops a new method for pruning generators of GANs. It has received a mixed set of reviews. Basically, the reviewers agree that the problem is interesting and appreciate that the authors have tried some baseline approaches and verified/demonstrated that they do not work. Where the reviewers diverge is on whether the authors have been successful with the new method. In the opinion of the first reviewer, there is little value in achieving low levels (e.g. 50%) of fine-grained sparsity, while the authors have not managed to achieve good performance with filter-level sparsity (as evidenced by Figure 7, Table 3 as well as figures in the appendices). The authors admit that the sparsity levels achieved with their approach cannot be turned into speed improvement without future work. Furthermore, as pointed out by the first reviewer, the comparison with prior art, in particular with LIT method, which has been reported to successfully compress the same GAN, is missing and the results of LIT have been misrepresented. While the authors argue that their pruning is an "orthogonal technique", and can be applied on top of LIT, this is not verified in any way. In practice, combination of different compression techniques is known to be non-trivial, since they aim to explain the same types of redundancies. Overall, while this paper comes close, the problems highlighted by the first reviewer have not been resolved convincingly enough for acceptance.
train
[ "r1x6iL19iH", "SkeoHSycjH", "B1g3omy5jB", "BkxJ87yqjH", "SJx-uwtJjH", "Sklg5QfVFS", "HkxGS3W9FH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your feedback and comments - these suggestions will make our submission stronger. Some responses to particular points follow:\n\n>> ... why put a GAN generator on a mobile device?\n\nAny real-time service using GANs, on a mobile device or otherwise, can benefit from model compression. General example...
[ -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, 3, 4, 5 ]
[ "HkxGS3W9FH", "Sklg5QfVFS", "SJx-uwtJjH", "iclr_2020_Skl8EkSFDr", "iclr_2020_Skl8EkSFDr", "iclr_2020_Skl8EkSFDr", "iclr_2020_Skl8EkSFDr" ]
iclr_2020_S1eL4kBYwr
UNITER: Learning UNiversal Image-TExt Representations
Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodal inputs are jointly processed for visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design three pre-training tasks: Masked Language Modeling (MLM), Image-Text Matching (ITM), and Masked Region Modeling (MRM, with three variants). Different from concurrent work on multimodal pre-training that applies joint random masking to both modalities, we use Conditioned Masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). Comprehensive analysis shows that conditioned masking yields better performance than unconditioned masking. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks for UNITER. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks over nine datasets, including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR2.
reject
This submission proposes an approach to pre-train general-purpose image and text representations that can be effective on target tasks requiring embeddings for both modes. The authors propose several pre-training tasks beyond masked language modelling that are more suitable for the cross-modal context being addressed, and also investigate which dataset/pretraining task combinations are effective for given target tasks. All reviewers agree that the empirical results that were achieved were impressive. Shared points of concern were: - the novelty of the proposed pre-training schemes. - the lack of insight into the results that were obtained. These concerns were insufficiently addressed after the discussion period, particularly the limited novelty. Given the remaining concerns and the number of strong submissions to ICLR, this submission, while promising, does not meet the bar for acceptance.
train
[ "B1glRlZniH", "r1xcLWimsH", "HJlSBT5QjH", "BkllNzs7sr", "SyxP8v97sS", "B1xg8d57jS", "SJemXYqmiS", "rJl-cFcmsS", "Hyethc9QoS", "H1xtPjqmiS", "H1gyAjcmsr", "S1l5DIqXiB", "SygtFAYaKB", "S1e6rHd6YH", "BkeMz3WRFB" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for your reviews. We have updated the paper and the changes are in blue for easier reference. To summarize, we have added:\n 1) visualization of attention and qualitative examples;\n 2) additional analysis on conditional masking vs. joint random masking;\n 3) more recent SOTA on VCR and N...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2020_S1eL4kBYwr", "S1e6rHd6YH", "S1e6rHd6YH", "S1e6rHd6YH", "SygtFAYaKB", "SygtFAYaKB", "SygtFAYaKB", "SygtFAYaKB", "SygtFAYaKB", "SygtFAYaKB", "SygtFAYaKB", "BkeMz3WRFB", "iclr_2020_S1eL4kBYwr", "iclr_2020_S1eL4kBYwr", "iclr_2020_S1eL4kBYwr" ]
iclr_2020_HkewNJStDr
Efficient High-Dimensional Data Representation Learning via Semi-Stochastic Block Coordinate Descent Methods
With the increase of data volume and data dimension, sparse representation learning is attracting more and more attention. For high-dimensional data, randomized block coordinate descent methods perform well because they do not need to calculate the gradient across all dimensions. Existing hard thresholding algorithms evaluate gradients followed by a hard thresholding operation to update the model parameter, which leads to slow convergence. To address this issue, we propose a novel hard thresholding algorithm, called Semi-stochastic Block Coordinate Descent Hard Thresholding Pursuit (SBCD-HTP). Moreover, we present its sparse and asynchronous parallel variants. We theoretically analyze the convergence properties of our algorithms, which show that they have a significantly lower hard thresholding complexity than existing algorithms. Our empirical evaluations on real-world datasets and face recognition tasks demonstrate the superior performance of our algorithms for sparsity-constrained optimization problems.
reject
All the reviewers reached a consensus to reject the current submission. In addition, there are two assumptions in the proof which are never included in the theorem conditions or verified for typical cases. 1) Between Eq. (16) and (17), the authors assume the 'extended restricted strong convexity' given by the un-numbered equation. 2) In Eq. (25), the authors assume the existence of a \sigma making the inequality true. However, those assumptions are neither explicitly stated in the theorem conditions, nor verified for typical cases in applications, e.g. even the square or logistic loss. The authors need to address these assumptions explicitly rather than invoking them without justification.
train
[ "BJgRgt7j5r", "HyeEITxojr", "S1lJepgjsr", "HkeVs3gisB", "Hyg-tqxRtr", "BkeUyj9z5r" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "===== Update after author response\n\nThanks for the clarifications and edits in the paper.\n\nI recommend acceptance of the paper.\n\nOther comments:\nDefinition 1 in the updated version is still too vague (\"difference of what?\" -- function values? distance in norm between iterates?) -- this should be clarified...
[ 6, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, -1, 4, 3 ]
[ "iclr_2020_HkewNJStDr", "Hyg-tqxRtr", "BkeUyj9z5r", "BJgRgt7j5r", "iclr_2020_HkewNJStDr", "iclr_2020_HkewNJStDr" ]
iclr_2020_HkgYEyrFDr
Learning Good Policies By Learning Good Perceptual Models
Reinforcement learning (RL) has led to increasingly complex-looking behavior in recent years. However, such complexity can be misleading and can hide over-fitting. We find that visual representations may be a useful metric of complexity, and that they both correlate well with objective optimization and causally affect reward optimization. We then propose curious representation learning (CRL), which allows us to use better visual representation learning algorithms to correspondingly improve the visual representations in the policy through an intrinsic objective, both on simulated environments and in transfer to real images. Finally, we show that the better visual representations induced by CRL allow us to obtain better performance on Atari, without any reward, than other curiosity objectives.
reject
This paper investigates using "curiosity" to improve representation learning. This paper is not ready for publication. The main issue was that the reviewers found the paper did not support the claimed contributions in terms of (1) evaluating the new representations and the improvement due to the representation, and (2) the novelty of the method compared to the long literature in this area. In general the reviewers found the empirical evidence unconvincing and too many details missing. The results in this paper have many issues: claims of performance based on three runs; undefined error measures; bolding entries in tables which appear not significantly better, without explanation; unclear/informal meta-parameter tuning. Finally, there are some terminology issues in this paper. I suggest an excellent paper on the topic: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3858647/
train
[ "B1lPAuj2tS", "Hyg8m3oTtr", "SkepPxT0YB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents an empirical study of using error reduction as a curiosity measure. The authors consider an auto-encoder model, a colorization model and RND as intrinsic motivation signals. I find the write up very unclear and have trouble understanding what the claims are and how they are backed up. \n\nMajor...
[ 1, 3, 1 ]
[ 4, 3, 5 ]
[ "iclr_2020_HkgYEyrFDr", "iclr_2020_HkgYEyrFDr", "iclr_2020_HkgYEyrFDr" ]
iclr_2020_BkgF4kSFPB
Hallucinative Topological Memory for Zero-Shot Visual Planning
In visual planning (VP), an agent learns to plan goal-directed behavior from observations of a dynamical system obtained offline, e.g., images obtained from self-supervised robot interaction. VP algorithms essentially combine data-driven perception and planning, and are important for robotic manipulation and navigation domains, among others. A recent and promising approach to VP is the semi-parametric topological memory (SPTM) method, where image samples are treated as nodes in a graph, and the connectivity in the graph is learned using deep image classification. Thus, the learned graph represents the topological connectivity of the data, and planning can be performed using conventional graph search methods. However, training SPTM necessitates a suitable loss function for the connectivity classifier, which requires non-trivial manual tuning. More importantly, SPTM is constrained in its ability to generalize to changes in the domain, as its graph is constructed from direct observations and thus requires collecting new samples for planning. In this paper, we propose Hallucinative Topological Memory (HTM), which overcomes these shortcomings. In HTM, instead of training a discriminative classifier we train an energy function using contrastive predictive coding. In addition, we learn a conditional VAE model that generates samples given a context image of the domain, and use these hallucinated samples for building the connectivity graph, allowing for zero-shot generalization to domain changes. In simulated domains, HTM outperforms conventional SPTM and visual foresight methods in terms of both plan quality and success in long-horizon planning.
reject
The submission presents an approach to visual planning. The work builds on semi-parametric topological memory (SPTM) and introduces ideas that facilitate zero-shot generalization to new environments. The reviews are split. While the ideas are generally perceived as interesting, there are significant concerns about presentation and experimental evaluation. In particular, the work is evaluated in extremely simple environments and scenarios that do not match the experimental settings of other comparable works in this area. The paper was discussed and all reviewers expressed their views following the authors' responses and revision. In particular, R1 posted a detailed justification of their recommendation to reject the paper. The AC agrees that the paper is not ready for publication in a first-tier venue. The AC recommends that the authors seriously consider R1's recommendations.
train
[ "SJxsM052iB", "BklEl09hoB", "HkxNg4qnjH", "H1g3bMqnjB", "HJxse3m2ir", "SkePIyMquB", "Syx34y_sYB", "SkgfxqIAtB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Context image: The context can in principle be a scene image, camera angles, lighting variables, or any other observation that contains information about the configuration space in the domain. While our experiments are very simple, we found that even this setting of giving the context as the full map is *very chal...
[ -1, -1, -1, -1, -1, 1, 8, 6 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "BklEl09hoB", "SkePIyMquB", "iclr_2020_BkgF4kSFPB", "SkgfxqIAtB", "Syx34y_sYB", "iclr_2020_BkgF4kSFPB", "iclr_2020_BkgF4kSFPB", "iclr_2020_BkgF4kSFPB" ]
iclr_2020_rkesVkHtDr
Meta-Learning Runge-Kutta
Initial value problems, i.e. differential equations with specific initial conditions, represent a classic problem within the field of ordinary differential equations (ODEs). While the simplest types of ODEs may have closed-form solutions, most interesting cases typically rely on iterative schemes for numerical integration, such as the family of Runge-Kutta methods. These are, however, sensitive to the strategy by which the step size is adapted during integration, which has to be chosen by the experimenter. In this paper, we show how the design of a step size controller can be cast as a learning problem, allowing deep networks to learn to exploit structure in the initial value problem at hand in an automatic way. The key ingredients for the resulting Meta-Learning Runge-Kutta (MLRK) are the development of a good performance measure and the identification of suitable input features. Traditional approaches use the local error estimates as input to the controller. However, by studying the characteristics of the local error function we show that including the partial derivatives of the initial value problem is favorable. Our experiments demonstrate considerable benefits over traditional approaches. In particular, MLRK is able to mitigate sudden spikes in the local error function by a faster adaptation of the step size. More importantly, the additional information in the form of partial derivatives and function values leads to a substantial improvement in performance. The source code can be found at https://www.dropbox.com/sh/rkctdfhkosywnnx/AABKadysCR8-aHW_0kb6vCtSa?dl=0
reject
Summary: This paper casts the problem of step-size tuning in the Runge-Kutta method as a meta-learning problem. The paper gives a review of the existing approaches to step size control in the RK method. Drawing on these approaches, the paper reasons about appropriate features and loss functions to use in the meta-learning update. The paper shows that the proposed approach is able to generalize well enough to obtain better performance than a baseline. The paper lacked advocates for its merits, and needs better comparisons with other baselines before it is ready to be published.
train
[ "BJeZoWc19r", "Syl61bA3KB", "ryxVF3rhoB", "HklZ0EFFjB", "BkeMBr8usS", "HylH-GnVor", "H1xtwynNsS", "Hyll7CsNsB", "Hke_qEKjtB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes to learn the step size for a Runge-Kutta numerical integrator for solving ordinary differential equations initial value problems. The authors frame the stepsize control problem as a learning problem, based on different performance measures, on ODE dependent inputs and on a LSTM for predicting th...
[ 3, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ 3, 1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_rkesVkHtDr", "iclr_2020_rkesVkHtDr", "HklZ0EFFjB", "BkeMBr8usS", "HylH-GnVor", "Hke_qEKjtB", "Syl61bA3KB", "BJeZoWc19r", "iclr_2020_rkesVkHtDr" ]
iclr_2020_Skx24yHFDr
Discovering Topics With Neural Topic Models Built From PLSA Loss
In this paper we present a model for unsupervised topic discovery in text corpora. The proposed model uses document, word, and topic lookup-table embeddings as neural network model parameters to build probabilities of words given topics, and probabilities of topics given documents. These probabilities are used to recover, by marginalization, the probabilities of words given documents. For very large corpora, where the number of documents can be in the order of billions, using a neural auto-encoder based document embedding is more scalable than using a lookup table embedding as classically done. We thus extend the lookup-based document embedding model to a continuous auto-encoder based model. Our models are trained using probabilistic latent semantic analysis (PLSA) assumptions. We evaluated our models on six datasets with a rich variety of contents. The conducted experiments demonstrate that the proposed neural topic models are very effective in capturing relevant topics. Furthermore, considering the perplexity metric, the conducted evaluation benchmarks show that our topic models outperform the latent Dirichlet allocation (LDA) model which is classically used to address topic discovery tasks.
reject
This paper presents a neural topic model with the goal of improving topic discovery with a PLSA loss. Reviewers point out major limitations including the following: 1) Empirical comparison is done only with LDA when there are many newer models that perform much better. 2) Related work section is incomplete, especially for the newer models. 3) Writing is unclear in many parts of the paper. For these reasons, I recommend that the authors make major improvements to the paper before resubmitting to another venue.
val
[ "rJlIw4RqKB", "S1lgvje0Yr", "rJg_OocPcS", "H1xBYH9OdH", "r1xnaL5yur", "B1ebr8W2DH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "First, some minor issues. I didn't understand equation (3). It seems to be a variant of equation (4), and seems to be in disagreement with equation (6). Might be better if the equation was just dropped. For equation (9), you should have brackets \"()\" around the argument to the exp.\n\nSecond, in terms of com...
[ 1, 1, 3, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2020_Skx24yHFDr", "iclr_2020_Skx24yHFDr", "iclr_2020_Skx24yHFDr", "B1ebr8W2DH", "B1ebr8W2DH", "iclr_2020_Skx24yHFDr" ]