paper_id
stringlengths
19
21
paper_title
stringlengths
8
170
paper_abstract
stringlengths
8
5.01k
paper_acceptance
stringclasses
18 values
meta_review
stringlengths
29
10k
label
stringclasses
3 values
review_ids
list
review_writers
list
review_contents
list
review_ratings
list
review_confidences
list
review_reply_tos
list
iclr_2020_BygXFkSYDH
Target-Embedding Autoencoders for Supervised Representation Learning
Autoencoder-based learning has emerged as a staple for disciplining representations in unsupervised and semi-supervised settings. This paper analyzes a framework for improving generalization in a purely supervised setting, where the target space is high-dimensional. We motivate and formalize the general framework of target-embedding autoencoders (TEA) for supervised prediction, learning intermediate latent representations jointly optimized to be both predictable from features as well as predictive of targets---encoding the prior that variations in targets are driven by a compact set of underlying factors. As our theoretical contribution, we provide a guarantee of generalization for linear TEAs by demonstrating uniform stability, interpreting the benefit of the auxiliary reconstruction task as a form of regularization. As our empirical contribution, we extend validation of this approach beyond existing static classification applications to multivariate sequence forecasting, verifying their advantage on both linear and nonlinear recurrent architectures---thereby underscoring the further generality of this framework beyond feedforward instantiations.
accept-talk
The paper presents a general view of supervised learning models that are jointly trained with a model for embedding the labels (targets), which the authors dub target-embedding autoencoders (TEAs). Similar models have been studied before, but this paper unifies the idea and studies more carefully various components of it. It provides a proof for the specific case of linear models and a set of experiments on disease trajectory prediction tasks. The reviewer concerns were addressed well by the authors and I believe the paper is now strong. It would be even stronger if it included more tasks (and in particular some "typical" tasks that more of the community is focusing on), and the theoretical part is to my mind not a major contribution, or at least not as large as the paper implies, because it analyzes a much simpler model than anyone is likely to use TEAs for.
train
[ "r1x_NdV6FB", "H1gkm4ET5H", "ryx7d3w75H", "rJxQBqE2jB", "rylTtJ4IiS", "r1lGkcPqir", "Byl-KGhtir", "BJx_czntor", "HkeuGjX8jr", "H1l7eAX8jH", "rkxmqp7Lsr", "SJe1uTQUir", "Bkg1vJE8iS", "rkx24TQIjH", "SJlD_kNLsH", "ryeYS1NUiS", "SJlyEyNUsr", "rkeoH0QLiH", "BkxL-a7Lor", "rkxHb3XLoS"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "officia...
[ "This work introduces the idea of target embedding autoencoders for supervised prediction, designed to learn intermediate latent representations jointly optimized to be both predictable from features and predictive of targets. This is meant to help with generalization and has certain theoretical guarantees. \n\nIt ...
[ 8, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, 4, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_BygXFkSYDH", "iclr_2020_BygXFkSYDH", "iclr_2020_BygXFkSYDH", "H1gkm4ET5H", "H1gkm4ET5H", "iclr_2020_BygXFkSYDH", "iclr_2020_BygXFkSYDH", "Byl-KGhtir", "r1x_NdV6FB", "ryx7d3w75H", "ryx7d3w75H", "ryx7d3w75H", "H1gkm4ET5H", "ryx7d3w75H", "H1gkm4ET5H", "H1gkm4ET5H", "H1gkm4ET5...
iclr_2020_rkgNKkHtvB
Reformer: The Efficient Transformer
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L²) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
accept-talk
Transformer models have proven to be quite successful when applied to a variety of ML tasks such as NLP. However, the computational and memory requirements can at times be prohibitive, such as when dealing with long sequences. This paper proposes locality-sensitive hashing to reduce the sequence-length complexity, as well as reversible residual layers to reduce storage requirements. Experimental results confirm that the performance of Transformer models can be preserved even with these new efficiencies in place, and hence, this paper will likely have significant impact within the community. Some relatively minor points notwithstanding, all reviewers voted for acceptance which is my recommendation as well. Note that this paper was also vetted by several detailed external commenters. In all cases the authors provided reasonable feedback, and the final revision of the work will surely be even stronger.
train
[ "H1e_gXhRYS", "SkxpfcEjjr", "H1g3oF4sjS", "SJxEEtVosB", "r1lXpbp3YS", "Syg6AbTp5H", "S1eMPDpGuS", "HJlyuh5zuB", "rkekrgvMOB", "SygGB-bzur", "HylsnfKbOB", "HyxXfav-_H", "Byx13YvZOB", "Hygh8BWbOS", "r1xWXg6ldH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "public", "author", "public", "author", "public" ]
[ "This paper presents a method to make Transformer models more efficient in time and memory. The proposed approach consists mainly of three main operations: \n- Using reversible layers (inspired from RevNets) in order to prevent the need of storing the activations of all layers to be reused for back propagation; \n-...
[ 8, -1, -1, -1, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_rkgNKkHtvB", "r1lXpbp3YS", "H1e_gXhRYS", "Syg6AbTp5H", "iclr_2020_rkgNKkHtvB", "iclr_2020_rkgNKkHtvB", "SygGB-bzur", "rkekrgvMOB", "HylsnfKbOB", "iclr_2020_rkgNKkHtvB", "HyxXfav-_H", "Byx13YvZOB", "iclr_2020_rkgNKkHtvB", "r1xWXg6ldH", "iclr_2020_rkgNKkHtvB" ]
iclr_2020_rklr9kHFDB
Rotation-invariant clustering of neuronal responses in primary visual cortex
Similar to a convolutional neural network (CNN), the mammalian retina encodes visual information into several dozen nonlinear feature maps, each formed by one ganglion cell type that tiles the visual space in an approximately shift-equivariant manner. Whether such organization into distinct cell types is maintained at the level of cortical image processing is an open question. Predictive models building upon convolutional features have been shown to provide state-of-the-art performance, and have recently been extended to include rotation equivariance in order to account for the orientation selectivity of V1 neurons. However, generally no direct correspondence between CNN feature maps and groups of individual neurons emerges in these models, thus rendering it an open question whether V1 neurons form distinct functional clusters. Here we build upon the rotation-equivariant representation of a CNN-based V1 model and propose a methodology for clustering the representations of neurons in this model to find functional cell types independent of preferred orientations of the neurons. We apply this method to a dataset of 6000 neurons and visualize the preferred stimuli of the resulting clusters. Our results highlight the range of non-linear computations in mouse V1.
accept-talk
This paper is enthusiastically supported by all three reviewers. Thus an accept is recommended.
train
[ "r1gCIKb0KB", "SJej92NvsH", "B1lNHjJJ5B", "B1x7Z1Gwsr", "HkliVN_UiH", "ByggA5pEiS", "B1lA8c6VjH", "H1xjrBTViS", "rkxj6UTNjr", "S1eKwBtIcr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors present a rotation-invariant representation of a CNN modeling the V1 neurons and a pipeline to cluster these neurons to find cell types that are rotation-invariant. Experimental validation is performed on a 6K neuron dataset with promising results. \nThe paper is well postulated.\n\nBelow are comments ...
[ 8, -1, 8, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_rklr9kHFDB", "B1x7Z1Gwsr", "iclr_2020_rklr9kHFDB", "HkliVN_UiH", "H1xjrBTViS", "r1gCIKb0KB", "r1gCIKb0KB", "B1lNHjJJ5B", "S1eKwBtIcr", "iclr_2020_rklr9kHFDB" ]
iclr_2020_S1g2skStPB
Causal Discovery with Reinforcement Learning
Discovering causal structure among a set of variables is a fundamental problem in many empirical sciences. Traditional score-based causal discovery methods rely on various local heuristics to search for a Directed Acyclic Graph (DAG) according to a predefined score function. While these methods, e.g., greedy equivalence search, may have attractive results with infinite samples and certain model assumptions, they are less satisfactory in practice due to finite data and possible violation of assumptions. Motivated by recent advances in neural combinatorial optimization, we propose to use Reinforcement Learning (RL) to search for the DAG with the best scoring. Our encoder-decoder model takes observable data as input and generates graph adjacency matrices that are used to compute rewards. The reward incorporates both the predefined score function and two penalty terms for enforcing acyclicity. In contrast with typical RL applications where the goal is to learn a policy, we use RL as a search strategy and our final output would be the graph, among all graphs generated during training, that achieves the best reward. We conduct experiments on both synthetic and real datasets, and show that the proposed approach not only has an improved search ability but also allows for a flexible score function under the acyclicity constraint.
accept-talk
This paper proposes an RL-based structure search method for causal discovery. The reviewers and AC think that the idea of applying reinforcement learning to causal structure discovery is novel and intriguing. While there were initially some concerns regarding presentation of the results, these have been taken care of during the discussion period. The reviewers agree that this is a very good submission, which merits acceptance to ICLR-2020.
train
[ "HklEzhj6FB", "H1eabJol9B", "SJguv9wTtB", "BklImw8Msr", "SklquiIIiH", "ryesqDUfsH", "HyesVTrGoS", "rkeDLyLGor", "HJllWOIMir" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Update: after the revision, I have decided to increase my score to 8.\n\nOriginal comments:\n\nIn this paper, the authors proposed a new reinforcement learning based algorithm to learn causal graphical models. Simulations on real and synthetic data also shows promise.\n\nPros\n\n1. It's great to see the authors ha...
[ 8, 8, 8, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_S1g2skStPB", "iclr_2020_S1g2skStPB", "iclr_2020_S1g2skStPB", "H1eabJol9B", "iclr_2020_S1g2skStPB", "H1eabJol9B", "HklEzhj6FB", "SJguv9wTtB", "H1eabJol9B" ]
iclr_2020_rkg6sJHYDr
Intrinsically Motivated Discovery of Diverse Patterns in Self-Organizing Systems
In many complex dynamical systems, artificial or natural, one can observe self-organization of patterns emerging from local rules. Cellular automata, like the Game of Life (GOL), have been widely used as abstract models enabling the study of various aspects of self-organization and morphogenesis, such as the emergence of spatially localized patterns. However, findings of self-organized patterns in such models have so far relied on manual tuning of parameters and initial states, and on the human eye to identify interesting patterns. In this paper, we formulate the problem of automated discovery of diverse self-organized patterns in such high-dimensional complex dynamical systems, as well as a framework for experimentation and evaluation. Using a continuous GOL as a testbed, we show that recent intrinsically-motivated machine learning algorithms (POP-IMGEPs), initially developed for learning of inverse models in robotics, can be transposed and used in this novel application area. These algorithms combine intrinsically-motivated goal exploration and unsupervised learning of goal space representations. Goal space representations describe the interesting features of patterns for which diverse variations should be discovered. In particular, we compare various approaches to define and learn goal space representations from the perspective of discovering diverse spatially localized patterns. Moreover, we introduce an extension of a state-of-the-art POP-IMGEP algorithm which incrementally learns a goal representation using a deep auto-encoder, and the use of CPPN primitives for generating initialization parameters. We show that it is more efficient than several baselines and equally efficient as a system pre-trained on a hand-made database of patterns identified by human experts.
accept-talk
The authors introduce a framework for automatically detecting diverse, self-organized patterns in a continuous Game of Life environment, using compositional pattern producing networks (CPPNs) and population-based Intrinsically Motivated Goal Exploration Processes (POP-IMGEPs) to find the distribution of system parameters that produce diverse, interesting goal patterns. This work is really well-presented, both in the paper and on the associated website, which is interactive and features source code and demos. Reviewers agree that it’s well-written and seems technically sound. I also agree with R2 that this is an under-explored area and thus would add to the diversity of the program. In terms of weaknesses, reviewers noted that it’s quite long, with a lengthy appendix, and could be a bit confusing in areas. Authors were responsive to this in the rebuttal and have trimmed it, although it’s still 29 pages. My assessment is well-aligned with those of R2 and thus I’m recommending accept. In the rebuttal, the authors mentioned several interesting possible applications for this work; it’d be great if these could be included in the discussion. Given the impressive presentation and amazing visuals, I think it could make for a fun talk.
train
[ "rJgB3xcBjB", "S1gprgqSor", "rkeklxqBir", "BkgcIJ9HiB", "Hye27k9HsB", "ryg4ZdOhtH", "BJxu9RJAYH", "Hyez9WiNcr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank Reviewer 2 for his time to provide us feedback, and for providing encouraging comments. \n\nFirst, as also stated in our response to R1, we agree (and apologize) that the Appendix in our initial submission was too long, not sufficiently well structured, and mixed materials that usefully comp...
[ -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "ryg4ZdOhtH", "rkeklxqBir", "BJxu9RJAYH", "Hye27k9HsB", "Hyez9WiNcr", "iclr_2020_rkg6sJHYDr", "iclr_2020_rkg6sJHYDr", "iclr_2020_rkg6sJHYDr" ]
iclr_2020_S1xWh1rYwB
Restricting the Flow: Information Bottlenecks for Attribution
Attribution methods provide insights into the decision-making of machine learning models like artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. In this work, we adopt the information bottleneck concept for attribution. By adding noise to intermediate feature maps, we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our methods outperform all baselines in five out of six settings. The method’s information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision.
accept-talk
All three reviewers strongly recommend accepting this paper. It is clear, novel, and a significant contribution to the field. Please take their suggestions into account in a camera ready version. Thanks!
train
[ "S1xbOKTJ9S", "BJeXnpJAKB", "r1xdC7Dnjr", "H1eii7P2jr", "ryx1P7wnsB", "SyxnBRLhoB", "BJxHG52nFB", "BJgJBmM7YH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author" ]
[ "\nSummary\n---\n\n(motivation)\nLots of methods produce attribution maps (heat maps, saliency maps, visual explantions) that aim to highlight input regions with respect to a given CNN.\nThese methods produce scores that highlight regions that are in a vague sense \"important.\"\nWhile that's useful (relative impor...
[ 8, 8, -1, -1, -1, -1, 8, -1 ]
[ 4, 3, -1, -1, -1, -1, 4, -1 ]
[ "iclr_2020_S1xWh1rYwB", "iclr_2020_S1xWh1rYwB", "BJxHG52nFB", "BJeXnpJAKB", "S1xbOKTJ9S", "iclr_2020_S1xWh1rYwB", "iclr_2020_S1xWh1rYwB", "iclr_2020_S1xWh1rYwB" ]
iclr_2020_BJgNJgSFPS
Building Deep Equivariant Capsule Networks
Capsule networks are constrained by the parameter-expensive nature of their layers, and the general lack of provable equivariance guarantees. We present a variation of capsule networks that aims to remedy this. We identify that learning all pair-wise part-whole relationships between capsules of successive layers is inefficient. Further, we also realise that the choice of prediction networks and the routing mechanism are both key to equivariance. Based on these, we propose an alternative framework for capsule networks that learns to projectively encode the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer. This is done using a trainable, equivariant function defined over a grid of group-transformations. Thus, the prediction-phase of routing involves projection into the SOV of a deeper capsule using the corresponding function. As a specific instantiation of this idea, and also in order to reap the benefits of increased parameter-sharing, we use type-homogeneous group-equivariant convolutions of shallower capsules in this phase. We also introduce an equivariant routing mechanism based on degree-centrality. We show that this particular instance of our general model is equivariant, and hence preserves the compositional representation of an input under transformations. We conduct several experiments on standard object-classification datasets that showcase the increased transformation-robustness, as well as general performance, of our model to several capsule baselines.
accept-talk
This paper combine recent ideas from capsule networks and group-equivariant neural networks to form equivariant capsules, which is a great idea. The exposition is clear and the experiments provide a very interesting analysis and results. I believe this work will be very well received by the ICLR community.
train
[ "HJx4P6rAFS", "H1lVyiP3sH", "ByxKoKw3jS", "Bygaqb7BjH", "HyeH-bq9FB", "HJe6fI0tsr", "HJxncZrvoH", "Skgl6ZBPjB", "HJxO4WBwiH", "rylTAlQHsB", "HygxpbmHjr", "HkxsV-7rjH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This paper combines CapsuleNetworks and GCNNs with a novel formulation. First they modify the CapsNet formulation by replacing the linear transformation between two capsule layers with a group convolution. Second they share the group equivarient convolution filters per all capsules of the lower layer. Third, they...
[ 8, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_BJgNJgSFPS", "HJx4P6rAFS", "HyeH-bq9FB", "HkxsV-7rjH", "iclr_2020_BJgNJgSFPS", "iclr_2020_BJgNJgSFPS", "HJxO4WBwiH", "HJxncZrvoH", "HyeH-bq9FB", "HJx4P6rAFS", "Bygaqb7BjH", "rylTAlQHsB" ]
iclr_2020_Bkl5kxrKDr
A Generalized Training Approach for Multiagent Learning
This paper investigates a population-based training regime based on game-theoretic principles called Policy-Space Response Oracles (PSRO). PSRO is general in the sense that it (1) encompasses well-known algorithms such as fictitious play and double oracle as special cases, and (2) in principle applies to general-sum, many-player games. Despite this, prior studies of PSRO have been focused on two-player zero-sum games, a regime wherein Nash equilibria are tractably computable. In moving from two-player zero-sum games to more general settings, computation of Nash equilibria quickly becomes infeasible. Here, we extend the theoretical underpinnings of PSRO by considering an alternative solution concept, α-Rank, which is unique (thus faces no equilibrium selection issues, unlike Nash) and applies readily to general-sum, many-player settings. We establish convergence guarantees in several game classes, and identify links between Nash equilibria and α-Rank. We demonstrate the competitive performance of α-Rank-based PSRO against an exact Nash solver-based PSRO in 2-player Kuhn and Leduc Poker. We then go beyond the reach of prior PSRO applications by considering 3- to 5-player poker games, yielding instances where α-Rank achieves faster convergence than approximate Nash solvers, thus establishing it as a favorable general games solver. We also carry out an initial empirical validation in MuJoCo soccer, illustrating the feasibility of the proposed approach in another complex domain.
accept-talk
This paper analyzes and extends learning methods based on Policy-Space Response Oracles (PSRO) through the application of alpha-rank. In doing so, the paper explores connections with Nash equilibria, establishes convergence guarantees in multiple settings, and presents promising empirical results on (among other things) 3-to-5 player poker games. Although this paper originally received mixed scores, after the rebuttal period all reviewers converged to a consensus. A revised version also includes new experiments from the MuJoCo soccer domain, and new poker results as well. Overall, this paper provides a nice balance of theoretical support and practical relevance that should be of high impact to the RL community.
train
[ "B1lMGBlhtH", "r1xBh9CaYS", "S1eXzEgTtS", "H1g_mNc2iB", "S1xF279njH", "Syl9uX92jB", "r1xXx753jS", "SJxAQ3QqiB", "BkeT857cir", "SkgOTU3Yjr", "BkgXrUhtsr", "rkx3X82tjr", "Byxvy8hKjS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper studies α-Rank, a scalable alternative to Nash equilibrium, across a number of areas. Specifically the paper establishes connections between Nash and α-Rank in specific instances, presents a novel construction of best response that guarantees convergence to the α-Rank in several games, and demonstrates e...
[ 8, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_Bkl5kxrKDr", "iclr_2020_Bkl5kxrKDr", "iclr_2020_Bkl5kxrKDr", "B1lMGBlhtH", "BkeT857cir", "SJxAQ3QqiB", "r1xBh9CaYS", "BkgXrUhtsr", "BkgXrUhtsr", "B1lMGBlhtH", "rkx3X82tjr", "S1eXzEgTtS", "r1xBh9CaYS" ]
iclr_2020_r1gfQgSFDr
High Fidelity Speech Synthesis with Adversarial Networks
Generative adversarial networks have seen rapid development in recent years and have led to remarkable improvements in generative modelling of images. However, their application in the audio domain has received limited attention, and autoregressive models, such as WaveNet, remain the state of the art in generative modelling of audio signals such as human speech. To address this paucity, we introduce GAN-TTS, a Generative Adversarial Network for Text-to-Speech. Our architecture is composed of a conditional feed-forward generator producing raw speech audio, and an ensemble of discriminators which operate on random windows of different sizes. The discriminators analyse the audio both in terms of general realism, as well as how well the audio corresponds to the utterance that should be pronounced. To measure the performance of GAN-TTS, we employ both subjective human evaluation (MOS - Mean Opinion Score), as well as novel quantitative metrics (Fréchet DeepSpeech Distance and Kernel DeepSpeech Distance), which we find to be well correlated with MOS. We show that GAN-TTS is capable of generating high-fidelity speech with naturalness comparable to the state-of-the-art models, and unlike autoregressive models, it is highly parallelisable thanks to an efficient feed-forward generator. Listen to GAN-TTS reading this abstract at https://storage.googleapis.com/deepmind-media/research/abstract.wav
accept-talk
The authors design a GAN-based text-to-speech synthesis model that performs competitively with state-of-the-art synthesizers. The reviewers and I agree that this appears to be the first really successful effort at GAN-based synthesis. Additional positives are that the model is designed to be highly parallelisable, and that the authors also propose several automatic measures of performance in addition to reporting human mean opinion scores. The automatic measures correlate well (though far from perfectly) with human judgments, and in any case are a nice contribution to the area of evaluation of generative models. It would be even more convincing if the authors presented human A/B forced-choice test results (in addition to the mean opinion scores), which are often included in speech synthesis evaluation, but this is a minor quibble.
train
[ "rklAPQJOKS", "rkg0V5P9iB", "r1gpl85tjH", "HylKa4cYjH", "SJglWB9FoS", "S1gKvNctjS", "BJlJ-YsaKS", "Hyg3Zr60FS", "Hygn03xGFB", "rkxBtBvCuB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "I want thank the authors for solving this long-standing GAN challenge in raw waveform synthesis. With all due respect, previous GAN trials for audio synthesis are inspiring, but their audio qualities are far away from the state-of-the-art results. Although the speech fidelity of GAN-TTS is still worse than WaveNet...
[ 8, -1, -1, -1, -1, -1, 6, 8, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, 4, 3, -1, -1 ]
[ "iclr_2020_r1gfQgSFDr", "iclr_2020_r1gfQgSFDr", "rklAPQJOKS", "Hyg3Zr60FS", "BJlJ-YsaKS", "iclr_2020_r1gfQgSFDr", "iclr_2020_r1gfQgSFDr", "iclr_2020_r1gfQgSFDr", "rkxBtBvCuB", "iclr_2020_r1gfQgSFDr" ]
iclr_2020_rkgvXlrKwH
SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference
We present a modern scalable reinforcement learning agent called SEED (Scalable, Efficient Deep-RL). By effectively utilizing modern accelerators, we show that it is not only possible to train on millions of frames per second but also to lower the cost of experiments compared to current methods. We achieve this with a simple architecture that features centralized inference and an optimized communication layer. SEED adopts two state-of-the-art distributed algorithms, IMPALA/V-trace (policy gradients) and R2D2 (Q-learning), and is evaluated on Atari-57, DeepMind Lab and Google Research Football. We improve the state of the art on Football and are able to reach state of the art on Atari-57 twice as fast in wall-time. For the scenarios we consider, a 40% to 80% cost reduction for running experiments is achieved. The implementation along with experiments is open-sourced so results can be reproduced and novel ideas tried out.
accept-talk
The paper presents a framework for scalable Deep-RL on very large-scale architectures, which addresses several problems in multi-machine training of such systems with many actors and learners running. Large-scale experiments and improvements over IMPALA are presented, leading to new SOTA results. The reviewers are very positive about this work, and I think this is an important contribution to the overall learning / RL community.
train
[ "B1lAv2N2jr", "rJlXiabPsS", "Skxw_a-vjr", "ryeuWTbPsH", "S1g1dnlAYS", "S1lmYz5Ycr", "Byg1qYe5qS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank again the reviewer for their positive comments.\n\nWe have updated the paper with an \"apple-to-apple\" comparison by running both agents on an Nvidia P100 GPU. See table 1 for update figures, as well as additional analysis in section 4.1.2 and additional cost comparison in section A.6.\n", "We thank th...
[ -1, -1, -1, -1, 8, 6, 8 ]
[ -1, -1, -1, -1, 3, 1, 3 ]
[ "ryeuWTbPsH", "S1g1dnlAYS", "S1lmYz5Ycr", "Byg1qYe5qS", "iclr_2020_rkgvXlrKwH", "iclr_2020_rkgvXlrKwH", "iclr_2020_rkgvXlrKwH" ]
iclr_2020_rkeiQlBFPB
Meta-Learning with Warped Gradient Descent
Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule. Both of these approaches pose challenges. On one hand, directly producing an update forgoes a useful inductive bias and can easily lead to non-converging behaviour. On the other hand, approaches that try to control a gradient-based update rule typically resort to computing gradients through the learning process to obtain their meta-gradients, leading to methods that can not scale beyond few-shot task adaptation. In this work, we propose Warped Gradient Descent (WarpGrad), a method that intersects these approaches to mitigate their limitations. WarpGrad meta-learns an efficiently parameterised preconditioning matrix that facilitates gradient descent across the task distribution. Preconditioning arises by interleaving non-linear layers, referred to as warp-layers, between the layers of a task-learner. Warp-layers are meta-learned without backpropagating through the task training process in a manner similar to methods that learn to directly produce updates. WarpGrad is computationally efficient, easy to implement, and can scale to arbitrarily large meta-learning problems. We provide a geometrical interpretation of the approach and evaluate its effectiveness in a variety of settings, including few-shot, standard supervised, continual and reinforcement learning.
accept-talk
A strong paper reporting improved approaches to meta-learning.
train
[ "B1eAh22icH", "SylLDPVXiS", "HkecPLN7iB", "SJgNWQEXjB", "HkgHvrYptS", "r1g_Ddk0FS", "BkxBqe7GFB", "B1lvHK3Adr", "r1g3apzRdr", "rJx5cqjaOH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "Summary:\nThe current paper deals with meta-learning and essentially proposes a generalization of MAML (a popular gradient-based meta-learning algorithm) that mostly builds upon two main recent advances in meta-learning: 1) an architectural one (see e.g. T-Nets), which consists in optimizing the parameters of addi...
[ 8, -1, -1, -1, 8, 8, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, 3, 3, -1, -1, -1, -1 ]
[ "iclr_2020_rkeiQlBFPB", "r1g_Ddk0FS", "B1eAh22icH", "HkgHvrYptS", "iclr_2020_rkeiQlBFPB", "iclr_2020_rkeiQlBFPB", "B1lvHK3Adr", "r1g3apzRdr", "rJx5cqjaOH", "iclr_2020_rkeiQlBFPB" ]
iclr_2020_Skey4eBYPS
Convolutional Conditional Neural Processes
We introduce the Convolutional Conditional Neural Process (ConvCNP), a new member of the Neural Process family that models translation equivariance in the data. Translation equivariance is an important inductive bias for many learning problems including time series modelling, spatial data, and images. The model embeds data sets into an infinite-dimensional function space, as opposed to finite-dimensional vector spaces. To formalize this notion, we extend the theory of neural representations of sets to include functional representations, and demonstrate that any translation-equivariant embedding can be represented using a convolutional deep-set. We evaluate ConvCNPs in several settings, demonstrating that they achieve state-of-the-art performance compared to existing NPs. We demonstrate that building in translation equivariance enables zero-shot generalization to challenging, out-of-domain tasks.
accept-talk
This paper presents the Convolutional Conditional Neural Process (ConvCNP), a new member of the neural process family that models translation equivariance. Current models must learn translation equivariance from the data, and the authors show that ConvCNP can build this in as part of the model, which is much more generalisable and efficient. They evaluate the ConvCNP on several benchmarks, including an astronomical time-series modelling experiment, a sim2real experiment, and several image completion experiments, and show excellent results. The authors wrote extensive responses to the reviewers, uploading a revised version of the paper, and there was some further discussion. This is a strong paper worthy of inclusion in ICLR and could have a large impact on many fields in ML/AI.
train
[ "BJeBl3sioB", "BkxWI4ejoS", "r1eRXZGDjS", "HJx52ezvoS", "BJgS5eMDiB", "SygYz1Gwir", "BkxdHJfvoB", "r1erPYdXFH", "SkeY6z_6tH", "S1eZ24ug9r" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I have read the rebuttal from the authors and am satisfied with their answers! I will maintain my initial assessment.", "\nWe thank the reviewers for their detailed reviews, and many helpful comments. We have now uploaded a revised version of the manuscript, reflecting the suggestions. The main revisions are sum...
[ -1, -1, -1, -1, -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 1, 1 ]
[ "r1eRXZGDjS", "iclr_2020_Skey4eBYPS", "SkeY6z_6tH", "BJgS5eMDiB", "r1erPYdXFH", "S1eZ24ug9r", "SygYz1Gwir", "iclr_2020_Skey4eBYPS", "iclr_2020_Skey4eBYPS", "iclr_2020_Skey4eBYPS" ]
iclr_2020_SJeLIgBKPS
Gradient Descent Maximizes the Margin of Homogeneous Neural Networks
In this paper, we study the implicit regularization of the gradient descent algorithm in homogeneous neural networks, including fully-connected and convolutional neural networks with ReLU or LeakyReLU activations. In particular, we study the gradient descent or gradient flow (i.e., gradient descent with infinitesimal step size) optimizing the logistic loss or cross-entropy loss of any homogeneous model (possibly non-smooth), and show that if the training loss decreases below a certain threshold, then we can define a smoothed version of the normalized margin which increases over time. We also formulate a natural constrained optimization problem related to margin maximization, and prove that both the normalized margin and its smoothed version converge to the objective value at a KKT point of the optimization problem. Our results generalize the previous results for logistic regression with one-layer or multi-layer linear networks, and provide more quantitative convergence results with weaker assumptions than previous results for homogeneous smooth neural networks. We conduct several experiments to justify our theoretical finding on MNIST and CIFAR-10 datasets. Finally, as margin is closely related to robustness, we discuss potential benefits of training longer for improving the robustness of the model.
accept-talk
This paper studies the implicit regularization of gradient descent in homogeneous neural networks and shows that when the training loss falls below a certain threshold, a smoothed version of the normalized margin increases over time. This study generalizes some of the earlier related works by relying on weaker assumptions. Experiments on MNIST and CIFAR-10 are provided to back up the theoretical findings of the paper. R2 had some concern about one of the assumptions in this work (A4). While the authors admitted that (A4) may not hold for all neural networks and all datasets, they stressed that this assumption is reasonable when the network is overparameterized and can perfectly fit the training data. Overall, all reviewers are very positive about this submission and find it a valuable step toward understanding implicit regularization.
train
[ "rJegO1tfKB", "BylrCxVAFB", "ByeXHx5KoS", "BklZZeqtjB", "BygiThKFsr", "SJgkPWhoYH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper studies the implicit regularization phenomenon. More precisely, given separable data the authors ask whether homogenous functions (including neural networks) trained by gradient flow/descent converge to the max-margin solution. The authors show that the limit points of gradient descent are KKT points o...
[ 6, 8, -1, -1, -1, 8 ]
[ 1, 5, -1, -1, -1, 3 ]
[ "iclr_2020_SJeLIgBKPS", "iclr_2020_SJeLIgBKPS", "BylrCxVAFB", "rJegO1tfKB", "SJgkPWhoYH", "iclr_2020_SJeLIgBKPS" ]
iclr_2020_SJxSDxrKDr
Adversarial Training and Provable Defenses: Bridging the Gap
We present COLT, a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model neural network training as a procedure which includes both, the verifier and the adversary. In every iteration, the verifier aims to certify the network using convex relaxation while the adversary tries to find inputs inside that convex relaxation which cause verification to fail. We experimentally show that this training method, named convex layerwise adversarial training (COLT), is promising and achieves the best of both worlds -- it produces a state-of-the-art neural network with certified robustness of 60.5% and accuracy of 78.4% on the challenging CIFAR-10 dataset with a 2/255 L-infinity perturbation. This significantly improves over the best concurrent results of 54.0% certified robustness and 71.5% accuracy.
accept-talk
The authors develop a novel technique for training neural networks that are provably robust to adversarial attacks, by combining provable defenses using convex relaxations with latent adversarial attacks that lie in the gap between the convex relaxation and the true realizable set of activations at a layer of the network. The authors show that the resulting procedure is computationally efficient and able to train neural networks to attain SOTA provable robustness to adversarial attacks. The paper is well written and clearly explains an interesting idea, backed by thorough experiments. The reviewers were in consensus on acceptance, and relatively minor concerns were clearly addressed in the rebuttal phase. Hence, I strongly recommend acceptance.
train
[ "ByxXM0cuKH", "B1g7RvSniH", "HkgbVDojiB", "SJlJOp15sH", "rJli321ciH", "ryesunycsr", "B1gfN2J9sB", "rkeFKsk5or", "S1xH9B9Gor", "B1xSsNgtKS", "BJgNNQyF5r", "BJecVs_6KS", "rJe1s3zruS", "SkxwmsGB_H", "HJlVRSuADr", "HJeKpAyTvH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "public" ]
[ "Summary: the paper introduces a novel protocol for training neural networks that aims at leveraging the empirical benefits of adversarial training while allowing to certify the robustness of the network using the convex relation approach introduced by Wong & Kolter. The key ingredient is a novel algorithm for laye...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, -1, -1, -1, -1, -1 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2020_SJxSDxrKDr", "rJli321ciH", "B1gfN2J9sB", "S1xH9B9Gor", "ByxXM0cuKH", "B1xSsNgtKS", "BJgNNQyF5r", "iclr_2020_SJxSDxrKDr", "iclr_2020_SJxSDxrKDr", "iclr_2020_SJxSDxrKDr", "iclr_2020_SJxSDxrKDr", "HJlVRSuADr", "HJeKpAyTvH", "HJlVRSuADr", "HJeKpAyTvH", "iclr_2020_SJxSDxrKDr" ]
iclr_2020_SJxstlHFPH
Differentiable Reasoning over a Virtual Knowledge Base
We consider the task of answering complex multi-hop questions using a corpus as a virtual knowledge base (KB). In particular, we describe a neural module, DrKIT, that traverses textual data like a KB, softly following paths of relations between mentions of entities in the corpus. At each step the module uses a combination of sparse-matrix TFIDF indices and a maximum inner product search (MIPS) on a special index of contextual representations of the mentions. This module is differentiable, so the full system can be trained end-to-end using gradient based methods, starting from natural language inputs. We also describe a pretraining scheme for the contextual representation encoder by generating hard negative examples using existing knowledge bases. We show that DrKIT improves accuracy by 9 points on 3-hop questions in the MetaQA dataset, cutting the gap between text-based and KB-based state-of-the-art by 70%. On HotpotQA, DrKIT leads to a 10% improvement over a BERT-based re-ranking approach to retrieving the relevant passages required to answer a question. DrKIT is also very efficient, processing up to 10-100x more queries per second than existing multi-hop systems.
accept-talk
This paper proposes a novel architecture for question-answering, which is trained in an end-to-end fashion. The reviewers were unanimous in their vote to accept. Authors are encouraged to revise addressing reviewer comments.
train
[ "rJe4bPFssB", "BJl6SdgciH", "H1lvMjxAFB", "rkxu5o6KiS", "HylgRyRYiB", "rJl8_pTtjr", "B1xx0h6toS", "SJx1TwS5YH", "BygDHP9oKB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I thank the authors for their responses. I'm satisfied with the rebuttal, and will stand by my positive rating.", "Thank you for the extra analysis, new experiments and clarifications! I decided to increase the (already positive) score.", "The paper studies scaling multi-hop QA to large document collections, ...
[ -1, -1, 8, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, 3, 4 ]
[ "rJl8_pTtjr", "rkxu5o6KiS", "iclr_2020_SJxstlHFPH", "H1lvMjxAFB", "B1xx0h6toS", "SJx1TwS5YH", "BygDHP9oKB", "iclr_2020_SJxstlHFPH", "iclr_2020_SJxstlHFPH" ]
iclr_2020_BkluqlSFDS
Federated Learning with Matched Averaging
Federated learning allows edge devices to collaboratively learn a shared model while keeping the training data on device, decoupling the ability to do model training from the need to store the data in the cloud. We propose Federated matched averaging (FedMA) algorithm designed for federated learning of modern neural network architectures e.g. convolutional neural networks (CNNs) and LSTMs. FedMA constructs the shared global model in a layer-wise manner by matching and averaging hidden elements (i.e. channels for convolution layers; hidden states for LSTM; neurons for fully connected layers) with similar feature extraction signatures. Our experiments indicate that FedMA not only outperforms popular state-of-the-art federated learning algorithms on deep CNN and LSTM architectures trained on real world datasets, but also reduces the overall communication burden.
accept-talk
The authors presented a Federated Learning algorithm which constructs the global model layer-wise by matching and averaging hidden representations. They empirically demonstrate that their method outperforms existing federated learning algorithms. This paper has received largely positive reviews. Unfortunately one reviewer wrote a very short review but was generally appreciative of the work. Fortunately, R1 wrote a detailed review with very specific questions and suggestions. The authors have addressed most of the concerns of the reviewers and I have no hesitation in recommending that this paper should be accepted. I request the authors to incorporate all suggestions made by the reviewers.
test
[ "rkgJ17ZpFH", "HJlE_doTKH", "HJgiqtU9sr", "rke7FaeciB", "BJxZaDnwsH", "B1xIeO3viS", "Syg7FvhwiH", "SkexqR4GoB", "SJlZzmfeiB", "BygfWezloB" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "public", "public", "official_reviewer" ]
[ "Post Rebuttal Summary\n---------------------------------\nI have nudged my score up to an \"Accept\", based on my comments to the rebuttal below. I hope the authors continue to improve the readability of Sec. 2.1\n\nReview Summary\n--------------\nOverall I think this is almost above the bar to be accepted, and I ...
[ 8, 8, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2020_BkluqlSFDS", "iclr_2020_BkluqlSFDS", "rke7FaeciB", "B1xIeO3viS", "HJlE_doTKH", "rkgJ17ZpFH", "iclr_2020_BkluqlSFDS", "BygfWezloB", "BygfWezloB", "iclr_2020_BkluqlSFDS" ]
iclr_2020_ryeRwlSYPH
Learning transitional skills with intrinsic motivation
By maximizing an information theoretic objective, a few recent methods empower the agent to explore the environment and learn useful skills without supervision. However, when considering to use multiple consecutive skills to complete a specific task, the transition from one to another cannot guarantee the success of the process due to the evident gap between skills. In this paper, we propose to learn transitional skills (LTS) in addition to creating diverse primitive skills without a reward function. By introducing an extra latent variable for transitional skills, our LTS method discovers both primitive and transitional skills by minimizing the difference of mutual information and the similarity of skills. By considering various simulated robotic tasks, our results demonstrate the effectiveness of LTS on learning both diverse primitive skills and transitional skills, and show its superiority in smooth transition of skills over the state-of-the-art baseline DIAYN.
reject
The submission has two issues, identified by the reviewers; (1) the description of the proposed method was found to be confusing at times and could be improved, and (2) the proposed transitional skills were not well motivated/justified as a solution to the problem the authors propose to solve.
train
[ "B1ghdRtJcr", "HkefttE2iH", "SkgFrtNhoS", "rye7zKEnoH", "SJeB8gyy5r", "Hyx7VMW19B" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "What is the specific question/problem tackled by the paper?\nThis paper addresses the problem of learning and using useful skills through unsupervised RL. While past work (VIC, DIAYN) found skills by maximizing the mutual information between skills and states, this paper (LTS) aims to find transitional skills to m...
[ 3, -1, -1, -1, 3, 6 ]
[ 5, -1, -1, -1, 4, 1 ]
[ "iclr_2020_ryeRwlSYPH", "SJeB8gyy5r", "Hyx7VMW19B", "B1ghdRtJcr", "iclr_2020_ryeRwlSYPH", "iclr_2020_ryeRwlSYPH" ]
iclr_2020_BklSv34KvB
Carpe Diem, Seize the Samples Uncertain "at the Moment" for Adaptive Batch Selection
The performance of deep neural networks is significantly affected by how well mini-batches are constructed. In this paper, we propose a novel adaptive batch selection algorithm called Recency Bias that exploits the uncertain samples predicted inconsistently in recent iterations. The historical label predictions of each sample are used to evaluate its predictive uncertainty within a sliding window. By taking advantage of this design, Recency Bias not only accelerates the training step but also achieves a more accurate network. We demonstrate the superiority of Recency Bias by extensive evaluation on two independent tasks. Compared with existing batch selection methods, the results showed that Recency Bias reduced the test error by up to 20.5% in a fixed wall-clock training time. At the same time, it improved the training time by up to 59.3% to reach the same test error.
reject
The authors propose a new mini-batch selection method for training deep NNs. Rather than random sampling, selection is based on a sliding window of past model predictions for each sample and uncertainty about those samples. Results are presented on MNIST and CIFAR. The reviewers agreed that this is an interesting idea which was clearly presented, but had concerns about the strength of the experimental results, which showed only a modest benefit on relatively simple datasets. In the rebuttal period, the authors added an ablation study and additional results on Tiny-ImageNet. However, the results on the new dataset seem very marginal, and R1 did not feel that all of their concerns were addressed. I’m inclined to agree that more work is required to prove the generalizability of this approach before it’s suitable for acceptance.
train
[ "B1xaxyLZ9r", "rJlJnub2sS", "H1l2pDbhjH", "BygSpUbnjH", "r1lfkyNrtS", "BJeM3ENTYB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes Recency Bias, an adaptive mini batch selection method for training deep neural networks. To select informative minibatches for training, the proposed method maintains a fixed size sliding window of past model predictions for each data sample. At a given iteration, samples which have highly inco...
[ 3, -1, -1, -1, 6, 3 ]
[ 5, -1, -1, -1, 1, 3 ]
[ "iclr_2020_BklSv34KvB", "r1lfkyNrtS", "BJeM3ENTYB", "B1xaxyLZ9r", "iclr_2020_BklSv34KvB", "iclr_2020_BklSv34KvB" ]
iclr_2020_BklSwn4tDH
Prestopping: How Does Early Stopping Help Generalization Against Label Noise?
Noisy labels are very common in real-world training data, which lead to poor generalization on test data because of overfitting to the noisy labels. In this paper, we claim that such overfitting can be avoided by "early stopping" training a deep neural network before the noisy labels are severely memorized. Then, we resume training the early stopped network using a "maximal safe set," which maintains a collection of almost certainly true-labeled samples at each epoch since the early stop point. Putting them all together, our novel two-phase training method, called Prestopping, realizes noise-free training under any type of label noise for practical use. Extensive experiments using four image benchmark data sets verify that our method significantly outperforms four state-of-the-art methods in test error by 0.4–8.2 percent points under existence of real-world noise.
reject
This paper focuses on avoiding overfitting in the presence of noisy labels. The authors develop a two-phase method called Prestopping based on a combination of early stopping and a maximal safe set. The reviewers raised some concern about illustrating the maximal safe set for all data sets and suggest comparisons with more baselines. The reviewers also indicated that the paper is missing key relevant publications. In the response the authors have done a rather thorough job of addressing the reviewers' comments. I thank them for this. However, given the limited time, some of the reviewers' comments regarding adding new baselines could not be addressed. As a result I cannot recommend acceptance, because I think this is key to making a proper assessment. That said, I think this is an interesting method with good potential if it can outperform other baselines, and would recommend that the authors revise and resubmit in a future venue.
val
[ "Bkx69x5qoS", "BklzkXwLiH", "S1g68fP8ir", "HJl6Ilc5sS", "HyxY4gccjr", "rJxwlfDLjS", "Hyx_Ze55iH", "B1gIN-DLir", "rkxHUxPIor", "r1xfP7FKFS", "r1gL1Hq6Yr", "Syem3k3LqH", "Byx-7fCU5S", "HklU67zxKr", "SkeYl6CktH", "r1g23-JswH" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public", "author" ]
[ "We were very happy to have the opportunity to reflect your insightful comments. During the remaining rebuttal period, we are willing to reflect your additional comments if you have. Thank you again for your valuable comments.", "Thank you for raising your insightful and detailed comments. We have revised our pap...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6, 3, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 3, -1, -1, -1 ]
[ "r1xfP7FKFS", "r1xfP7FKFS", "r1gL1Hq6Yr", "r1gL1Hq6Yr", "Syem3k3LqH", "Syem3k3LqH", "Byx-7fCU5S", "Byx-7fCU5S", "iclr_2020_BklSwn4tDH", "iclr_2020_BklSwn4tDH", "iclr_2020_BklSwn4tDH", "iclr_2020_BklSwn4tDH", "iclr_2020_BklSwn4tDH", "SkeYl6CktH", "iclr_2020_BklSwn4tDH", "iclr_2020_BklSw...
iclr_2020_BJlLvnEtDB
Analysis and Interpretation of Deep CNN Representations as Perceptual Quality Features
Pre-trained Deep Convolutional Neural Network (CNN) features have popularly been used as full-reference perceptual quality features for CNN based image quality assessment, super-resolution, image restoration and a variety of image-to-image translation problems. In this paper, to get more insight, we link basic human visual perception to characteristics of learned deep CNN representations as a novel and first attempt to interpret them. We characterize the frequency and orientation tuning of channels in trained object detection deep CNNs (e.g., VGG-16) by applying grating stimuli of different spatial frequencies and orientations as input. We observe that the behavior of CNN channels as spatial frequency and orientation selective filters can be used to link basic human visual perception models to their characteristics. Doing so, we develop a theory to get more insight into deep CNN representations as perceptual quality features. We conclude that sensitivity to spatial frequencies that have lower contrast masking thresholds in human visual perception and a definite and strong orientation selectivity are important attributes of deep CNN channels that deliver better perceptual quality features.
reject
This paper aims to analyze CNN representations in terms of how well they measure the perceptual severity of image distortions. In particular, (a) sensitivity to changes in spatial frequency and (b) orientation selectivity were used. Although the reviewers agree that this paper presents some interesting initial findings with a promising direction, the majority of the reviewers (three out of four) find that the paper is incomplete, raising concerns in terms of experimental settings and results. Multiple reviewers explicitly asked for additional experiments to confirm whether the presented empirical results can be used to improve the results of an image generation task. Responding to the reviews, the authors added a super-resolution experiment in the appendix, which the reviewers believe is the right direction but is still preliminary. Overall, we believe the paper reports interesting findings, but it will require a series of additional work to make it ready for publication.
test
[ "rJltQvNAFH", "Byg1BMTxsS", "ryeWrVMojB", "HkesApRejB", "HJlaq0oejr", "r1gDmXJMiH", "B1la2wI-jr", "HJxkTxRljr", "rkgFHuWGtH", "HylO0Xs75H", "HkxhCSGC9B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I thank the authors for their detailed response. Some of my questions have been addressed in the rebuttal, but as far as I can tell very few modifications have been made to the paper: mainly an additional super-resolution experiment in the appendix, which goes in a good direction, but currently comes across as qui...
[ 3, -1, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2020_BJlLvnEtDB", "HylO0Xs75H", "HkesApRejB", "rJltQvNAFH", "HkxhCSGC9B", "B1la2wI-jr", "HJxkTxRljr", "rkgFHuWGtH", "iclr_2020_BJlLvnEtDB", "iclr_2020_BJlLvnEtDB", "iclr_2020_BJlLvnEtDB" ]
iclr_2020_SJlDDnVKwS
Improving Evolutionary Strategies with Generative Neural Networks
Evolutionary Strategies (ES) are a popular family of black-box zeroth-order optimization algorithms which rely on search distributions to efficiently optimize a large variety of objective functions. This paper investigates the potential benefits of using highly flexible search distributions in ES algorithms, in contrast to standard ones (typically Gaussians). We model such distributions with Generative Neural Networks (GNNs) and introduce a new ES algorithm that leverages their expressiveness to accelerate the stochastic search. Because it acts as a plug-in, our approach allows to augment virtually any standard ES algorithm with flexible search distributions. We demonstrate the empirical advantages of this method on a diversity of objective functions.
reject
Evolutionary strategies are a popular class of method for black-box gradient-free optimization and involve iteratively fitting a distribution from which to sample promising input candidates to evaluate. CMA-ES involves fitting a Gaussian distribution and has achieved state-of-the-art performance on a variety of black-box optimization benchmarks when the underlying function is cheap to evaluate. In this work the authors replace this distribution instead with a much more flexible deep generative model (i.e. NICE). They demonstrate empirically that this method is effective on a number of synthetic global optimization benchmarks (e.g. Rosenbrock) and three direct policy search reinforcement learning problems. The reviewers all believe the paper is above borderline for acceptance. However, two of the reviewers said they were on the low end of their respective scores (i.e. one wanted to give a 5 instead of a 6 and another a 7 instead of 8.) A major issue among the reviewers was the experiments, which they noted were simple and not very convincing (with one reviewer disagreeing). The synthetic global optimization problems do seem somewhat simple. In the RL problems, it's not obvious that the proposed method is statistically significantly better, i.e. the error bars are overlapping considerably. Thus the recommendation is to reject. Hopefully stronger experiments and incorporating the reviewer comments in the manuscript will make this a stronger paper for a future conference.
val
[ "SJlkqN8Vsr", "HklWrEIEsr", "BkxsfNLNjr", "rkgmrg-iFr", "SJg32RMstH", "BkljioPaFB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for his supportive feedback. \n\nAbout the experiments: The experiments were indeed chosen to highlight situations were flexibility improves the ES procedure. Even if the synthethic examples look easy, they capture prototypical difficulties in (zeroth-order) optimization, such as high conditi...
[ -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, 4, 5, 5 ]
[ "rkgmrg-iFr", "SJg32RMstH", "BkljioPaFB", "iclr_2020_SJlDDnVKwS", "iclr_2020_SJlDDnVKwS", "iclr_2020_SJlDDnVKwS" ]
iclr_2020_rJxvD3VKvr
Wide Neural Networks are Interpolating Kernel Methods: Impact of Initialization on Generalization
The recently developed link between strongly overparametrized neural networks (NNs) and kernel methods has opened a new way to understand puzzling features of NNs, such as their convergence and generalization behaviors. In this paper, we make the bias of initialization on strongly overparametrized NNs under gradient descent explicit. We prove that fully-connected wide ReLU-NNs trained with squared loss are essentially a sum of two parts: The first is the minimum complexity solution of an interpolating kernel method, while the second contributes to the test error only and depends heavily on the initialization. This decomposition has two consequences: (a) the second part becomes negligible in the regime of small initialization variance, which allows us to transfer generalization bounds from minimum complexity interpolating kernel methods to NNs; (b) in the opposite regime, the test error of wide NNs increases significantly with the initialization variance, while still interpolating the training data perfectly. Our work shows that -- contrary to common belief -- the initialization scheme has a strong effect on generalization performance, providing a novel criterion to identify good initialization strategies.
reject
This paper proves that fully-connected wide ReLU-NNs trained with squared loss can be decomposed into two parts: (1) the minimum complexity solution of an interpolating kernel method, and (2) a term that depends heavily on the initialization. The main concerns of the reviewers include (1) the contributions are not significant given prior work; (2) a flawed proof; and (3) a lack of comparison with prior work. Even though the authors addressed some of the concerns in the revision, the paper still did not gather sufficient support from the reviewers after the author response. Thus I recommend rejection.
train
[ "HJerMupf5r", "SkxT3DtpKH", "B1gwJs43sr", "Hkgs_54hjS", "rJeLKtEhjB", "Hkgp9uN3iS", "H1g_B4n6FS", "BJxSSJytqS", "HyxglM3VcB", "SyxxVJ5AYr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper studies the solution of neural network training in the NTK regime. The trained network can be written as the sum of two terms --- the first is the minimum RKHS norm interpolating solution, and the second term depends on the initialization. When the initialization scale is small, the second term almost v...
[ 1, 1, -1, -1, -1, -1, 6, 3, -1, -1 ]
[ 4, 4, -1, -1, -1, -1, 3, 4, -1, -1 ]
[ "iclr_2020_rJxvD3VKvr", "iclr_2020_rJxvD3VKvr", "BJxSSJytqS", "H1g_B4n6FS", "HJerMupf5r", "SkxT3DtpKH", "iclr_2020_rJxvD3VKvr", "iclr_2020_rJxvD3VKvr", "SyxxVJ5AYr", "iclr_2020_rJxvD3VKvr" ]
iclr_2020_HkeuD34KPH
SSE-PT: Sequential Recommendation Via Personalized Transformer
Temporal information is crucial for recommendation problems because user preferences are naturally dynamic in the real world. Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to widely used RNN and CNN in natural language processing, have allowed for better use of the temporal ordering of items that each user has engaged with. In particular, the SASRec model, inspired by the popular Transformer model in natural languages processing, has achieved state-of-the-art results. However, SASRec, just like the original Transformer model, is inherently an un-personalized model and does not include personalized user embeddings. To overcome this limitation, we propose a Personalized Transformer (SSE-PT) model, outperforming SASRec by almost 5% in terms of NDCG@10 on 5 real-world datasets. Furthermore, after examining some random users' engagement history, we find our model not only more interpretable but also able to focus on recent engagement patterns for each user. Moreover, our SSE-PT model with a slight modification, which we call SSE-PT++, can handle extremely long sequences and outperform SASRec in ranking results with comparable training speed, striking a balance between performance and speed requirements. Our novel application of the Stochastic Shared Embeddings (SSE) regularization is essential to the success of personalization. Code and data are open-sourced at https://github.com/SSE-PT/SSE-PT.
reject
The paper proposes to improve sequential recommendation by extending SASRec (from prior work), adding a user embedding with SSE regularization. The authors show that the proposed method outperforms several baselines on five datasets. The paper received two weak accepts and one reject. Reviewers expressed concerns about the limited/scattered technical contribution. Reviewers were also concerned about the quality of the experiment results and the need to compare against more baselines. After examining some related work, the AC agrees with the reviewers that there is also much recent relevant work, such as BERT4Rec, that should be cited and discussed. It would make the paper stronger if the authors can demonstrate that adding the user embedding to another method such as BERT4Rec can improve the performance of that model. Regarding R3's concerns about the comparison against HGN, the authors indicate there are differences in the length of sequences considered and that some methods may work better for shorter sequences while their method works better for longer sequences. These details seem important to include in the paper. In the AC's opinion, the paper quality is borderline and the work is of limited interest to the ICLR community. Such work would be more appreciated in the recommender systems community. The authors are encouraged to improve the paper with an improved discussion of more recent work such as BERT4Rec, add comparisons against this more recent work, incorporate various suggestions from the reviewers, and resubmit to an appropriate venue.
train
[ "SJenAyg6tH", "S1ej5Lf7jB", "rylW9BGXoS", "HJeSNEfXoH", "B1xkMwxJYB", "Hyg-LKcJcS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The manuscript proposes SSE-PT, a sequential recommendation model based on transformer and stochastic shared embedding (SST). Experiments on several datasets show that SSE-PT outperforms a number of baseline methods. Some analytical results are also provided. Overall, I think this work is not suitable for ICLR due...
[ 1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, 4, 4 ]
[ "iclr_2020_HkeuD34KPH", "B1xkMwxJYB", "Hyg-LKcJcS", "SJenAyg6tH", "iclr_2020_HkeuD34KPH", "iclr_2020_HkeuD34KPH" ]
iclr_2020_HyeuP2EtDB
Scoring-Aggregating-Planning: Learning task-agnostic priors from interactions and sparse rewards for zero-shot generalization
Humans can learn task-agnostic priors from interactive experience and utilize the priors for novel tasks without any finetuning. In this paper, we propose Scoring-Aggregating-Planning (SAP), a framework that can learn task-agnostic semantics and dynamics priors from arbitrary-quality interactions as well as the corresponding sparse rewards and then plan on unseen tasks in a zero-shot condition. The framework finds a neural score function for local regional state and action pairs that can be aggregated to approximate the quality of a full trajectory; moreover, a dynamics model that is learned with self-supervision can be incorporated for planning. Many previous works that leverage interactive data for policy learning either need massive on-policy environmental interactions or assume access to expert data, while we can achieve a similar goal with pure off-policy imperfect data. Instantiating our framework results in a policy generalizable to unseen tasks. Experiments demonstrate that the proposed method can outperform baseline methods on a wide range of applications including gridworld, robotics tasks and video games.
reject
The paper proposes an algorithm for zero-shot generalization in RL via learning a scoring function. The reviewers had mixed feelings, and many were not from the area. A shared theme was doubts about the significance of the experimental setting, and also the generality of the approach. As this is my field, I read the paper, and recommend rejection at this time. The proposed method is quite laborious and requires quite a few assumptions on the environments to work, as well as fine-tuning parameters for each considered task (number of regions, etc.). I also agree that the evaluation is not convincing -- stronger baselines need to be considered and the experiments need to better address the zero-shot transfer aspect that the paper is motivated by. I encourage the authors to take the review feedback into account and submit a future version to another venue.
test
[ "r1lfJzG3qS", "rJlU4taNsr", "S1lkGHaNjB", "HyxxmupVsB", "SJxcRM64jS", "HklUOIpEjH", "SkxYYEW4qr", "HyghHQBr9S" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a framework (Scoring-Aggregating-Planning (SAP)) for learning task-agnostic priors that allow generalization to new tasks without finetuning. The motivation for this is very clear - humans can perform much better than machines in zero-shot conditions because humans have learned priors about obje...
[ 3, -1, -1, -1, -1, -1, 6, 6 ]
[ 1, -1, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2020_HyeuP2EtDB", "HyghHQBr9S", "r1lfJzG3qS", "HyghHQBr9S", "r1lfJzG3qS", "SkxYYEW4qr", "iclr_2020_HyeuP2EtDB", "iclr_2020_HyeuP2EtDB" ]
iclr_2020_rJgFDnEYPr
Count-guided Weakly Supervised Localization Based on Density Map
Weakly supervised localization (WSL) aims at training a model to find the positions of objects by providing it with only abstract labels. For most of the existing WSL methods, the labels are the class of the main object in an image. In this paper, we generalize WSL to counting machines that apply convolutional neural networks (CNN) and density maps for counting. We show that given only ground-truth count numbers, the density map as a hidden layer can be trained for localizing objects and detecting features. Convolution and pooling are the two major building blocks of CNNs. This paper discusses their impacts on an end-to-end WSL network. The learned features in a density map appear in the form of dots. In order to make these features interpretable for human beings, this paper proposes a Gini impurity penalty to regularize the density map. Furthermore, it will be shown that this regularization is similar to the variational term of the β-variational autoencoder. The details of this algorithm are demonstrated through a simple bubble counting task. Finally, the proposed methods are applied to the widely used Mall crowd counting dataset to learn discriminative features of human figures.
reject
This work proposes a new regularization method for weakly supervised localization based on counting. Reviewers agree that this is an interesting topic but the experimental validation is weak (qualitative, lack of baselines), and the contribution too incremental. Therefore, we recommend rejection.
train
[ "HJxfMpScdB", "BygI0NATYB", "rylrszTH9H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This article proposes a method for object counting which can be trained with weak supervision. Object counting methods are often trained with point annotations, i.e., one click-point per object. In this article, a weaker way of annotation is used: count-based annotation, i.e., the number of objects of each class p...
[ 1, 3, 3 ]
[ 5, 3, 3 ]
[ "iclr_2020_rJgFDnEYPr", "iclr_2020_rJgFDnEYPr", "iclr_2020_rJgFDnEYPr" ]
iclr_2020_BylKwnEYvS
Star-Convexity in Non-Negative Matrix Factorization
Non-negative matrix factorization (NMF) is a highly celebrated algorithm for matrix decomposition that guarantees strictly non-negative factors. The underlying optimization problem is computationally intractable, yet in practice gradient descent based solvers often find good solutions. This gap between computational hardness and practical success mirrors recent observations in deep learning, where it has been the focus of extensive discussion and analysis. In this paper we revisit the NMF optimization problem and analyze its loss landscape in non-worst-case settings. It has recently been observed that gradients in deep networks tend to point towards the final minimizer throughout the optimization. We show that a similar property holds (with high probability) for NMF, provably in a non-worst-case model with a planted solution, and empirically across an extensive suite of real-world NMF problems. Our analysis predicts that this property becomes more likely with a growing number of parameters, and experiments suggest that a similar trend might also hold for deep neural networks --- turning increasing data sets and models into a blessing from an optimization perspective.
reject
The paper derives results for non-negative matrix factorization along the lines of recent results on SGD for DNNs, showing that the loss is star-convex towards randomized planted solutions. Overall, the paper is relatively well written and fairly clear. The reviewers agree that the theoretical contribution of the paper could be improved (tighten bounds) and that the experiments can be improved as well. In the context of other papers submitted to ICLR, I therefore recommend to reject the paper.
train
[ "Hkg65Au3or", "HyejYpYOsB", "ByearTFOiS", "rkxZEaKOjB", "rJeof7cDsH", "SkgA3f9wiB", "HkxqCZqPjH", "S1gZu92OtS", "rkg28BBhtB", "rkeCQ6JaKB" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "* we've clarified constant 2 from fact 10.", "* we have clarified how r grows.\n* we have made a clarification that one can get slightly stronger results at the cost of a more complicated expression.\n* we have changed “conjecture” to “hypothesise” and have clarified it.\n* “lemma 9” was supposed to be “fact 9”....
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "HkxqCZqPjH", "rJeof7cDsH", "SkgA3f9wiB", "HkxqCZqPjH", "S1gZu92OtS", "rkg28BBhtB", "rkeCQ6JaKB", "iclr_2020_BylKwnEYvS", "iclr_2020_BylKwnEYvS", "iclr_2020_BylKwnEYvS" ]
iclr_2020_rygtPhVtDS
Noise Regularization for Conditional Density Estimation
Modelling statistical relationships beyond the conditional mean is crucial in many settings. Conditional density estimation (CDE) aims to learn the full conditional probability density from data. Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained with the maximum likelihood objective. Due to the inherent structure of such models, classical regularization approaches in the parameter space are rendered ineffective. To address this issue, we develop a model-agnostic noise regularization method for CDE that adds random perturbations to the data during training. We demonstrate that the proposed approach corresponds to a smoothness regularization and prove its asymptotic consistency. In our experiments, noise regularization significantly and consistently outperforms other regularization methods across seven data sets and three CDE models. The effectiveness of noise regularization makes neural network based CDE the preferable method over previous non- and semi-parametric approaches, even when training data is scarce.
reject
This paper has a mixture of weak reviews, the majority of which lean towards reject. All reviews mention a lack of novelty, and 2 of 3 a lack of support in experiments. While the authors argue, perhaps legitimately, for the novelty of the paper with respect to current literature, this is not convincing in the exposition. I recommend that the authors improve the justification for the novelty of their methodology, and strengthen the experiments to convince reviewers. As it stands, this paper is not quite ready for publication.
train
[ "rkxKjxxvoB", "HkxtfexvoB", "SylRCylPjS", "rJevBN3g9H", "BkgzQj3b9S", "Syg9trEf5r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the detailed elaborations and helpful feedback w.r.t. the technical part of the paper. In the following, we attempt to address the reviewer's concerns:\n\nC: The authors claim that this idea is novel in the space of parametric density estimation [...]. It would surprise that this very nat...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 1, 3, 4 ]
[ "rJevBN3g9H", "BkgzQj3b9S", "Syg9trEf5r", "iclr_2020_rygtPhVtDS", "iclr_2020_rygtPhVtDS", "iclr_2020_rygtPhVtDS" ]
iclr_2020_HJxcP2EFDS
Amharic Negation Handling
User-generated content contains opinionated texts not only in dominant languages (like English) but also in less dominant languages (like Amharic). However, negation handling techniques that support sentiment detection have not been developed for such less dominant languages (i.e., Amharic). Negation handling is one of the challenging tasks for sentiment classification. Thus, this work builds negation handling schemes that enhance Amharic sentiment classification. The proposed negation handling framework combines a lexicon-based approach and a character n-gram based machine learning model. The performance of the framework is evaluated using annotated Amharic news comments. The system outperforms the best of all models and the baselines, achieving an accuracy of 98.0%. The result is compared with the baselines (without negation handling and a word-level n-gram model).
reject
Main content: This paper presents negation handling approaches for Amharic sentiment classification. -- Discussion: All reviewers agree the paper is poorly written, uses outdated approaches, and requires better organization and formatting. -- Recommendation and justification: After more work, this paper might be better submitted to an NLP workshop on low-resource languages, rather than to ICLR, which is more focused on new machine learning methods.
train
[ "SJlfBvGgjB", "B1lddR7l5S", "SyetO1uNqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "While this paper tackles an interesting problem. The technical approach is unfortunately too outdated and obvious and not quite the level of ICLR. \nThe dataset is likely too easy given the high accuracy. \n ...
[ 1, 1, 1 ]
[ 5, 5, 3 ]
[ "iclr_2020_HJxcP2EFDS", "iclr_2020_HJxcP2EFDS", "iclr_2020_HJxcP2EFDS" ]
iclr_2020_BJgcwh4FwS
Neural Maximum Common Subgraph Detection with Guided Subgraph Extraction
Maximum Common Subgraph (MCS) is defined as the largest subgraph that is commonly present in both graphs of a graph pair. Exact MCS detection is NP-hard, and its state-of-the-art exact solver based on heuristic search is slow in practice without any time complexity guarantee. Given the huge importance of this task yet the lack of a fast solver, we propose an efficient MCS detection algorithm, NeuralMCS, consisting of a novel neural network model that learns the node-node correspondence from the ground-truth MCS result, and a subgraph extraction procedure that uses the neural network output as guidance for final MCS prediction. The whole model guarantees polynomial time complexity with respect to the number of nodes of the larger of the two input graphs. Experiments on four real graph datasets show that the proposed model is 48.1x faster than the exact solver and more accurate than all the existing competitive approximate approaches to MCS detection.
reject
This paper proposed a graph neural network based approach for subgraph detection. The reviewers find that, overall, the paper is interesting; however, further improvements are needed to meet the ICLR standard: 1. Experiments on larger graphs are needed; slight speedups on small graphs are less exciting. 2. There seems to be a mismatch between training and inference. 3. The stopping criterion is quite heuristic.
train
[ "BkekDiAwiH", "S1lnT-8cjB", "BJl9nSpYiH", "r1e2Br6FoS", "BkeW__2EiB", "rkgnetGSjH", "ByeiVG3PoB", "ryl-WM3vjB", "H1lPLb3Por", "B1xWJnUGiH", "H1xE5kL6Yr", "Syl40jCRcH" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1b) Regarding the performance of our model, we have implemented an easy-to-implement strategy to further boost our performance. During GSE (Algorithm 1 and Section 3.3), instead of selecting just the node with the highest matching score (argmax of the masked matching matrix; Equation 7), we check all the unmatched...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5 ]
[ "ByeiVG3PoB", "iclr_2020_BJgcwh4FwS", "r1e2Br6FoS", "BkekDiAwiH", "Syl40jCRcH", "H1xE5kL6Yr", "ryl-WM3vjB", "H1lPLb3Por", "B1xWJnUGiH", "iclr_2020_BJgcwh4FwS", "iclr_2020_BJgcwh4FwS", "iclr_2020_BJgcwh4FwS" ]
iclr_2020_HyloPnEKPr
Context-aware Attention Model for Coreference Resolution
Coreference resolution is an important task for gaining a more complete understanding of texts by artificial intelligence. The state-of-the-art end-to-end neural coreference model considers all spans in a document as potential mentions and learns to link an antecedent with each possible mention. However, for verbatim-identical mentions, the model tends to get similar or even identical representations based on the features, and this leads to incorrect predictions. In this paper, we propose to improve the end-to-end system by building an attention model to reweight features around different contexts. The proposed model substantially outperforms the state-of-the-art on the English dataset of the CoNLL 2012 Shared Task with a 73.45% F1 score on development data and a 72.84% F1 score on test data.
reject
Main content: Blind review #2 summarizes it well: This paper extends the neural coreference resolution model in Lee et al. (2018) by 1) introducing an additional mention-level feature (grammatical numbers), and 2) letting the mention/pair scoring functions attend over multiple mention-level features. The proposed model achieves marginal improvement (0.2 avg. F1 points) over Lee et al., 2018, on the CoNLL 2012 English test set. -- Discussion: All reviewers rejected. -- Recommendation and justification: The paper must be rejected due to its violation of blind submission (the authors reveal themselves in the Acknowledgments). For information, blind review #2 also summarized well the following justifications for rejection: I recommend rejection for this paper due to the following reasons: - The technical contribution is very incremental (introducing one more features, and adding an attention layer over the feature vectors). - The experiment results aren't strong enough. And the experiments are done on only one dataset. - I am not convinced that adding the grammatical numbers features and the attention mechanism makes the model more context-aware.
train
[ "rkgHD5LntS", "BJgdQY86KH", "Skgu_zsaKH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to use an extra feature (grammatical number) for context-aware coreference resolution and an attention-based weighting mechanism. The approach proposed is built on top of a recent well performing model by Lee et al. The improvement is rather minor in my view: 72.64 to 72.84 in the test set. \n\...
[ 1, 1, 1 ]
[ 3, 5, 4 ]
[ "iclr_2020_HyloPnEKPr", "iclr_2020_HyloPnEKPr", "iclr_2020_HyloPnEKPr" ]
iclr_2020_HkenPn4KPH
When Does Self-supervision Improve Few-shot Learning?
We present a technique to improve the generalization of deep representations learned on small labeled datasets by introducing self-supervised tasks as auxiliary loss functions. Although recent research has shown benefits of self-supervised learning (SSL) on large unlabeled datasets, its utility on small datasets is unknown. We find that SSL reduces the relative error rate of few-shot meta-learners by 4%-27%, even when the datasets are small and only utilizing images within the datasets. The improvements are greater when the training set is smaller or the task is more challenging. Though the benefits of SSL may increase with larger training sets, we observe that SSL can have a negative impact on performance when there is a domain shift between the distributions of images used for meta-learning and SSL. Based on this analysis we present a technique that automatically selects images for SSL from a large, generic pool of unlabeled images for a given dataset using a domain classifier, which provides further improvements. We present results using several meta-learners and self-supervised tasks across datasets with varying degrees of domain shifts and label sizes to characterize the effectiveness of SSL for few-shot learning.
reject
Reviewers agree that this paper contains interesting results and simple but good ideas. However, a few severe concerns were raised by reviewers. The most prominent one was the experimental setup: the authors use a pretrained ResNet101 (which has seen many classes of ImageNet) for testing, which makes it unclear how well their proposed method would work for an unlabeled pool of data that the classifier has never seen. While the authors claim that the dataset used for testing was disjoint from ImageNet, a reviewer pointed out that both the dogs and birds datasets state that they overlap with ImageNet. A few other concerns were raised (e.g., the need for a more meaningful metric in Figure 4d, which wasn't addressed in the rebuttal). We look forward to seeing an improved version of the paper in your future submissions.
train
[ "B1gCrCi3oS", "HJls9GKMjS", "HJecjbFfsB", "rkeIZWYzjH", "H1gxFLNtYS", "rygzcR7qYB", "S1eEEC9lqS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your comments and updates.", "\nWe thank R2 for noting the strengths of the paper. We wish to clarify some of the issues related to the experiments.\n\n** Concern 1-1 the definition of domain distance is not quite meaningful **\nThe trend lines in 4d are indeed not meaningful. We will update this figu...
[ -1, -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, -1, 4, 3, 5 ]
[ "rkeIZWYzjH", "H1gxFLNtYS", "rygzcR7qYB", "S1eEEC9lqS", "iclr_2020_HkenPn4KPH", "iclr_2020_HkenPn4KPH", "iclr_2020_HkenPn4KPH" ]
iclr_2020_HkehD3VtvS
Deep Reasoning Networks: Thinking Fast and Slow, for Pattern De-mixing
We introduce Deep Reasoning Networks (DRNets), an end-to-end framework that combines deep learning with reasoning for solving pattern de-mixing problems, typically in an unsupervised or weakly-supervised setting. DRNets exploit problem structure and prior knowledge by tightly combining logic and constraint reasoning with stochastic-gradient-based neural network optimization. We illustrate the power of DRNets on de-mixing overlapping hand-written Sudokus (Multi-MNIST-Sudoku) and on a substantially more complex task in scientific discovery that concerns inferring crystal structures of materials from X-ray diffraction data (Crystal-Structure-Phase-Mapping). DRNets significantly outperform the state of the art and experts' capabilities on Crystal-Structure-Phase-Mapping, recovering more precise and physically meaningful crystal structures. On Multi-MNIST-Sudoku, DRNets perfectly recovered the mixed Sudokus' digits, with 100% digit accuracy, outperforming the supervised state-of-the-art MNIST de-mixing models.
reject
The paper received mixed reviews of WR (R1), WR (R2) and WA (R3). The AC has carefully read all the reviews/rebuttal/comments and examined the paper. The AC agrees with R1 and R2's concerns, specifically around overclaiming about reasoning. Also, the AC was unnerved, as were R2 and R3, by the notion of continuing to train on the test set (and found the rebuttal unconvincing on this point). Overall, the AC feels this paper cannot be accepted. The authors should remove the unsupported/overly bold claims in their paper and incorporate the constructive suggestions from the reviewers in a revised version of the paper.
train
[ "rJeunhge9r", "rJeDGGXzir", "SygAvwe-iB", "B1x1WllbsH", "Bkg7_fyWoB", "rkeTAUPAYH", "ByxENYaL9r" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposes a framework for solving de-mixing problems. The hard constraints from human inputs about a specific problem are relaxed into continuous constraints (the \"slow\" reasoning part), and a reconstruction loss measures the fitness of the inferred labels with the observations (the \"fast\" pattern rec...
[ 3, -1, -1, -1, -1, 6, 3 ]
[ 1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2020_HkehD3VtvS", "iclr_2020_HkehD3VtvS", "rkeTAUPAYH", "rJeunhge9r", "ByxENYaL9r", "iclr_2020_HkehD3VtvS", "iclr_2020_HkehD3VtvS" ]
iclr_2020_rkg6PhNKDr
HOW IMPORTANT ARE NETWORK WEIGHTS? TO WHAT EXTENT DO THEY NEED AN UPDATE?
In the context of optimization, the gradient of a neural network indicates the amount a specific weight should change with respect to the loss. Therefore, small gradients indicate a good value of the weight that requires no change and can be kept frozen during training. This paper provides an experimental study on the importance of neural network weights, and to what extent they need to be updated. We wish to show that, starting from the third epoch, freezing weights which have no informative gradient and are less likely to be changed during training results in a very slight drop in the overall accuracy (and sometimes even better accuracy). We experiment on the MNIST, CIFAR10 and Flickr8k datasets using several architectures (VGG19, ResNet-110 and DenseNet-121). On CIFAR10, we show that freezing 80% of the VGG19 network parameters from the third epoch onwards results in a 0.24% drop in accuracy, while freezing 50% of ResNet-110 parameters results in a 0.9% drop in accuracy, and finally freezing 70% of DenseNet-121 parameters results in a 0.57% drop in accuracy. Furthermore, to experiment with real-life applications, we train an image captioning model with an attention mechanism on the Flickr8k dataset using LSTM networks, freezing 60% of the parameters from the third epoch onwards, resulting in a better BLEU-4 score than the fully trained model. Our source code can be found in the appendix.
reject
The authors demonstrate that starting from the 3rd epoch, freezing a large fraction of the weights (based on gradient information), but not entire layers, results in slight drops in performance. Given existing literature, the reviewers did not find this surprising, even though freezing only some of a layer's weights has not been explicitly analyzed before. Although this is an interesting observation, the authors did not explain why this finding is important and it is unclear what the impact of such a finding will be. The authors are encouraged to expand on the implications of their finding and the theoretical basis for it. Furthermore, reviewers raised concerns about the extensiveness of the empirical evaluation. This paper falls below the bar for ICLR, so I recommend rejection.
train
[ "HkxqGH95Yr", "ryeXJWZBiB", "H1xb9heHjB", "BJgInolBsS", "HylH3mj2Kr", "SkgTDt7pYS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nIn this paper, the authors performed an empirical study on the importance of neural network weights and to which extent they need to be updated. Some observations are obtained such as from the third epoch on, a large proportion of weights do not need to be updated and the performance of the network is not signif...
[ 1, -1, -1, -1, 3, 1 ]
[ 5, -1, -1, -1, 5, 4 ]
[ "iclr_2020_rkg6PhNKDr", "HkxqGH95Yr", "HylH3mj2Kr", "SkgTDt7pYS", "iclr_2020_rkg6PhNKDr", "iclr_2020_rkg6PhNKDr" ]
iclr_2020_rygRP2VYwB
Stochastically Controlled Compositional Gradient for the Composition problem
We consider composition problems of the form $\frac{1}{n}\sum_{i=1}^{n} F_i\left(\frac{1}{n}\sum_{j=1}^{n} G_j(x)\right)$. Composition optimization arises in many important machine learning applications: reinforcement learning, variance-aware learning, nonlinear embedding, and many others. Both gradient descent and stochastic gradient descent are straightforward solutions, but both require computing $\frac{1}{n}\sum_{j=1}^{n} G_j(x)$ in each single iteration, which is inefficient, especially when $n$ is large. Therefore, with the aim of significantly reducing the query complexity of such problems, we designed a stochastically controlled compositional gradient algorithm that incorporates two kinds of variance reduction techniques and works in both strongly convex and non-convex settings. The strategy is also accompanied by a mini-batch version of the proposed method that improves query complexity with respect to the size of the mini-batch. Comprehensive experiments demonstrate the superiority of the proposed method over existing methods.
reject
All the reviewers note the similarity between this paper and the referenced works in terms of both the algorithm and the proof. The theoretical results may not be better than the existing results.
train
[ "S1gA4PuTYB", "Bkxe6xAfiH", "BylbXbAGir", "H1l-_gRfjr", "SygFY6kpKS", "HklwLDlucr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new method for empirical composition problems to which the vanilla SGD is not applicable because it has a finite-sum structure inside non-linear loss functions. A proposed method (named SCCG) is a combination of stochastic compositional gradient descent (SCGD) and stochastically controlled st...
[ 6, -1, -1, -1, 3, 3 ]
[ 3, -1, -1, -1, 5, 3 ]
[ "iclr_2020_rygRP2VYwB", "S1gA4PuTYB", "SygFY6kpKS", "HklwLDlucr", "iclr_2020_rygRP2VYwB", "iclr_2020_rygRP2VYwB" ]
iclr_2020_ByxJO3VFwB
Probabilistic modeling the hidden layers of deep neural networks
In this paper, we demonstrate that the parameters of Deep Neural Networks (DNNs) cannot satisfy the i.i.d. prior assumption, and that the assumption of activations being i.i.d. is not valid for all the hidden layers of DNNs. Hence, the Gaussian Process cannot correctly explain all the hidden layers of DNNs. Alternatively, we introduce a novel probabilistic representation for the hidden layers of DNNs in two aspects: (i) a hidden layer formulates a Gibbs distribution, in which neurons define the energy function, and (ii) the connection between two adjacent layers can be modeled by a product of experts model. Based on the probabilistic representation, we demonstrate that the entire architecture of DNNs can be explained as a Bayesian hierarchical model. Moreover, the proposed probabilistic representation indicates that DNNs have explicit regularizations defined by the hidden layers serving as prior distributions. Based on the Bayesian explanation for the regularization of DNNs, we propose a novel regularization approach to improve the generalization performance of DNNs. Simulation results validate the proposed theories.
reject
This paper makes a claim that the iid assumption for NN parameters does not hold. The paper then expresses the joint distribution as a Gibbs distribution and a PoE. Finally, there are some results on SGD as VI. Reviewers have mixed opinions about the paper and it is clear that the starting point of the paper (regarding the iid assumption) is unclear. I myself read through the paper and discussed this with the reviewer, and it is clear that there are many issues with this paper. Here are my concerns: - The parameters of a DNN are not iid *after* training. They are not supposed to be. So the empirical results where the correlation matrix is shown do not make the point that the paper is trying to make. - I agree with R2 that the prior is subjective and can be anything, and it is true that the "trained" NN may not correspond to a GP. This is actually well known, which is why it is difficult to match the performance of a trained GP and a trained NN. - The whole contribution about the connection to Gibbs distributions and PoE is not insightful. These things are already known, so I don't know why this is a contribution. - Regarding the connection between SGD and VI, they do *not* really prove anything. The derivation is *wrong*. In Eq. 85 in Appendix J2, the VI problem is written as KL(P||Q), but it should be KL(Q||P). Then this is argued to be the same as Eq. 88 obtained with SGD. This is not correct. Given these issues and based on reviewers' reactions to the content, I recommend to reject this paper.
train
[ "BJgMJnNvFH", "rJx3GcA6KS", "H1ePYbcssH", "H1lmEnZHiB", "ByxWPxfriB", "rJepUczHiB", "ryg3jE8soS", "r1giw_Yssr", "SkguKe5OoS", "SygEQLzHoH", "rylK6ur15r", "SklvKpFJOS", "BkxI83I1uS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "public" ]
[ "Summary of the Paper:\n \nThe authors claim the that i.i.d hypothesis that is often used in the prior when looking for the equivalence between neural networks and GP is not valid. Then, they propose a new interpretation of neural networks as Gibbs distributions (in the case of fully connected layers) and a MRF in...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1 ]
[ 1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1 ]
[ "iclr_2020_ByxJO3VFwB", "iclr_2020_ByxJO3VFwB", "iclr_2020_ByxJO3VFwB", "iclr_2020_ByxJO3VFwB", "rylK6ur15r", "BJgMJnNvFH", "ByxWPxfriB", "SygEQLzHoH", "SygEQLzHoH", "rJx3GcA6KS", "iclr_2020_ByxJO3VFwB", "BkxI83I1uS", "iclr_2020_ByxJO3VFwB" ]
iclr_2020_BylldnNFwS
On the Decision Boundaries of Deep Neural Networks: A Tropical Geometry Perspective
This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piece-wise linear non-linearity activations. We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural network of the form (Affine, ReLU, Affine). Specifically, we show that the decision boundaries are a subset of a tropical hypersurface, which is intimately related to a polytope formed by the convex hull of two zonotopes. The generators of the zonotopes are precise functions of the neural network parameters. We utilize this geometric characterization to shed light and new perspective on three tasks. In doing so, we propose a new tropical perspective for the lottery ticket hypothesis, where we see the effect of different initializations on the tropical geometric representation of the decision boundaries. Also, we leverage this characterization as a new set of tropical regularizers, which deal directly with the decision boundaries of a network. We investigate the use of these regularizers in neural network pruning (removing network parameters that do not contribute to the tropical geometric representation of the decision boundaries) and in generating adversarial input attacks (with input perturbations explicitly perturbing the decision boundaries geometry to change the network prediction of the input).
reject
This paper studies the decision boundaries of a certain class of neural networks (those with piecewise-linear activation functions) using tropical geometry, a subfield of algebraic geometry that leverages piecewise-linear structures. Building on earlier work, such piecewise-linear networks are shown to be representable as tropical rational functions. This characterisation is used to explain different phenomena of neural network training, such as the 'lottery ticket hypothesis', network pruning, and adversarial attacks. This paper received mixed reviews, owing to its very specialized area. Whereas R1 championed the submission for its technical novelty, the other reviewers felt the current exposition is too inaccessible and some application areas are not properly addressed. The AC shares these concerns, recommends rejection, and strongly encourages the authors to address the reviewers' concerns in the next iteration.
train
[ "Hyg__4H0KH", "Hkg1J89ptH", "H1ei8GCjsr", "HkeoFI5_sH", "BJlntK9diB", "BygFvF9_oS", "rkeC2uqOjr", "HygZ2P5OoS", "H1l2EucdsS", "BygZMuq_sS", "HygOhI9diS", "rylL75J0FH", "SyljaBIAPr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author" ]
[ "This paper proposed a framework based on a mathematical tool of tropical geometry to characterize the decision boundary of neural networks. The analysis is applied to network pruning, lottery ticket hypothesis and adversarial attacks.\n\nI have some questions:\n\nQ1: What benefit does introducing tropical geometry...
[ 1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1 ]
[ "iclr_2020_BylldnNFwS", "iclr_2020_BylldnNFwS", "BJlntK9diB", "Hyg__4H0KH", "BygFvF9_oS", "rkeC2uqOjr", "Hkg1J89ptH", "rylL75J0FH", "BygZMuq_sS", "HygZ2P5OoS", "HkeoFI5_sH", "iclr_2020_BylldnNFwS", "iclr_2020_BylldnNFwS" ]
iclr_2020_BJe-unNYPr
Accelerated Information Gradient flow
We present a systematic framework for Nesterov's accelerated gradient flows in the spaces of probabilities embedded with information metrics. Here two metrics are considered, including both the Fisher-Rao metric and the Wasserstein-2 metric. For the Wasserstein-2 metric case, we prove the convergence properties of the accelerated gradient flows, and introduce their formulations in Gaussian families. Furthermore, we propose a practical discrete-time algorithm in particle implementations with an adaptive restart technique. We formulate a novel bandwidth selection method, which learns the Wasserstein-2 gradient direction from Brownian-motion samples. Experimental results, including Bayesian inference, show the strength of the current method compared with the state-of-the-art.
reject
The paper makes its contribution by deriving an accelerated gradient flow for Wasserstein distances. It is technically strong and demonstrates its applicability using examples of Gaussian distributions and logistic regression. Reviewer 3 provided a deep technical assessment, pointing out the relevance to our ML community since these ideas are not yet widespread, but had concerns about the clarity of the paper. Reviewer 2 had similar concerns about clarity, and was also positive about its relevance to the ML community. The authors provided detailed responses to the technical questions posed by the reviewers. The AC believes that such work is a good fit for the conference. The reviewers felt that this paper does not yet achieve the aim of making this work more widespread and needs more focus on communication. This is a strong paper and the authors are encouraged to address the accessibility questions. We hope the review offers useful points of feedback for their future work.
train
[ "BklKUV1fcB", "ryexmKi7jB", "B1gIW9imiS", "SJxlRtoXoB", "B1eJdibycS" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "I acknowledge reading the rebuttal of the authors. Thank you for your clarifications and explanation. My point was this paper would make a good submission to ICLR if it were better motivated, presented, and explained to a wider audience. Unfortunately in its current form it can only reach a limited audience.\n\n####\...
[ 3, -1, -1, -1, 3 ]
[ 4, -1, -1, -1, 3 ]
[ "iclr_2020_BJe-unNYPr", "B1eJdibycS", "BklKUV1fcB", "BklKUV1fcB", "iclr_2020_BJe-unNYPr" ]
iclr_2020_S1gfu3EtDr
EgoMap: Projective mapping and structured egocentric memory for Deep RL
Tasks involving localization, memorization and planning in partially observable 3D environments are an ongoing challenge in Deep Reinforcement Learning. We present EgoMap, a spatially structured neural memory architecture. EgoMap augments a deep reinforcement learning agent’s performance in 3D environments on challenging tasks with multi-step objectives. The EgoMap architecture incorporates several inductive biases including a differentiable inverse projection of CNN feature vectors onto a top-down spatially structured map. The map is updated with ego-motion measurements through a differentiable affine transform. We show this architecture outperforms both standard recurrent agents and state of the art agents with structured memory. We demonstrate that incorporating these inductive biases into an agent’s architecture allows for stable training with reward alone, circumventing the expense of acquiring and labelling expert trajectories. A detailed ablation study demonstrates the impact of key aspects of the architecture and through extensive qualitative analysis, we show how the agent exploits its structured internal memory to achieve higher performance.
reject
This paper presents a spatially structured neural memory architecture that supports navigation tasks. The paper describes a complex neural architecture that integrates visual information, camera parameters, egocentric velocities, and a differentiable 2D map canvas. This structure is trained end-to-end with A2C in the VizDoom environment. The strong inductive priors captured by these geometric transformations are demonstrated to be effective on navigation-related tasks in the experiments in this environment. The reviewers found many strengths and a few weaknesses in this paper. One strength is that the paper pulls together many related ideas in the mapping literature and combines them in one integrated system. The reviewers liked the method's ability to leverage semantic reasoning and spatial computation. They liked the careful updating of the maps and the use of projective geometry. The reviewers were less convinced of the generality of this method. The lack of realism in these simulated environments left the reviewers unconvinced that the benefits observed from using projective geometry in this setting will continue to hold in more realistic environments. The use of fixed geometric transformations with RGBD inputs instead of learned transformations also makes this approach less general than a system that could handle RGB inputs. Finally, the reviewers noted that the contributions of this paper are not well aligned with the paper's claims. This paper is not yet ready for publication, as its claims and experiments were not sufficiently convincing to the reviewers.
train
[ "HkxwoyHhoH", "Skxs4zXnoB", "rylECweXsr", "rkBszjfor", "BylEEjdWiB", "r1xgNJJAFS", "HkxJAE3CKS", "rke0Lprc5H" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your further response. We understand the desire to push for more realistic environments. We believe that barring the 2D sprites you mentioned, the visuals of DeepMind Lab and ViZDoom are comparable. Both are 3D environments where an agent observes from a monocular viewpoint, and observations / featur...
[ -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, 1, 5, 5 ]
[ "Skxs4zXnoB", "rylECweXsr", "rke0Lprc5H", "r1xgNJJAFS", "HkxJAE3CKS", "iclr_2020_S1gfu3EtDr", "iclr_2020_S1gfu3EtDr", "iclr_2020_S1gfu3EtDr" ]
iclr_2020_r1gzdhEKvH
Neural Linear Bandits: Overcoming Catastrophic Forgetting through Likelihood Matching
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role. Neural-linear bandits leverage the representation power of deep neural networks and combine it with efficient exploration mechanisms, designed for linear contextual bandits, on top of the last hidden layer. Since the representation is being optimized during learning, information regarding exploration with "old" features is lost. Here, we propose the first limited memory neural-linear bandit that is resilient to this catastrophic forgetting phenomenon. We perform simulations on a variety of real-world problems, including regression, classification, and sentiment analysis, and observe that our algorithm achieves superior performance and shows resilience to catastrophic forgetting.
reject
Reviewers found the problem statement to have merit, but found the solution not completely justified. Bandit algorithms often come with theoretical justification because the feedback is such that the algorithm could be performing horribly without giving any indication of performance loss. With neural networks this is obviously challenging given the lack of supervised-learning guarantees, but reviewers remain skeptical and prefer not to speculate based on empirical results.
train
[ "Hyxjb2HhiS", "ryeRDglGjr", "HJgFGPRbiS", "rklCw40WiS", "SJlaTfYptS", "HJlg_dy0KB", "rklDDmY0tB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I have read other reviews and the response. The authors' response helps clarify some of my questions. I am happy to increase the score to weak accept. \n\nThat being said, given the contribution of this paper is mainly focusing on the 'finite memory' aspect, I believe it is necessary for the paper to show the comp...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 3, 4, 1 ]
[ "rklCw40WiS", "SJlaTfYptS", "HJlg_dy0KB", "rklDDmY0tB", "iclr_2020_r1gzdhEKvH", "iclr_2020_r1gzdhEKvH", "iclr_2020_r1gzdhEKvH" ]
iclr_2020_H1gz_nNYDS
AutoSlim: Towards One-Shot Architecture Search for Channel Numbers
We study how to set the number of channels in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot approach, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods with 100X lower search cost. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs).
reject
The paper presents a simple one-shot approach to searching the number of channels for deep convolutional neural networks. It trains a single slimmable network and then iteratively slims and evaluates the model to ensure a minimal accuracy drop. The method is simple and the results are promising. The main concern for this paper is the limited novelty. This work is based on slimmable networks, and the iterative slimming process is new but in some sense similar to DropPath. The rebuttal that PathNet "has not demonstrated results on searching number of channels, and we are among the first few one-shot approaches on architectural search for number of channels" seems weak.
train
[ "BJghFTlYjr", "r1l2e3eKjr", "B1xldnxtir", "H1xkjO_ptS", "Hye2e3RpFB", "Bkx0hBuM9B" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your review efforts! We have addressed all questions below:\n\nQ1: Thanks for delving deep into our discussion in experiments. In our view, the difference of channel number distribution may come from several reasons. First, VGGNet, which many previous pruning methods targeted to optimize, has much more ...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 4, 3, 5 ]
[ "H1xkjO_ptS", "Bkx0hBuM9B", "Hye2e3RpFB", "iclr_2020_H1gz_nNYDS", "iclr_2020_H1gz_nNYDS", "iclr_2020_H1gz_nNYDS" ]
iclr_2020_HJe7unNFDH
Scaling Up Neural Architecture Search with Big Single-Stage Models
Neural architecture search (NAS) methods have shown promising results discovering models that are both accurate and fast. For NAS, training a one-shot model has become a popular strategy to approximate the quality of multiple architectures (child models) using a single set of shared weights. To avoid performance degradation due to parameter sharing, most existing methods have a two-stage workflow where the best child model induced from the one-shot model has to be retrained or finetuned. In this work, we propose BigNAS, an approach that simplifies this workflow and scales up neural architecture search to target a wide range of model sizes simultaneously. We propose several techniques to bridge the gap between the distinct initialization and learning dynamics across small and big models with shared parameters, which enable us to train a single-stage model: a single model from which we can directly slice high-quality child models without retraining or finetuning. With BigNAS we are able to train a single set of shared weights on ImageNet and use these weights to obtain child models whose sizes range from 200 to 1000 MFLOPs. Our discovered model family, BigNASModels, achieve top-1 accuracies ranging from 76.5% to 80.9%, surpassing all state-of-the-art models in this range including EfficientNets.
reject
This paper presents a NAS method that avoids having to retrain models from scratch and targets a range of model sizes at once. The work builds on Yu & Huang (2019) and studies a combination of many different techniques. Several baselines use a weaker training method, and no code is made available, raising doubts concerning reproducibility. The reviewers asked various questions, but for several of these (e.g., running experiments on MNIST and CIFAR) the authors did not answer satisfactorily. Therefore, the reviewer asking these questions also declined to change his/her rating. Overall, as AnonReviewer #1 points out, the paper is very empirical. This is not necessarily a bad thing if the experiments yield a lot of insight, but this insight also appears limited. Therefore, I agree with the reviewers and recommend rejection.
train
[ "BJg0SbZYoS", "HyxFGyZtjS", "Byx-u0eYjr", "Bye2FkL2tB", "rkl2qQn3YH", "Hyx1C_yMcr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your review efforts! We have addressed all questions below:\n\nGeneral question: what’s the general lessons we can learn from the paper?\nIn this work we demonstrated that it is feasible to train a single-stage model from which we can directly slice high-quality child models without retraining or finetu...
[ -1, -1, -1, 3, 3, 6 ]
[ -1, -1, -1, 4, 5, 1 ]
[ "Bye2FkL2tB", "rkl2qQn3YH", "Hyx1C_yMcr", "iclr_2020_HJe7unNFDH", "iclr_2020_HJe7unNFDH", "iclr_2020_HJe7unNFDH" ]
iclr_2020_SJx4O34YvS
Semantics Preserving Adversarial Attacks
While progress has been made in crafting visually imperceptible adversarial examples, constructing semantically meaningful ones remains a challenge. In this paper, we propose a framework to generate semantics preserving adversarial examples. First, we present a manifold learning method to capture the semantics of the inputs. The motivating principle is to learn the low-dimensional geometric summaries of the inputs via statistical inference. Then, we perturb the elements of the learned manifold using the Gram-Schmidt process to induce the perturbed elements to remain in the manifold. To produce adversarial examples, we propose an efficient algorithm whereby we leverage the semantics of the inputs as a source of knowledge upon which we impose adversarial constraints. We apply our approach on toy data, images and text, and show its effectiveness in producing semantics preserving adversarial examples which evade existing defenses against adversarial attacks.
reject
This paper describes a method for generating adversarial examples from images and text such that they maintain the semantics of the input. The reviewers saw a lot of value in this work, but also some flaws. The review process seemed to help answer many questions, but a few remain: there are some questions about the strength of the empirical results on text after the authors' updates. Whether the adversarial images stay on the manifold is also questioned (are blurry or otherwise noisy images "on manifold"?). One reviewer raises good questions about the soundness of the comparison to the Song paper. I think this review process has been very productive, and I hope the authors will agree. I hope this feedback helps them to improve their paper.
train
[ "Byg5fBqTKr", "BylqONbjiB", "B1gTvzsjjB", "H1eTRJiojH", "H1lRw0HijS", "BJehXzRNKS", "HJlMvCdgoS", "Hkl2qPPxiH", "rJx-BNkjFB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "\n====== Updates ======\nI appreciate the authors' time and effort in the response. I have read the rebuttal, but I am not convinced by the authors' argument on using L2 (or L_\\infty) constraints. No matter whether L2 or L_\\infty constraint is used, the authors' method is not directly comparable to methods in So...
[ 1, -1, -1, -1, -1, 6, -1, -1, 6 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, 3 ]
[ "iclr_2020_SJx4O34YvS", "iclr_2020_SJx4O34YvS", "H1lRw0HijS", "iclr_2020_SJx4O34YvS", "BylqONbjiB", "iclr_2020_SJx4O34YvS", "Byg5fBqTKr", "rJx-BNkjFB", "iclr_2020_SJx4O34YvS" ]
iclr_2020_rklVOnNtwH
Out-of-Distribution Detection Using Layerwise Uncertainty in Deep Neural Networks
In this paper, we tackle the problem of detecting samples that are not drawn from the training distribution, i.e., out-of-distribution (OOD) samples, in classification. Many previous studies have attempted to solve this problem by regarding samples with low classification confidence as OOD examples using deep neural networks (DNNs). However, on difficult datasets or models with low classification ability, these methods incorrectly regard in-distribution samples close to the decision boundary as OOD samples. This problem arises because their approaches use only the features close to the output layer and disregard the uncertainty of the features. Therefore, we propose a method that extracts the uncertainties of features in each layer of DNNs using a reparameterization trick and combines them. In experiments, our method outperforms the existing methods by a large margin, achieving state-of-the-art detection performance on several datasets and classification models. For example, our method increases the AUROC score of prior work (83.8%) to 99.8% in DenseNet on the CIFAR-100 and Tiny-ImageNet datasets.
reject
The paper proposes a method for OOD detection which leverages the uncertainties associated with the features at the intermediate layers (and not just the output layer). All the reviewers agreed that while this is an interesting direction, the paper requires more work before it can be accepted. In particular, the reviewers raised several concerns about other relevant baselines, some of the reported empirical results, and clarity of the explanation. I encourage the authors to revise the draft based on the reviewers’ feedback and resubmit to a different venue.
train
[ "r1gOgic6FS", "HyxIetY2sH", "Skebbl-_jH", "r1gvxN0ooB", "rke-py-OsB", "BkgI8JWusB", "rkgR64t_sr", "HkxJSbehYS", "HJg4ZAZ0tH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "** post rebuttal start **\n\nAfter reading reviews and authors' response, I decided not to change my score.\nHowever, I feel that this paper is somewhat under-evaluated initially, so I hope the authors have an opportunity in another venue with their revision.\n\n\nDetailed comments:\n\n1.1. I recommend to add an a...
[ 3, -1, -1, -1, -1, -1, -1, 1, 1 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_rklVOnNtwH", "r1gvxN0ooB", "r1gOgic6FS", "rkgR64t_sr", "HJg4ZAZ0tH", "HkxJSbehYS", "rke-py-OsB", "iclr_2020_rklVOnNtwH", "iclr_2020_rklVOnNtwH" ]
iclr_2020_BJlLdhNFPr
Explaining A Black-box By Using A Deep Variational Information Bottleneck Approach
Interpretable machine learning has gained much attention recently. Briefness and comprehensiveness are necessary in order to provide a large amount of information concisely when explaining a black-box decision system. However, existing interpretable machine learning methods fail to consider briefness and comprehensiveness simultaneously, leading to redundant explanations. We propose the variational information bottleneck for interpretation, VIBI, a system-agnostic interpretable method that provides a brief but comprehensive explanation. VIBI adopts an information-theoretic principle, the information bottleneck principle, as a criterion for finding such explanations. For each instance, VIBI selects key features that are maximally compressed about an input (briefness), and informative about a decision made by a black-box system on that input (comprehensiveness). We evaluate VIBI on three datasets and compare with state-of-the-art interpretable machine learning methods in terms of both interpretability and fidelity evaluated by human and quantitative metrics.
reject
The authors present a system-agnostic interpretable method that provides a brief (=compressed) but comprehensive (=informative) explanation. Their system is built upon the idea of the variational information bottleneck (VIB). The authors compare against 3 state-of-the-art interpretable machine learning methods, and the evaluation is in terms of interpretability (=human understandable) and fidelity (=accuracy of approximating the black-box model). Overall, all reviewers agreed that the topic of model interpretability is an important one and the novel connection between IB and interpretable data-summaries is a very natural one. This manuscript has generated a lot of discussion among the reviewers during the rebuttal, and there are a number of concerns that are currently preventing me from recommending this paper for acceptance. The first concern relates to the lack of comparison against attention methods (I agree with the authors that this is a model-specific solution whereas they propose a model-agnostic one); however, attention is currently the elephant in the room and the first thing someone thinks of when thinking of interpretability. As such, the authors should have presented such a comparison. The second concern relates to the human evaluation protocol, which could be significantly improved (Why 100 samples from all models but 200 for VIBI? Given the small set of results, are these model differences significant? Similarly, assuming that we have multiple annotations per sample, what is the variance in the annotations?). This paper is currently borderline, and given reviewers' concerns and the limited space in the conference program I cannot recommend acceptance of this paper.
train
[ "BJgwCh42sH", "HyxBFVVnsr", "BkgRzHGniB", "rkxsOXG2sS", "SyeFbLlnor", "BkxSzVFiiH", "r1eN4QX8YB", "rygj9DIjiS", "SkgfjUIiiS", "r1giTflsoS", "S1lS-ltKsB", "rygQ12I4oS", "Skgb-8LViS", "Ske-h3jRYH", "ByeiT5Vg5r" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nIII) We sincerely thank you for making thoughtful discussions. We would like to first clarify responses we made before and then discuss further.\n\n - We simply wanted to point out that the participants were understanding that the task was to evaluate whether the explanations are good for getting an insight into...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "BkgRzHGniB", "iclr_2020_BJlLdhNFPr", "SkgfjUIiiS", "rygj9DIjiS", "BkxSzVFiiH", "ByeiT5Vg5r", "iclr_2020_BJlLdhNFPr", "rygQ12I4oS", "Skgb-8LViS", "S1lS-ltKsB", "Ske-h3jRYH", "Skgb-8LViS", "r1eN4QX8YB", "iclr_2020_BJlLdhNFPr", "iclr_2020_BJlLdhNFPr" ]
iclr_2020_rylUOn4Yvr
ROBUST DISCRIMINATIVE REPRESENTATION LEARNING VIA GRADIENT RESCALING: AN EMPHASIS REGULARISATION PERSPECTIVE
It is fundamental and challenging to train robust and accurate Deep Neural Networks (DNNs) when semantically abnormal examples exist. Although great progress has been made, there is still one crucial research question which is not thoroughly explored yet: What training examples should be focused on, and how much more should they be emphasised, to achieve robust learning? In this work, we study this question and propose gradient rescaling (GR) to solve it. GR modifies the magnitude of the logit vector's gradient to emphasise relatively easier training data points when noise becomes more severe, which functions as explicit emphasis regularisation to improve the generalisation performance of DNNs. Apart from regularisation, we connect GR to example weighting and designing robust loss functions. We empirically demonstrate that GR is highly anomaly-robust and outperforms the state-of-the-art by a large margin, e.g., increasing accuracy by 7% on CIFAR100 with 40% noisy labels. It is also significantly superior to standard regularisers in both clean and abnormal settings. Furthermore, we present comprehensive ablation studies to explore the behaviours of GR under different cases, which is informative for applying GR in real-world scenarios.
reject
The paper proposes a gradient rescaling method to make deep neural network training more robust to label noise. The intuition of focusing more on easier examples is not particularly new, but empirical results are promising. On the weak side, no theoretical justification is provided, and the method introduces extra hyperparameters that need to be tuned. Finally, more discussions on recent SOTA methods (e.g., Lee et al. 2019) as well as further comprehensive evaluations on various cases, such as asymmetric label noise, semantic label noise, and open-set label noise, would be needed to justify and demonstrate the effectiveness of the proposed method.
test
[ "Skee-hIhiH", "H1eW7jugsr", "B1gcrlv3oB", "BkfpxlP2or", "BJerdkPhjB", "BylMdxxsor", "SJgoxomGsr", "rJgz97JRYB", "H1xJEl_Z9r", "B1xnoAwm9B" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThank you for your questions.\n\n1&2: Is it always the case that “difficult” samples exhibit small logit values, and “easy” samples high logit values? If not,.....\nGenerally, the answer is yes. As training goes, the premise that semantic anomalies have small classification confidences while normal examples tend...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "H1xJEl_Z9r", "B1xnoAwm9B", "H1xJEl_Z9r", "H1xJEl_Z9r", "H1xJEl_Z9r", "SJgoxomGsr", "rJgz97JRYB", "iclr_2020_rylUOn4Yvr", "iclr_2020_rylUOn4Yvr", "iclr_2020_rylUOn4Yvr" ]
iclr_2020_SJeLO34KwS
Dimensional Reweighting Graph Convolution Networks
In this paper, we propose a method named Dimensional reweighting Graph Convolutional Networks (DrGCNs) to tackle the problem of variance between dimensional information in the node representations of GCNs. We prove that DrGCNs can reduce the variance of the node representations by connecting our problem to the theory of the mean field. In practice, however, we find that the degree to which DrGCNs help varies severely across different datasets. We revisit the problem and develop a new measure K to quantify the effect. This measure guides when we should use dimensional reweighting in GCNs and how much it can help. Moreover, it offers insights to explain the improvement obtained by the proposed DrGCNs. The dimensional reweighting block is light-weight and highly flexible, and can be built on most GCN variants. Carefully designed experiments, including several fixes on duplicates, information leaks, and wrong labels of the well-known node classification benchmark datasets, demonstrate the superior performance of DrGCNs over the existing state-of-the-art approaches. Significant improvements can also be observed on a large-scale industrial dataset.
reject
As Reviewer 2 pointed out in his/her response to the authors' rebuttal, this paper (at least in its current state) has significant shortcomings that need to be addressed before it merits acceptance.
train
[ "HklLi_XXjr", "r1gZTR6jsS", "H1l-npVijr", "BklU7VlFjS", "rJl5YULOjr", "rygPQTXXjS", "SJl8GYmXsr", "H1l1V3I-5B", "Hyxi-Xp6qS", "Sklf99RT9S" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Sorry for not being clear enough. \n\n1 For the \"error rate\":\n\n As pointed out, we do find that \"reducing error rate by 40%\" may be misunderstanding, we have corrected that to \" number of misclassified cases reduced by 40%\" in the revision. \n \n2 For matrix W:\n\nIn 2.1, the matrix $\\mathbf{W}$ is learne...
[ -1, -1, -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 1, 1 ]
[ "Sklf99RT9S", "H1l-npVijr", "BklU7VlFjS", "rJl5YULOjr", "HklLi_XXjr", "Hyxi-Xp6qS", "H1l1V3I-5B", "iclr_2020_SJeLO34KwS", "iclr_2020_SJeLO34KwS", "iclr_2020_SJeLO34KwS" ]
iclr_2020_HJx_d34YDB
VIDEO AFFECTIVE IMPACT PREDICTION WITH MULTIMODAL FUSION AND LONG-SHORT TEMPORAL CONTEXT
Predicting the emotional impact of videos using machine learning is a challenging task. Feature extraction, multi-modal fusion and temporal context fusion are crucial stages for predicting valence and arousal values of the emotional impact, but have not been successfully exploited. In this paper, we propose a comprehensive framework with innovative designs of model structure and multi-modal fusion strategy. We select the most suitable modalities for the valence and arousal tasks respectively, and each modal feature is extracted using a modality-specific deep model pre-trained on a large generic dataset. Two-time-scale structures, one for the intra-clip and the other for the inter-clip, are proposed to capture the temporal dependency of video content and emotional states. To combine the complementary information from multiple modalities, an effective and efficient residual-based progressive training strategy is proposed. Each modality is step-wisely combined into the multi-modal model, responsible for completing the missing parts of features. With all of the above, our proposed prediction framework achieves better performance by a large margin compared to the state-of-the-art.
reject
There is no author response for this paper. The paper addresses the affective analysis of video sequences in terms of continuous emotions of valence and arousal. The authors propose a multi-modal approach (combining modalities such as audio, pose estimation, basic emotions and scene analysis) and a multi-scale temporal feature extractor (to capture short and long temporal context via LSTMs) to tackle the problem. All the reviewers and AC agreed that the paper lacks (1) novelty, as the proposed approach is a combination of existing well-studied techniques without explanations of why and when this could be advantageous beyond the considered task, and (2) clarity and motivation -- see R2’s and R3’s concerns and suggestions on how to improve. We hope the reviews are useful for improving the paper.
val
[ "SygpRQ0sFB", "SygCYdSRFr", "Hkxiqjr8qr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work design a framework to predict valence and arousal values in the emotional impact. Although its performance is much better than previous works, I have some questions about the current submission:\n-- The writing is not clear enough. I have to guess the technical details based on the context. For example, ...
[ 1, 1, 3 ]
[ 3, 3, 4 ]
[ "iclr_2020_HJx_d34YDB", "iclr_2020_HJx_d34YDB", "iclr_2020_HJx_d34YDB" ]
iclr_2020_BJgdOh4Ywr
Visual Imitation with Reinforcement Learning using Recurrent Siamese Networks
It would be desirable for a reinforcement learning (RL) based agent to learn behaviour by merely watching a demonstration. However, defining rewards that facilitate this goal within the RL paradigm remains a challenge. Here we address this problem with Siamese networks, trained to compute distances between observed behaviours and the agent’s behaviours. Given a desired motion, such Siamese networks can be used to provide a reward signal to an RL agent via the distance between the desired motion and the agent’s motion. We experiment with an RNN-based comparator model that can compute distances in space and time between motion clips while training an RL policy to minimize this distance. Through experimentation, we have also found that the inclusion of multi-task data and an additional image encoding loss helps enforce temporal consistency. These two components appear to balance reward for matching a specific instance of a behaviour versus that behaviour in general. Furthermore, we focus here on a particularly challenging form of this problem where only a single demonstration is provided for a given task – the one-shot learning setting. We demonstrate our approach on humanoid agents in both 2D with 10 degrees of freedom (DoF) and 3D with 38 DoF.
reject
The main concern raised by reviewers is limited novelty, poor presentation, and limited experiments. All the reviewers appreciate the difficulty and importance of the problem. The rebuttal helped clarify novelty, but the other concerns remain.
val
[ "Hkefiga9jB", "rJeWINRKoH", "B1eU44AYsr", "BJe0PQlfiS", "BJxSVlxMsr", "HkgprklzjS", "HkgFxwxAFr", "rJeK8ITJ5r", "Syl0ilrb9B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to address these comments.\n\nAs a final note, I believe one of the main concerns with this work is that the experimental domain (humanoid), while a challenging control problem, is perhaps not as visually challenging as other RL domains. As the primary motivation for this work is to ...
[ -1, -1, -1, -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "BJxSVlxMsr", "Syl0ilrb9B", "rJeK8ITJ5r", "rJeK8ITJ5r", "HkgFxwxAFr", "Syl0ilrb9B", "iclr_2020_BJgdOh4Ywr", "iclr_2020_BJgdOh4Ywr", "iclr_2020_BJgdOh4Ywr" ]
iclr_2020_SJeF_h4FwB
Label Cleaning with Likelihood Ratio Test
To collect large scale annotated data, it is inevitable to introduce label noise, i.e., incorrect class labels. A major challenge is to develop robust deep learning models that achieve high test performance despite training set label noise. We introduce a novel approach that directly cleans labels in order to train a high quality model. Our method leverages statistical principles to correct data labels and has a theoretical guarantee of correctness. In particular, we use a likelihood ratio test (LRT) to flip the labels of training data. We prove that our LRT label correction algorithm is guaranteed to flip labels so that they are consistent with the true Bayesian optimal decision rule with high probability. We incorporate our label correction algorithm into the training of deep neural networks and train models that achieve superior test performance on multiple public datasets.
reject
This paper addresses a very interesting topic, and the authors clarified various issues raised by the reviewers. However, given the high competition of ICLR2020, this paper is unfortunately still below the bar. We hope that the detailed comments from the reviewers help you improve the paper for potential future submission.
train
[ "H1g4XV1njr", "HyxbpEyniB", "HJxd3ZynoB", "BygJz_12jH", "ryxITI7jFH", "rkxMYnMaKH", "r1xGBLK0tS" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for constructive comments. We address your concerns below. We split each of your comments into sub-comments such as (1-i) and (1-ii) for clarity.\n\n** (1-i). \"the first to correct labels with theoretical guarantees\", is not true.\nA: We are not sure which previous work you are referring to. As we orig...
[ -1, -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, -1, 3, 5, 5 ]
[ "rkxMYnMaKH", "ryxITI7jFH", "r1xGBLK0tS", "iclr_2020_SJeF_h4FwB", "iclr_2020_SJeF_h4FwB", "iclr_2020_SJeF_h4FwB", "iclr_2020_SJeF_h4FwB" ]
iclr_2020_H1eqOnNYDH
Data augmentation instead of explicit regularization
Modern deep artificial neural networks have achieved impressive results with models that have orders of magnitude more parameters than training examples, controlling overfitting with the help of regularization. Regularization can be implicit, as is the case of stochastic gradient descent and parameter sharing in convolutional layers, or explicit. Explicit regularization techniques, the most common of which are weight decay and dropout, have proven successful in terms of improved generalization, but they blindly reduce the effective capacity of the model, introduce sensitive hyper-parameters and require deeper and wider architectures to compensate for the reduced capacity. In contrast, data augmentation techniques exploit domain knowledge to increase the number of training examples and improve generalization without reducing the effective capacity and without introducing model-dependent parameters, since they are applied to the training data. In this paper we systematically contrast data augmentation and explicit regularization on three popular architectures and three data sets. Our results demonstrate that data augmentation alone can achieve the same or higher performance than regularized models and exhibits much higher adaptability to changes in the architecture and the amount of training data.
reject
The paper explores the setting of *just* using data augmentation without an additional regularization term included. The submission claims that comparatively good performance can be achieved with data augmentation alone. The reviewers unanimously felt that the submission was not suitable for publication at ICLR. The reasons included skepticism that augmentation without regularization is a useful setting to explore, as well as concerns about the experiments used to support the conclusions in the paper. In particular, there were concerns that the experiments do not match best practice and that the error rates were too high. Finally, there were concerns about the clarity of definitions of "implicit" and "explicit" regularization.
train
[ "SklhygmKtS", "S1gEohNhjH", "H1eXN3N3or", "ryxyFY4hiS", "ByetY8N2sr", "B1ghHfV3sr", "ByghP5rssH", "Skgkbi9OsS", "HklFbtY_jS", "HkxWXuKOsS", "HJxI6FSQoB", "Hkg5p7vGKB", "Skel4cIaKB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "The paper questions the conventional wisdom of using explicit regularization methods (e.g., L2, dropout) in training neural networks. The authors compare data augmentation with explicit regularization on several image classification datasets, architectures and amount of data, concluding using data augmentations is...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2020_H1eqOnNYDH", "SklhygmKtS", "SklhygmKtS", "SklhygmKtS", "Hkg5p7vGKB", "Hkg5p7vGKB", "Skel4cIaKB", "HJxI6FSQoB", "SklhygmKtS", "Hkg5p7vGKB", "iclr_2020_H1eqOnNYDH", "iclr_2020_H1eqOnNYDH", "iclr_2020_H1eqOnNYDH" ]
iclr_2020_r1x3unVKPS
Support-guided Adversarial Imitation Learning
We propose Support-guided Adversarial Imitation Learning (SAIL), a generic imitation learning framework that unifies support estimation of the expert policy with the family of Adversarial Imitation Learning (AIL) algorithms. SAIL addresses two important challenges of AIL, including the implicit reward bias and potential training instability. We also show that SAIL is at least as efficient as standard AIL. In an extensive evaluation, we demonstrate that the proposed method effectively handles the reward bias and achieves better performance and training stability than other baseline methods on a wide range of benchmark control tasks.
reject
The submission proposes a method for adversarial imitation learning that combines two previous approaches - GAIL and RED - by simply multiplying their reward functions. The claim is that this adaptation allows for better learning - both handling reward bias and improving training stability. The reviewers were divided in their assessment of the paper, criticizing the empirical results and the claims made by the authors. In particular, the primary claims of handling reward bias and reducing variance seem to be not well justified, including results which show that training stability only substantially improves when SAIL-b, which uses reward clipping, is used. Although the paper is promising, the recommendation is for a reject at this time. The authors are encouraged to clarify their claims and supporting experiments and to validate their method on more challenging domains.
train
[ "S1ex1UxPKS", "rJxtJOspFr", "SkepzwVwor", "BketIuNPiH", "rJeof8EPir", "HJlhjasptB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "**Summary of the paper: \nThe paper proposes an IL method named support-guided adversarial IL (SAIL), which is based on generative adversarial IL (GAIL) (Ho and Ermon, 2016) and random expert distillation (RED) (Wang et al., 2019). The key idea of SAIL is to construct a reward function by multiplying reward functi...
[ 6, 6, -1, -1, -1, 1 ]
[ 4, 3, -1, -1, -1, 4 ]
[ "iclr_2020_r1x3unVKPS", "iclr_2020_r1x3unVKPS", "rJxtJOspFr", "S1ex1UxPKS", "HJlhjasptB", "iclr_2020_r1x3unVKPS" ]
iclr_2020_Bygadh4tDB
Low Bias Gradient Estimates for Very Deep Boolean Stochastic Networks
Stochastic neural networks with discrete random variables are an important class of models for their expressivity and interpretability. Since direct differentiation and backpropagation are not possible, Monte Carlo gradient estimation techniques have been widely employed for training such models. Efficient stochastic gradient estimators, such as Straight-Through and Gumbel-Softmax, work well for shallow models with one or two stochastic layers. Their performance, however, suffers with increasing model complexity. In this work we focus on stochastic networks with multiple layers of Boolean latent variables. To analyze such networks, we employ the framework of harmonic analysis for Boolean functions. We use it to derive an analytic formulation for the source of bias in the biased Straight-Through estimator. Based on this analysis we propose \emph{FouST}, a simple gradient estimation algorithm that relies on three simple bias reduction steps. Extensive experiments show that FouST performs favorably compared to state-of-the-art biased estimators, while being much faster than unbiased ones. To the best of our knowledge, FouST is the first gradient estimator to train very deep stochastic neural networks, with up to 80 deterministic and 11 stochastic layers.
reject
Straight-Through is a popular, yet not theoretically well-understood, biased gradient estimator for Bernoulli random variables. The low variance of this estimator makes it a highly useful tool for training large-scale models with binary latents. However, the bias of this estimator may cause divergence in training, which is a significant practical issue. The paper develops a Fourier analysis of the Straight-Through estimator and provides an expression for the bias of the estimator in terms of the Fourier coefficients of the considered function. The paper in its current form is not good enough for publication, and the reviewers believe that the paper contains significant mistakes when deriving the estimator. Furthermore, the Fourier analysis seems unnecessary.
train
[ "rJglOLeCFB", "Skxn5jNhsH", "BkgiaQoTFH", "rkeybMznoB", "H1g_YlWuoS", "H1lowrVvoH", "H1gwrGJwjS", "HyljMM4wsS", "ryewcA-voH", "SJewcz1Dor", "rkg8tekDor", "rkeh3aCUjH", "ryg0epAUiB", "H1eAZuvkqr" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "------------- updated after rebuttal -------------------\n\nI thank the authors for clarifying and correcting the notations in Lemma 3. Though I still think the current state of the derivation is presented in a suboptimal way, and as a result, can be misleading to people.\n\nThe Fourier analysis used to give the r...
[ 3, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_Bygadh4tDB", "rkeybMznoB", "iclr_2020_Bygadh4tDB", "BkgiaQoTFH", "H1lowrVvoH", "HyljMM4wsS", "rkg8tekDor", "ryewcA-voH", "rkeh3aCUjH", "H1gwrGJwjS", "BkgiaQoTFH", "rJglOLeCFB", "H1eAZuvkqr", "iclr_2020_Bygadh4tDB" ]
iclr_2020_S1lAOhEKPS
X-Forest: Approximate Random Projection Trees for Similarity Measurement
Similarity measurement plays a central role in various data mining and machine learning tasks. Generally, an ideal similarity measurement solution should possess the following three properties: accuracy, efficiency and independence from prior knowledge. Yet unfortunately, vital as similarity measurements are, no previous works have addressed all of them. In this paper, we propose X-Forest, consisting of a group of approximate Random Projection Trees, such that all three targets mentioned above are tackled simultaneously. Our key techniques are as follows. First, we introduce RP Trees into the task of similarity measurement such that accuracy is improved. In addition, we enforce certain layers in each tree to share identical projection vectors, such that higher efficiency is achieved. Last but not least, we introduce randomness into the partitioning to eliminate its reliance on prior knowledge. We conduct experiments on three real-world datasets, whose results demonstrate that our model, X-Forest, reaches an efficiency of up to 3.5 times higher than RP Trees with a negligible compromise in accuracy, while also outperforming traditional Euclidean distance-based similarity metrics by as much as 20% on clustering tasks. We have released our code anonymously on GitHub for reproducibility.
reject
This paper proposes a new method for measuring pairwise similarity between data points. The method is based on the idea that similarity between two data points to be the probability (over the randomness in constructing the trees) that they are close in a Random Projection tree. Reviewers found important limitations in this work, pertaining to clarity of mathematical statements and novelty. Unfortunately, the authors did not provide a rebuttal, so these concerns remain. Moreover, the program committee was made aware of the striking similarities between this submission and the preprint https://arxiv.org/abs/1908.10506 from Yan et al., which by itself would be grounds for rejection due to concerns of potential plagiarism. As a result, the AC recommends rejection at this time.
train
[ "Skx3jJ6qFB", "rkeLgBkTtS", "SkgRmZra5r" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper considers random projection forests for similarity measurements (which have been proposed earlier) and proposes to accelerate them by reusing projections. Tree levels up-to level-X use distinct random transformations, and subsequent levels cycle through existing projections (X of them). As this kind of r...
[ 1, 3, 3 ]
[ 3, 3, 4 ]
[ "iclr_2020_S1lAOhEKPS", "iclr_2020_S1lAOhEKPS", "iclr_2020_S1lAOhEKPS" ]
iclr_2020_r1lkKn4KDS
Learning Reusable Options for Multi-Task Reinforcement Learning
Reinforcement learning (RL) has become an increasingly active area of research in recent years. Although there are many algorithms that allow an agent to solve tasks efficiently, they often ignore the possibility that prior experience related to the task at hand might be available. For many practical applications, it might be infeasible for an agent to learn how to solve a task from scratch, given that this is generally a computationally expensive process; however, prior experience could be leveraged to make these problems tractable in practice. In this paper, we propose a framework for exploiting existing experience by learning reusable options. We show that after an agent learns policies for solving a small number of problems, we are able to use the trajectories generated from those policies to learn reusable options that allow an agent to quickly learn how to solve novel and related problems.
reject
This paper presents a novel option discovery mechanism that incrementally learns reusable options, usable across multiple tasks, from a small number of policies. The primary concern with this paper was a number of issues around the experiments. Specifically, the reviewers took issue with the definition of novel tasks in the Atari context. A more robust discussion and analysis of which tasks are considered novel would be useful. Comparisons to other option discovery papers on the Atari domains are also required. Additionally, one reviewer had concerns about the hard limit on option execution length which remain unresolved following the discussion. While this is really promising work, it is not ready to be accepted at this stage.
val
[ "BJlUwRksoS", "SJl4PXJiiS", "HkxTvn_qor", "BJxpU2QbsS", "BklfYUM-jr", "SJxBWbX-ir", "SJg49tWqtB", "HyeGkWrhFH", "HyetQDATYH" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the explanations. These do clarify some of the issues I had with the paper.\n\n> what you refer to by investigating \"effect of task distribution or diversity\"\n\nMy comment was mainly meant to indicate that I think the empirical evaluations are somewhat limited. You do not really vary the training ...
[ -1, -1, -1, -1, -1, -1, 3, 6, 1 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "SJxBWbX-ir", "HkxTvn_qor", "BJxpU2QbsS", "HyetQDATYH", "HyeGkWrhFH", "SJg49tWqtB", "iclr_2020_r1lkKn4KDS", "iclr_2020_r1lkKn4KDS", "iclr_2020_r1lkKn4KDS" ]
iclr_2020_H1ekF2EYDH
TechKG: A Large-Scale Chinese Technology-Oriented Knowledge Graph
A knowledge graph is a valuable kind of knowledge base that would benefit many AI-related applications. Up to now, many large-scale knowledge graphs have been built. However, most of them are non-Chinese and designed for general purposes. In this work, we introduce TechKG, a large-scale Chinese knowledge graph that is technology-oriented. It is built automatically from massive technical papers published in Chinese academic journals across different research domains. Some carefully designed heuristic rules are used to extract high-quality entities and relations. In total, it comprises over 260 million triplets built upon more than 52 million entities from 38 research domains. Our preliminary experiments indicate that TechKG has high adaptability and can be used as a dataset for many diverse AI-related applications.
reject
This paper presents a large-scale automatically extracted knowledge base in Chinese which contains information about entities and their relations present in academic papers. The authors have collected several papers that come from around 38 different domains. As such this is a dataset creation paper where the authors have used existing methodologies to perform relation extraction in Chinese. After having read the reviews and followup replies by authors, the main criticisms of the paper still hold. In addition to the lack of technical contribution, I feel that the writing of the paper can be improved a lot, for example, I would like to see a table with some example entities and relations extracted. That said, with further improvements this paper could potentially be a good contribution to LREC which is focused on dataset creation. In its current form, I recommend the paper to be rejected.
test
[ "BklNzvG0FS", "r1gB7ikNsH", "HJlv2U1EiS", "BJeME-1VoH", "HJlsq4XpFS", "H1euBO6PcB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I have read the author response, thank you for responding.\n\nOriginal review:\nThis paper presents the extraction of a bibliographic database of Chinese technical papers. This database could potentially be a valuable resource for the community. However, the paper is mis-targeted to the ICLR conference, as it do...
[ 1, -1, -1, -1, 1, 3 ]
[ 4, -1, -1, -1, 1, 5 ]
[ "iclr_2020_H1ekF2EYDH", "H1euBO6PcB", "HJlsq4XpFS", "BklNzvG0FS", "iclr_2020_H1ekF2EYDH", "iclr_2020_H1ekF2EYDH" ]
iclr_2020_BygJKn4tPr
Effective Mechanism to Mitigate Injuries During NFL Plays
The NFL (American football), regarded as the premier sports icon of America, has been severely criticized in recent years for exposing players to dangerous injuries, a growing crisis as players' lives are increasingly at risk. Concussions, the serious brain traumas experienced during the course of NFL play, have displayed a dramatic rise in recent seasons, culminating in an alarming rate in 2017/18. Acknowledging the potential risk, the NFL has been trying to fight back via the NeuroIntel AI mechanism as well as by modifying existing game rules and risky play practices to reduce the rate of concussions. As a remedy, we suggest an effective mechanism to extensively analyse potential concussion risks, adopting predictive analysis to project the injury risk percentage for each play and positional impact analysis to suggest safer team formation pairs that lessen injuries, offering a comprehensive study of NFL injury analysis. The proposed data-analytical approach differentiates itself from similar approaches that focused only on descriptive analysis, instead aiming for a bigger context with predictive modelling and formation-pair mining that would assist in modifying existing rules to tackle injury concerns. The predictive model, which works with real-time inputs from a Kafka stream processor, and the identification of risky formation pairs by designing an FP-Matrix, make this a far-reaching solution for analysing injury data on various grounds wherever applicable.
reject
All reviewers recommend reject, and there is no rebuttal.
train
[ "Skl9vNA6tB", "ryxj7V6k9S", "rklPBJyfcr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper aims to improve injury prediction and modeling in the National Football League (NFL) using machine learning. Unfortunately, the lack of clarity in the paper and poor writing prevents me from writing a thorough review. Given that the authors seem to apply off-the-shelf machine learning algorithms to NFL d...
[ 1, 1, 1 ]
[ 1, 4, 1 ]
[ "iclr_2020_BygJKn4tPr", "iclr_2020_BygJKn4tPr", "iclr_2020_BygJKn4tPr" ]
iclr_2020_BJlgt2EYwr
Stabilizing DARTS with Amended Gradient Estimation on Architectural Parameters
Differentiable neural architecture search has been a popular methodology for exploring architectures for deep learning. Despite the great advantage of search efficiency, it often suffers from weak stability, which prevents it from being applied to a large search space or flexibly adjusted to different scenarios. This paper investigates DARTS, currently the most popular differentiable search algorithm, and points out an important factor of instability, which lies in its approximation of the gradients of architectural parameters. As it stands, the optimization algorithm can converge to a different point, which results in dramatic inaccuracy in the re-training process. Based on this analysis, we propose an amending term for computing architectural gradients by making use of a direct property of the optimality of network parameter optimization. Our approach mathematically guarantees that gradient estimation follows a roughly correct direction, which leads the search stage to converge on reasonable architectures. In practice, our algorithm is easily implemented and added to DARTS-based approaches efficiently. Experiments on CIFAR and ImageNet demonstrate that our approach enjoys accuracy gains and, more importantly, enables DARTS-based approaches to explore much larger search spaces that have not been studied before.
reject
This paper studies Differentiable Neural Architecture Search, focusing on a problem identified with the approximated gradient with respect to architectural parameters, and proposing an improved gradient estimation procedure. The authors claim that this alleviates the tendency of DARTS to collapse on degenerate architectures consisting of e.g. all skip connections, presently dealt with via early stopping. Reviewers generally liked the theoretical contribution, but found the evidence insufficient to support the claims. Requests for experiments by R1 with matched hyperparameters were granted (and several reviewers felt this strengthened the submission), though relegated to an appendix, but after a lengthy discussion reviewers still felt the evidence was insufficient. R1 also contended that the authors were overly dogmatic regarding "AutoML" -- that the early stopping heuristic was undesirable because of the additional human knowledge involved. I appreciate the sentiment but find this argument unconvincing -- while it is true that a great deal of human knowledge is still necessary to make architecture search work, the aim is certainly to develop fool-proof automatic methods. As reviewers were still unsatisfied with the empirical investigation after revisions and found that the weight of the contribution was insufficient for a 10 page paper, I recommend rejection at this time, while encouraging the authors to take seriously the reviewers' requests for a systematic study of the source of the empirical gains in order to strengthen their paper for future submission.
test
[ "SJeCAhFpFH", "rJgeM3MhtB", "SkgXDcgdqB", "BJxy0kc2oB", "SyevuCMQjS", "rJx99e7hoH", "BkePGgmhjB", "S1esW_bsiH", "rJxlp0gjjS", "Hkg2lNlojS", "BklBS0Gmsr", "Syg14kgjir", "SJx1-1XmoH", "B1x1kJXQiB", "HJedoAG7jH", "S1gPFMEBcr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "--- response after the author's rebuttal ---\n\nThank the authors to provide their response and let me clearly understand their contribution.\n\nHowever, after considering those, I will not change my rating. The paper identifies the inaccurate gradient computation in the original DARTS and propose a new estimation...
[ 3, 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_BJlgt2EYwr", "iclr_2020_BJlgt2EYwr", "iclr_2020_BJlgt2EYwr", "BkePGgmhjB", "SJeCAhFpFH", "S1esW_bsiH", "rJxlp0gjjS", "Syg14kgjir", "Hkg2lNlojS", "SyevuCMQjS", "SJeCAhFpFH", "BklBS0Gmsr", "rJgeM3MhtB", "S1gPFMEBcr", "SkgXDcgdqB", "iclr_2020_BJlgt2EYwr" ]
iclr_2020_BygZK2VYvB
Utilizing Edge Features in Graph Neural Networks via Variational Information Maximization
Graph Neural Networks (GNNs) broadly follow the scheme that the representation vector of each node is updated recursively using the message from neighbor nodes, where the message of a neighbor is usually pre-processed with a parameterized transform matrix. To make better use of edge features, we propose the Edge Information maximized Graph Neural Network (EIGNN) that maximizes the Mutual Information (MI) between edge features and message passing channels. The MI is reformulated as a differentiable objective via a variational approach. We theoretically show that the newly introduced objective enables the model to preserve edge information, and empirically corroborate the enhanced performance of MI-maximized models across a broad range of learning tasks including regression on molecular graphs and relation prediction in knowledge graphs.
reject
This paper proposed an auxiliary loss based on mutual information for graph neural networks. The loss maximizes the mutual information between the edge representation and the corresponding edge feature in the GNN ‘message passing’ function. GNNs with edge features have already been proposed in the literature. Furthermore, the reviewers think the paper needs further improvement in explaining more clearly the motivation and rationale behind the method.
train
[ "rke365VAKH", "rylgNkQBor", "BkgVx1QBir", "SJe_c0fHiS", "Skxub0GSjH", "BylQWd5str", "Hyg-qWu3tB", "SJg2znvb5H" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a mutual information term into the training objective of message passing graph neural networks. The additional term favors the preservation on information in a mapping from an input edge feature vector e_{i,j} to a weight matrix f(e_{i,j}) used in computing messages across the edge from node...
[ 3, -1, -1, -1, -1, 6, 8, 3 ]
[ 4, -1, -1, -1, -1, 5, 4, 5 ]
[ "iclr_2020_BygZK2VYvB", "rke365VAKH", "Hyg-qWu3tB", "SJg2znvb5H", "BylQWd5str", "iclr_2020_BygZK2VYvB", "iclr_2020_BygZK2VYvB", "iclr_2020_BygZK2VYvB" ]
iclr_2020_BklWt24tvH
Learning Structured Communication for Multi-agent Reinforcement Learning
Learning to cooperate is crucial for many practical large-scale multi-agent applications. In this work, we consider an important collaborative task, in which agents learn to efficiently communicate with each other under a multi-agent reinforcement learning (MARL) setting. Despite the fact that there have been a number of existing works along this line, achieving global cooperation at scale is still challenging. In particular, most of the existing algorithms suffer from issues such as scalability and high communication complexity, in the sense that when the agent population is large, it can be difficult to extract effective information for high-performance MARL. In contrast, the proposed algorithmic framework, termed Learning Structured Communication (LSC), is not only scalable but also achieves high-quality communication (and hence efficient learning). The key idea is to allow the agents to dynamically learn a hierarchical communication structure, under which a graph neural network (GNN) is used to efficiently extract useful information to be exchanged between neighboring agents. A number of new techniques are proposed to tightly integrate communication structure learning, GNN optimization and MARL tasks. Extensive experiments are performed to demonstrate that the proposed LSC framework enjoys high communication efficiency, scalability and global cooperation capability.
reject
The paper focuses on large-scale multi-agent reinforcement learning and proposes Learning Structured Communication (LSC) to deal issues of scale and learn sample efficiently. Reviewers are positive about the presented ideas, but note remaining limitations. In particular, the empirical validation does not lead to sufficiently novel insights, and additional analysis is needed to round out the paper.
train
[ "SyeJySLhiH", "HylaKsaDiB", "Hyg9K9TviH", "BklE3tawoS", "S1lkU_SAtS", "Skx5u5fZqS", "SkeSTs9dqH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We update the paper again.\n\nThe repeated training results of all compared algorithms including ATOC are updated in the new Figure 7, which demonstrates the advantage of the proposed LSC algorithm in SCII enviroment.", "Thank you for your valuable and inspiring comments. \n\nThe focus of the paper is to address...
[ -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "BklE3tawoS", "S1lkU_SAtS", "Skx5u5fZqS", "SkeSTs9dqH", "iclr_2020_BklWt24tvH", "iclr_2020_BklWt24tvH", "iclr_2020_BklWt24tvH" ]
iclr_2020_BklEF3VFPB
Towards Stable and Comprehensive Domain Alignment: Max-Margin Domain-Adversarial Training
Domain adaptation tackles the problem of transferring knowledge from a label-rich source domain to an unlabeled or label-scarce target domain. Recently domain-adversarial training (DAT) has shown promising capacity to learn a domain-invariant feature space by reversing the gradient propagation of a domain classifier. However, DAT is still vulnerable in several aspects including (1) training instability due to the overwhelming discriminative ability of the domain classifier in adversarial training, (2) restrictive feature-level alignment, and (3) lack of interpretability or systematic explanation of the learned feature space. In this paper, we propose a novel Max-margin Domain-Adversarial Training (MDAT) by designing an Adversarial Reconstruction Network (ARN). The proposed MDAT stabilizes the gradient reversing in ARN by replacing the domain classifier with a reconstruction network, and in this manner ARN conducts both feature-level and pixel-level domain alignment without involving extra network structures. Furthermore, ARN demonstrates strong robustness to a wide range of hyper-parameter settings, greatly alleviating the task of model selection. Extensive empirical results validate that our approach outperforms other state-of-the-art domain alignment methods. Additionally, the reconstructed target samples are visualized to interpret the domain-invariant feature space, which conforms with our intuition.
reject
This paper proposes max-margin domain-adversarial training with an adversarial reconstruction network that stabilizes the gradient by replacing the domain classifier. Reviewers and the AC think that the method is interesting and the motivation is reasonable. Concerns were raised regarding weak experimental results, the limited diversity of datasets, and the comparison to state-of-the-art methods. The paper needs to show how the method works with respect to stability and interpretability. The paper should also clearly relate the contrastive loss for reconstruction to previous work, given that both the loss and the reconstruction idea have been extensively explored for DA. Finally, the theoretical analysis is shallow and the gap between the theory and the algorithm needs to be closed. Overall this is a borderline paper. Considering the bar of ICLR and the limited quota, I recommend rejection.
train
[ "HkxaMiKaFB", "SJxnv049ir", "r1xjXBH9jr", "HkxmDDVqjH", "HJx-7YCVtr", "HJxSQjc9cr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work proposes Adversarial Reconstruction Network (ARN), a network architecture, and Max-margin Domain-Adversarial Training (MDAT), an objective and training procedure for unsupervised domain adaptation. Similar to domain adversarial approaches, the generator aims at finding domain invariant representation whi...
[ 6, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, 5, 4 ]
[ "iclr_2020_BklEF3VFPB", "HkxaMiKaFB", "HJx-7YCVtr", "HJxSQjc9cr", "iclr_2020_BklEF3VFPB", "iclr_2020_BklEF3VFPB" ]
iclr_2020_SJg4Y3VFPS
Group-Connected Multilayer Perceptron Networks
Despite the success of deep learning in domains such as image, voice, and graphs, there has been little progress in deep representation learning for domains without a known structure between features. Consider, for instance, a tabular dataset of different demographic and clinical factors where the feature interactions are not given a priori. In this paper, we propose Group-Connected Multilayer Perceptron (GMLP) networks to enable deep representation learning in these domains. GMLP is based on the idea of learning expressive feature combinations (groups) and exploiting them to reduce the network complexity by defining local group-wise operations. During the training phase, GMLP learns a sparse feature grouping matrix using temperature-annealed softmax with an added entropy loss term to encourage sparsity. Furthermore, an architecture is suggested which resembles binary trees, where group-wise operations are followed by pooling operations to combine information, reducing the number of groups as the network grows in depth. To evaluate the proposed method, we conducted experiments on five different real-world datasets covering various application areas. Additionally, we provide visualizations on MNIST and synthesized data. According to the results, GMLP is able to successfully learn and exploit expressive feature combinations and achieve state-of-the-art classification performance on different datasets.
reject
The authors propose Group-Connected Multilayer Perceptron Networks, which allow expressive feature combinations to learn meaningful deep representations. They experiment with different datasets and show that the proposed method gives improved performance. The authors have done a commendable job of replying to the queries of the reviewers and addressed many of their concerns. However, the main concern still remains: the improvements are not very significant on most datasets except the MNIST dataset. I understand the authors' argument that other papers have also reported small improvements on these datasets and hence it is ok to report small improvements. However, the reviewers and the AC did not find this argument very convincing. Given that this is not a theoretical paper and that the novelty is not very high (as pointed out by R1), strong empirical results are expected. Hence, at this point, I recommend that the paper cannot be accepted.
val
[ "r1lQFeXjiS", "ryxYLlmisr", "H1xylgQisS", "B1xQh1QojH", "BJgINkQooH", "HJeu3xwVoS", "ryeMyViJ5H", "BJgm18Gaqr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nThank you for reviewing the manuscript and helpful comments. Please find a point-to-point response to your comments in the following.\n\n-------------------------------------------\n*Comment: “1. The intuition of this approach should be better explained. In equation (1) the features are group together using a bi...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "ryeMyViJ5H", "ryeMyViJ5H", "BJgm18Gaqr", "BJgm18Gaqr", "HJeu3xwVoS", "iclr_2020_SJg4Y3VFPS", "iclr_2020_SJg4Y3VFPS", "iclr_2020_SJg4Y3VFPS" ]
iclr_2020_BkeHt34Fwr
Regional based query in graph active learning
Graph convolution networks (GCN) have emerged as a leading method to classify nodes and graphs. These GCNs have been combined with active learning (AL) methods, where a small chosen set of tagged examples can be used. Most AL-GCN methods use the sample class uncertainty as the selection criterion, and not the graph. In contrast, representative sampling uses the graph, but not the prediction. We propose to combine the two and query nodes based on the uncertainty of the graph around them. Here we propose two novel methods to select optimal nodes in AL-GCN that explicitly use the graph information to query for optimal nodes. The first method, named regional uncertainty, is an extension of the classical entropy measure, but instead of sampling nodes with high entropy, we propose to sample nodes surrounded by nodes of different classes, or nodes with high ambiguity. The second method, called Adaptive Page-Rank, is an extension of the page-rank algorithm, where nodes that have a low probability of being reached by random walks from tagged nodes are selected. We show that the latter is optimal when the fraction of tagged nodes is low, and when this fraction grows to one over the average degree, the regional uncertainty performs better than all existing methods. While we have tested these methods on graphs, they can be extended to any classification problem where a distance can be defined between the input samples.
reject
The paper proposes a method for performing active learning on graph convolutional networks. In particular, instead of performing uncertainty-based sampling at the individual node level, the authors propose to look at region-based uncertainty. They propose an efficient algorithm based on page rank. Empirically, they compare their method to several other leading methods, comparing favorably. Reviewers found the work poorly organized and difficult to read. The idea to use region-based estimates is intuitive but feels like nothing more than just that. It's not clear if there is a mathematical basis to justify such a method (e.g. an analysis of sample complexity as has been accomplished in other graph active learning problems, Dasarathy, Nowak, Zhu 2015). The idea requires further study and justification, and the paper needs an improved exposition. Finally, the authors were not anonymized on the PDF.
train
[ "HkeT8LYtoS", "SygL67FKjS", "ByxMCfDCKH", "Skxa6zrkqS" ]
[ "author", "author", "official_reviewer", "official_reviewer" ]
[ "Reviewer 2 has the same main comment as reviewer 1. “Overall it remains unclear *how* to select the right strategy (before seeing the results for a dataset) i.e. which of the proposed approaches or variants should one select for a new dataset.”. Again, we added now such a section in the discussion. \n\nBeyond that...
[ -1, -1, 1, 6 ]
[ -1, -1, 4, 1 ]
[ "ByxMCfDCKH", "Skxa6zrkqS", "iclr_2020_BkeHt34Fwr", "iclr_2020_BkeHt34Fwr" ]
iclr_2020_BJx8Fh4KPB
RL-LIM: Reinforcement Learning-based Locally Interpretable Modeling
Understanding black-box machine learning models is important towards their widespread adoption. However, developing globally interpretable models that explain the behavior of the entire model is challenging. An alternative approach is to explain black-box models through explaining individual prediction using a locally interpretable model. In this paper, we propose a novel method for locally interpretable modeling -- Reinforcement Learning-based Locally Interpretable Modeling (RL-LIM). RL-LIM employs reinforcement learning to select a small number of samples and distill the black-box model prediction into a low-capacity locally interpretable model. Training is guided with a reward that is obtained directly by measuring agreement of the predictions from the locally interpretable model with the black-box model. RL-LIM near-matches the overall prediction performance of black-box models while yielding human-like interpretability, and significantly outperforms state of the art locally interpretable models in terms of overall prediction performance and fidelity.
reject
The paper aims to find locally interpretable models, such that the local models are fit (w.r.t. the ground truth) and faithful (w.r.t. the global underlying black-box model). The contribution of the paper is that the local model is trained from a subset of points, selected via an optimized importance weight function. The difference compared to Ren et al. (cited) is that the IW function is non-differentiable and optimized using Reinforcement Learning. A first concern (Rev#1, Rev#2) regards the positioning of the paper w.r.t. RL, as the actual optimization method could be any black-box optimization method: one wants to find the IW that maximizes the faithfulness. The rebuttal does a good job of explaining the impact of using a non-differentiable IW function. A second concern (Rev#2) regards the interpretability of the IW underlying the local interpretable model. There is no doubt that the paper was considerably improved during the rebuttal period. However, the improvements raise additional questions (e.g. about selecting the IW depending on the distance to the probes). I encourage the authors to continue on this promising line of research.
train
[ "HkgjLUosjH", "SJlm_UjssB", "H1ekWsCtiS", "B1eu4ZJAtr", "BJxA1JhCtS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Answer 6: We appreciate this suggestion to improve our paper. We have included a new Section, Appendix F, on analysis of the instance-wise weights to build insights on the selected samples. We visualize the distributions of the instance-wise weights of training samples for the entire probe samples, and we show the...
[ -1, -1, -1, 6, 3 ]
[ -1, -1, -1, 3, 1 ]
[ "B1eu4ZJAtr", "B1eu4ZJAtr", "BJxA1JhCtS", "iclr_2020_BJx8Fh4KPB", "iclr_2020_BJx8Fh4KPB" ]
iclr_2020_BJx8YnEFPH
Data Valuation using Reinforcement Learning
Quantifying the value of data is a fundamental problem in machine learning. Data valuation has multiple important use cases: (1) building insights about the learning task, (2) domain adaptation, (3) corrupted sample discovery, and (4) robust learning. To adaptively learn data values jointly with the target task predictor model, we propose a meta-learning framework which we name Data Valuation using Reinforcement Learning (DVRL). We employ a data value estimator (modeled by a deep neural network) to learn how likely each datum is to be used in training the predictor model. We train the data value estimator using a reinforcement signal of the reward obtained on a small validation set that reflects performance on the target task. We demonstrate that DVRL yields superior data value estimates compared to alternative methods across different types of datasets and in a diverse set of application scenarios. The corrupted sample discovery performance of DVRL is close to optimal in many regimes (i.e. as if the noisy samples were known a priori), and for domain adaptation and robust learning DVRL significantly outperforms the state-of-the-art by 14.6% and 10.8%, respectively.
reject
The paper suggests an RL-based approach to design a data valuation estimator. The reviewers agree that the proposed method is new and promising, but they also raised concerns about the empirical evaluations, including not comparing with other approaches to data valuation and a limited ablation study. The authors provided a rebuttal to address these concerns. It improved the evaluation of one of the reviewers, but it is difficult to recommend acceptance given that we did not have a champion for this paper and the overall score is not high enough.
train
[ "B1gTsYU0Fr", "rylmVyFjor", "SyxqV0OjjS", "Byx0rjujoB", "BJe7WXwiKS", "ryl1xi0RtB", "r1gBmxp19r", "Bye8jF5sYr" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper proposes a method for assigning values to each datum. For example, data with incorrect labels, data of low quality, or data from off-the-target distributions should be assigned low values. The main method involves training a neural network to predict the value for each training datum. The reward is bas...
[ 6, -1, -1, -1, 6, 3, -1, -1 ]
[ 1, -1, -1, -1, 3, 4, -1, -1 ]
[ "iclr_2020_BJx8YnEFPH", "BJe7WXwiKS", "B1gTsYU0Fr", "ryl1xi0RtB", "iclr_2020_BJx8YnEFPH", "iclr_2020_BJx8YnEFPH", "Bye8jF5sYr", "iclr_2020_BJx8YnEFPH" ]
iclr_2020_BkxDthVtvS
Equivariant neural networks and equivarification
A key difference from existing works is that our equivarification method can be applied without knowledge of the detailed functions of a layer in a neural network, and hence can be generalized to any feedforward neural network. Although the network size scales up, the constructed equivariant neural network does not increase the complexity of the network compared with the original one, in terms of the number of parameters. As an illustration, we build an equivariant neural network for image classification by equivarifying a convolutional neural network. Results show that our proposed method significantly reduces the design and training complexity, while preserving the learning performance in terms of accuracy.
reject
This paper proposes a way to construct group equivariant neural networks from pre-trained non-equivariant networks. The equivarification is done with respect to known finite groups, and can be done globally or layer-wise. The authors discuss their approach in the context of the image data domain. The paper is theoretically sound and proposes a novel perspective on equivarification, however, the reviewers agree that the experimental section should be strengthened and connections with other approaches (e.g. the work by Cohen and Welling) should be made clearer. The reviewers also had concerns about the computational cost of the equivarification method proposed in this paper. While the authors’ revision addressed some of the reviewers’ concerns, it was not enough to accept the paper this time round. Hence, unfortunately I recommend a rejection.
train
[ "H1lsO14PtB", "r1eMHtR4jB", "Hyl5yipNir", "rylvUchEiS", "Bkgdz79Mor", "SkgCrTWMjr", "rkl4q1YWjr", "ByxdtdAesB", "ryltsMnlsr", "r1xeNwATKH", "B1xIzwVAcB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "public", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this work, the authors employ concepts from group theory to turn an arbitrary feed forward neural network into an equivariant one, i.e. a network whose output transforms in a way that is consistent with the transformation of the input. To this end, the authors first introduce the basic concepts of group theory ...
[ 6, -1, -1, -1, 3, -1, -1, -1, -1, 3, 3 ]
[ 1, -1, -1, -1, 5, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_BkxDthVtvS", "iclr_2020_BkxDthVtvS", "rkl4q1YWjr", "Bkgdz79Mor", "iclr_2020_BkxDthVtvS", "B1xIzwVAcB", "iclr_2020_BkxDthVtvS", "r1xeNwATKH", "H1lsO14PtB", "iclr_2020_BkxDthVtvS", "iclr_2020_BkxDthVtvS" ]
iclr_2020_rkgdYhVtvH
Unifying Graph Convolutional Neural Networks and Label Propagation
Label Propagation (LPA) and Graph Convolutional Neural Networks (GCN) are both message passing algorithms on graphs. Both solve the task of node classification, but LPA propagates node label information across the edges of the graph, while GCN propagates and transforms node feature information. However, while conceptually similar, the theoretical relationship between LPA and GCN has not yet been investigated. Here we study the relationship between LPA and GCN in terms of two aspects: (1) feature/label smoothing, where we analyze how the feature/label of one node is spread over its neighbors; and (2) feature/label influence, i.e., how much the initial feature/label of one node influences the final feature/label of another node. Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification. In our unified model, edge weights are learnable, and the LPA serves as regularization to assist the GCN in learning proper edge weights that lead to improved classification performance. Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models. In a number of experiments on real-world graphs, our model shows superiority over state-of-the-art GCN-based methods in terms of node classification accuracy.
reject
The authors attempt to unify graph convolutional networks and label propagation and propose a model that unifies them. The reviewers liked the idea but felt that more extensive experiments are needed. In particular, the impact of labels needs to be studied in more depth.
train
[ "HJeq4HgrsS", "Bkg5pNerjH", "S1gy3mlrir", "SygJVJ60YH", "ryxzFyxAFr", "rkgeaF11cr" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate reviewer’s helpful and detailed feedback. The updates in our paper are marked in red. \n\n\n1. Adding an experiment showing how much the LPA impacts the results.\n\nWe appreciate reviewer’s suggestion but in fact we have already performed it. In particular, in Figures 2 and 3 we vary the number of LP...
[ -1, -1, -1, 3, 6, 1 ]
[ -1, -1, -1, 1, 1, 3 ]
[ "ryxzFyxAFr", "SygJVJ60YH", "rkgeaF11cr", "iclr_2020_rkgdYhVtvH", "iclr_2020_rkgdYhVtvH", "iclr_2020_rkgdYhVtvH" ]
iclr_2020_H1gdF34FvS
Advantage Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning
In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters.
reject
This paper caused a lot of discussions before and after the rebuttal. The concerns are related to the novelty of this paper, which seems to be relatively limited. Since we do not have a champion among positive reviewers, and the overall score is not high enough, I cannot recommend its acceptance at this stage.
train
[ "Byl2SGsuur", "Bkx4jhqnjH", "SJlNOIGiir", "Hkxkf1xssS", "H1e5oakisr", "H1l6B0MKiB", "BklEPf_dsH", "HJgGMnwujr", "ByeI-RU_sS", "S1xbcZkOsr", "HygmpBpPoB", "H1xClC-Lsr", "ByxfG-fUoS", "B1ely-G8sH", "rJxlseGLsr", "Ske7cbjNsH", "rkgHb-o4jB", "rylJoJQViS", "rkxEYyY7sS", "Bkx4V19Mor"...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", ...
[ "[Note: I wrote this review after John Schulman's first comment, before any reply, and before Gehrard Neumann's comment]\n\nThe authors propose an actor-critic algorithm based mostly on regression. Being off-policy, the algorithm can learn from multiple policies. It can also be applied to continuous as well as to d...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_H1gdF34FvS", "iclr_2020_H1gdF34FvS", "Hkxkf1xssS", "r1gTvqFziS", "BklEPf_dsH", "iclr_2020_H1gdF34FvS", "ByeI-RU_sS", "ByeI-RU_sS", "S1xbcZkOsr", "HygmpBpPoB", "Byl2SGsuur", "iclr_2020_H1gdF34FvS", "HyxTjQo6FH", "rylVAPJCFH", "Byl2SGsuur", "rkxEYyY7sS", "SJgoLpzbsS", "rkx...
iclr_2020_BJeuKnEtDH
Cascade Style Transfer
Recent studies have made tremendous progress in style transfer for specific domains, e.g., artistic, semantic and photo-realistic. However, existing approaches have limited flexibility in extending to other domains, as different style representations are often specific to particular domains. This also limits the stylistic quality. To address these limitations, we propose Cascade Style Transfer, a simple yet effective framework that can improve the quality and flexibility of style transfer by combining multiple existing approaches directly. Our cascade framework contains two architectures, i.e., Serial Style Transfer (SST) and Parallel Style Transfer (PST). The SST takes the stylized output of one method as the input content of the others. This could help improve the stylistic quality. The PST uses a shared backbone and a loss module to optimize the loss functions of different methods in parallel. This could help improve the quality and flexibility, and guide us to find domain-independent approaches. Our experiments are conducted on three major style transfer domains: artistic, semantic and photo-realistic. In all these domains, our methods have shown superiority over the state-of-the-art methods.
reject
This work combines style transfer approaches either in a serial or parallel fashion, and shows that the combination of methods is more powerful than isolated methods. The novelty in this work is extremely limited and not offset by insightful analysis or very thorough experiments, given that most results are qualitative. Authors have not provided a public response. Therefore, we recommend rejection.
train
[ "SkxhgEzMtB", "HyeCd0d_cS", "HkgZ0-Ah9H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nIn this study, the authors propose a new method for performing artistic style transfer for arbitrary image and styles. The new method employs a cascade/serial architecture for performing the style transfer. The authors test their method using human preference studies.\n\nIn summary, I found the architec...
[ 1, 1, 1 ]
[ 4, 3, 5 ]
[ "iclr_2020_BJeuKnEtDH", "iclr_2020_BJeuKnEtDH", "iclr_2020_BJeuKnEtDH" ]
iclr_2020_HyxFF34FPr
FoveaBox: Beyound Anchor-based Object Detection
We present FoveaBox, an accurate, flexible, and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize predefined anchors to enumerate possible locations, scales and aspect ratios for the search of the objects, their performance and generalization ability are also limited by the design of anchors. Instead, FoveaBox directly learns the object existence possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existence possibility, and (b) producing a category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations. We demonstrate its effectiveness on standard benchmarks and report extensive experimental analysis. Without bells and whistles, FoveaBox achieves state-of-the-art single-model performance on the standard COCO detection benchmark. More importantly, FoveaBox avoids all computation and hyper-parameters related to anchor boxes, which are often sensitive to the final detection performance. We believe this simple and effective approach will serve as a solid baseline and help ease future research for object detection.
reject
The paper proposes a method for object detection by predicting category-specific object probability and category-agnostic bounding box coordinates for each position that's likely to contain an object. The proposed idea is interesting and the experimental results show improvement over RetinaNet and other baselines. However, in terms of weaknesses, (1) conceptually speaking it's unclear whether the proposed method is a big departure from the existing frameworks; and (2) although the authors are claiming SOTA performance, the proposed method seems to be worse than other existing/recent work. Some example references are listed below (more available here: https://paperswithcode.com/paper/foveabox-beyond-anchor-based-object-detector). [1] Scale-Aware Trident Networks for Object Detection https://arxiv.org/abs/1901.01892 [2] GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond https://arxiv.org/abs/1904.11492 [3] CBNet: A Novel Composite Backbone Network Architecture for Object Detection https://arxiv.org/abs/1909.03625 [4] EfficientDet: Scalable and Efficient Object Detection https://arxiv.org/abs/1911.09070 References [3] and [4] are concurrent works so shouldn't be grounds for rejection per se, but the performance gap is quite large. Compared to [1] and [2], which have been on arxiv for a while (+5 months), the performance of the proposed method is still inferior. Even considering that object detection is a very competitive field, the conceptual/technical novelty and overall practical significance seem limited for ICLR. For a future submission, I would suggest that a revision of this paper be reviewed at a computer vision conference, rather than an ML conference.
train
[ "H1e8TZVM9S", "HyxX4-1OiB", "rJx937q8iH", "SyxULIiziH", "SklvOrjfjS", "rygtjxmbiH", "rJlOCkwbqH", "rJlAJhYL9r", "ryx_zZm2_B", "SJgLDgzndr" ]
[ "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper introduces an anchor-free object detection framework that aims at simultaneously predicting the object position and the corresponding boundary. To achieve this, the proposed FoveaBox detector predicts category-sensitive semantic maps for the object existing possibility, and produces category-agnostic b...
[ 6, -1, -1, -1, -1, -1, 6, 3, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, 5, 4, -1, -1 ]
[ "iclr_2020_HyxFF34FPr", "H1e8TZVM9S", "rJlOCkwbqH", "rygtjxmbiH", "rJlAJhYL9r", "H1e8TZVM9S", "iclr_2020_HyxFF34FPr", "iclr_2020_HyxFF34FPr", "SJgLDgzndr", "iclr_2020_HyxFF34FPr" ]
iclr_2020_HyetFnEFDS
Diving into Optimization of Topology in Neural Networks
Seeking effective networks has become one of the most crucial and practical areas in deep learning. The architecture of a neural network can be represented as a directed acyclic graph, whose nodes denote transformations of layers and whose edges represent information flow. Beyond the selection of \textit{micro} node operations, the \textit{macro} connections across the whole network, denoted as \textit{topology}, largely affect the optimization process. We first rethink residual connections from a new \textit{topological view} and observe the benefits provided by dense connections to the optimization. Motivated by this, we propose an innovative method to optimize the topology of a neural network. The optimization space is defined as a complete graph; by assigning learnable weights which reflect the importance of connections, the optimization of topology is transformed into learning a set of continuous variables on the edges. To extend the optimization to larger search spaces, a new series of networks, named TopoNets, is designed. To further focus on critical edges and promote generalization ability in dense topologies, an auxiliary sparsity constraint is adopted to constrain the distribution of edges. Experiments on classical networks prove the effectiveness of the optimization of topology. Experiments with TopoNets further verify both the availability and transferability of the proposed method in different tasks, e.g., image classification, object detection and face recognition.
reject
This paper proposes an approach for architecture search by framing it as a differentiable optimization over directed acyclic graphs. While the reviewers appreciated the significance of architecture search as a problem and acknowledged that the paper proposes a principled approach for this problem, there were concerns about lack of experimental rigor, and limited technical novelty over some existing works.
train
[ "HklYNPCY5r", "SJlSSBqYcr", "rJlriEAijB", "SkgOXEQ3sH", "BJeVUM3osH", "S1xTGik3iB", "Bklh50AqjB", "HylM-a0cor", "H1guRKAssS", "Byxx0fHTtr", "S1gIRLVo5r" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "------------------------- Update after rebuttal ---------------------------\n\n\nThank you for addressing my concerns. I feel the rebuttal did improve the paper, e.g., the significance of results can be evaluated better now. I still like the overall idea of the paper as optimizing connectivity patterns in ...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 5, 1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_HyetFnEFDS", "iclr_2020_HyetFnEFDS", "HklYNPCY5r", "H1guRKAssS", "S1gIRLVo5r", "iclr_2020_HyetFnEFDS", "SJlSSBqYcr", "Byxx0fHTtr", "BJeVUM3osH", "iclr_2020_HyetFnEFDS", "iclr_2020_HyetFnEFDS" ]
iclr_2020_HklsthVYDH
Learning to Defense by Learning to Attack
Adversarial training provides a principled approach for training robust neural networks. From an optimization perspective, adversarial training is essentially solving a minimax robust optimization problem. The outer minimization tries to learn a robust classifier, while the inner maximization tries to generate adversarial samples. Unfortunately, such a minimax problem is very difficult to solve due to the lack of convex-concave structure. This work proposes a new adversarial training method based on a generic learning-to-learn (L2L) framework. Specifically, instead of applying existing hand-designed algorithms to the inner problem, we learn an optimizer, which is parametrized as a convolutional neural network. At the same time, a robust classifier is learned to defend against the adversarial attacks generated by the learned optimizer. Our experiments over the CIFAR-10 and CIFAR-100 datasets demonstrate that L2L outperforms existing adversarial training methods in both classification accuracy and computational efficiency. Moreover, our L2L framework can be extended to generative adversarial imitation learning and stabilizes the training.
reject
This paper considers solving the minimax formulation of adversarial training, for which it proposes a new method based on a generic learning-to-learn (L2L) framework. In particular, instead of applying existing hand-designed algorithms to the inner problem, it learns an optimizer parametrized as a convolutional neural network. A robust classifier is learned to defend against the adversarial attacks generated by the learned optimizer. The idea of using L2L is sensible. However, the main concerns about the empirical studies remain after the rebuttal.
train
[ "Byx3KHJW5S", "HJgSWVljsr", "rJgnKNxjjB", "S1gcDEeisr", "H1gu0RJjor", "SyxZP2yssB", "Skgx84u9YS", "SyxVKI86tH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a framework where one component is an attacker network that keeps learning about how to perturb the loss more, and one component is a defense network that robustify learning with respect to the attacker network. The framework is flexible on how the attacker network can be trained, and advances ...
[ 6, -1, -1, -1, -1, -1, 3, 6 ]
[ 1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_HklsthVYDH", "Skgx84u9YS", "Skgx84u9YS", "Skgx84u9YS", "SyxVKI86tH", "Byx3KHJW5S", "iclr_2020_HklsthVYDH", "iclr_2020_HklsthVYDH" ]
iclr_2020_Byl3K2VtwB
Unsupervised Learning of Node Embeddings by Detecting Communities
We present Deep MinCut (DMC), an unsupervised approach to learn node embeddings for graph-structured data. It derives node representations based on their membership in communities. As such, the embeddings directly provide interesting insights into the graph structure, so that the separate node clustering step of existing methods is no longer needed. DMC learns both node embeddings and communities simultaneously by minimizing the mincut loss, which captures the number of connections between communities. Striving for high scalability, we also propose a training process for DMC based on minibatches. We provide empirical evidence that the communities learned by DMC are meaningful and that the node embeddings are competitive in different node classification benchmarks.
reject
The authors present an approach to learn node embeddings by minimising the mincut loss which ensures that the network simultaneously learns node representations and communities. To ensure scalability, the authors also propose an iterative process using mini-batches. I think this is a good paper with interesting results. However, I would suggest that the authors try to make it more accessible to a larger audience (2 reviewers have indicated that they had difficulty in following the paper). For example, while Theorem 1 and Theorem 2 are interesting they could have been completely pushed to the Appendix and it would have sufficed to say that your work/results are grounded in well-proven theorems as mentioned in 1 and 2. I agree that the authors have done a good job of responding to reviewers' queries and addressed the main concerns. However, since the reviewers have unanimously given a low rating to this paper, I do not feel confident about overriding their rating and accepting this paper. Hence, at this point I will have to recommend that this paper cannot be accepted. This paper has good potential and the authors should submit it to another suitable venue soon.
train
[ "Hygt-_9ujr", "SJxwC15diB", "Hygdc-5_oS", "r1l2xmcdsB", "SkelkcQ7iH", "Bkg1cfOhYr", "SylKtUbCFr", "BklTIir0FB" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewer for these constructive comments.\n1. The assumption that each class label is associated with a community is not entirely correct. First, there is no 1-to-1 mapping between a class label and a community. The 1-to-1 mapping relies on an assumption that the number of communities an...
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 1 ]
[ "SylKtUbCFr", "BklTIir0FB", "Bkg1cfOhYr", "SkelkcQ7iH", "iclr_2020_Byl3K2VtwB", "iclr_2020_Byl3K2VtwB", "iclr_2020_Byl3K2VtwB", "iclr_2020_Byl3K2VtwB" ]
iclr_2020_ryxAY34YwB
Make Lead Bias in Your Favor: A Simple and Effective Method for News Summarization
Lead bias is a common phenomenon in news summarization, where early parts of an article often contain the most salient information. While many algorithms exploit this fact in summary generation, it has a detrimental effect on teaching the model to discriminate and extract important information. We propose that lead bias can be leveraged in a simple and effective way in our favor to pretrain abstractive news summarization models on a large-scale unlabelled corpus: predicting the leading sentences using the rest of an article. Via careful data cleaning and filtering, our transformer-based pretrained model achieves remarkable results on various news summarization tasks without any finetuning. With further finetuning, our model outperforms many competitive baseline models. For example, the pretrained model without finetuning outperforms the pointer-generator network on the CNN/DailyMail dataset. The finetuned model obtains 3.2% higher ROUGE-1, 1.6% higher ROUGE-2 and 2.1% higher ROUGE-L scores than the best baseline model on the XSum dataset.
reject
This paper proposes a method to leverage the lead (i.e., the first sentences of an article) in training a model for abstractive news summarization. Reviewers' initial recommendations ranged from weak reject to weak accept, pointing out limitations of the paper including 1) little novelty in modeling, 2) weak evaluation, and 3) lack of deep analysis. After the author rebuttal and revised paper, one of the reviewers increased their score and was leaning toward weak accept. However, reviewers noted that there was significant overlap with another submission, and we discussed that it would be best to accept one of the two, incorporating the contributions of both papers. Hence, I recommend that this paper not be accepted, and perhaps some of the non-overlapping contents of this paper can be included in the other, accepted paper. Thank you for submitting this paper. I enjoyed reading it.
train
[ "HJgO4kMOFB", "Hye5SLEjiB", "H1gjXOYqoH", "rkeJYDRKsH", "BketsD0YiB", "SyeFcDAYiB", "HyeUIvRFsr", "HkgUIDhoFH", "HJxhRlhaKS" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposed an interesting idea on how we can leverage the lead bias in summarization datasets to pretrain abstractive news summarization models on large-scale unlabelled corpus in simple and effective way. \n\nFor pre-training, they collected three years of online news articles data. Then, they take the t...
[ 6, -1, -1, -1, -1, -1, -1, 1, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_ryxAY34YwB", "H1gjXOYqoH", "rkeJYDRKsH", "HJxhRlhaKS", "HJgO4kMOFB", "HkgUIDhoFH", "iclr_2020_ryxAY34YwB", "iclr_2020_ryxAY34YwB", "iclr_2020_ryxAY34YwB" ]
iclr_2020_B1l1qnEFwH
Deep Audio Prior
Deep convolutional neural networks are known to specialize in distilling compact and robust priors from a large amount of data. We are interested in applying deep networks in the absence of a training dataset. In this paper, we introduce the deep audio prior (DAP), which leverages the structure of a network and the temporal information in a single audio file. Specifically, we demonstrate that a randomly-initialized neural network can be used with a carefully designed audio prior to tackle challenging audio problems such as universal blind source separation, interactive audio editing, audio texture synthesis, and audio co-separation. To understand the robustness of the deep audio prior, we construct a benchmark dataset, Universal-150, for universal sound source separation with a diverse set of sources. We show audio results superior to previous work in both qualitative and quantitative evaluations. We also perform a thorough ablation study to validate our design choices.
reject
This paper proposes to use a CNN's prior to deal with tasks in audio processing. The motivation is weak and the presentation is not clear. The technical contribution is trivial.
train
[ "SkgmBSkUoS", "r1xr_4kUiB", "SJxJI7yUsS", "B1lqtkk8iS", "BklxBa0HsH", "B1lo-8cotH", "rkl7DDOkcS", "Syeteh-f5B" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for the constructive feedback. We address the concerns from the reviewer as follows: \n\nA1: Thanks for the suggestion! We have updated our paper with an appendix to show the used U-net structure (please see Section A.1 and Figure 10) and visualize the results from different tr...
[ -1, -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "rkl7DDOkcS", "SJxJI7yUsS", "B1lo-8cotH", "Syeteh-f5B", "iclr_2020_B1l1qnEFwH", "iclr_2020_B1l1qnEFwH", "iclr_2020_B1l1qnEFwH", "iclr_2020_B1l1qnEFwH" ]
iclr_2020_SkeWc2EKPH
Model-free Learning Control of Nonlinear Stochastic Systems with Stability Guarantee
Reinforcement learning (RL) offers a principled way to achieve the optimal cumulative performance index in discrete-time nonlinear stochastic systems, which are modeled as Markov decision processes. Its integration with deep learning techniques has promoted the field of deep RL, with impressive performance in complicated continuous control tasks. However, from a control-theoretic perspective, the first and most important property of a system to be guaranteed is stability. Unfortunately, stability is rarely assured in RL and remains an open question. In this paper, we propose a stability-guaranteed RL framework which simultaneously learns a Lyapunov function along with the controller or policy, both of which are parameterized by deep neural networks, borrowing the concept of Lyapunov functions from control theory. Our framework not only offers comparable or superior control performance over state-of-the-art RL algorithms, but also constructs a Lyapunov function to validate closed-loop stability. In the simulated experiments, our approach is evaluated on several well-known examples including classic CartPole balancing, 3-dimensional robot control, and control of synthetic-biology gene regulatory networks. Compared with RL algorithms without a stability guarantee, our approach enables the system to recover to the operating point when perturbed, to a certain extent, by uncertainties such as unseen disturbances and system parameter variations.
reject
The authors propose a method to guarantee the stability of a learnt continuous controller by optimizing the objective through a Lyapunov critic. The method is demonstrated on low dimensional continuous control problems such as cart pole. The reviewers were mixed in their opinion of the paper, especially after the authors' rebuttal. The concerns center around some of the authors' claims regarding theoretical results, in particular that stability guarantees can be asserted for a model-free controller. This claim seems to be incorrect especially on novel data where stability cannot be guaranteed, thus indicating that 'robust controller' might be a better description. There are also concerns about the novelty and the contributions of the paper. Overall, the method is promising but the claims need to be carefully written. The recommendation is to reject the paper at this time.
train
[ "B1eyRil5tr", "SyeJTy7ccr", "rJeTOyqqjr", "BkgQhkc9sr", "ryesLVccsB", "rkg7Fm55or", "Bkg9ZN95jH", "BygTiQ55oS", "Bkevlg59jS", "B1e1O5yJ9H" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\n\n######## Rebuttal Response:\n\nThanks for the thorough response.\n\nQ2: The title still hasn’t changed on the current draft\nQ4: To be more precise: \n‘a novel data-based approach for analyzing the stability of the closed-loop system is proposed by constructing a Lyapunov function parameterized by deep neural ...
[ 1, 6, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_SkeWc2EKPH", "iclr_2020_SkeWc2EKPH", "iclr_2020_SkeWc2EKPH", "SyeJTy7ccr", "Bkg9ZN95jH", "B1eyRil5tr", "BygTiQ55oS", "rkg7Fm55or", "B1e1O5yJ9H", "iclr_2020_SkeWc2EKPH" ]
iclr_2020_Skgb5h4KPH
Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
We study the training process of Deep Neural Networks (DNNs) from the Fourier analysis perspective. We demonstrate a very universal Frequency Principle (F-Principle) --- DNNs often fit target functions from low to high frequencies --- on high-dimensional benchmark datasets, such as MNIST/CIFAR10, and deep networks, such as VGG16. This F-Principle of DNNs is opposite to the learning behavior of most conventional iterative numerical schemes (e.g., the Jacobi method), which exhibit faster convergence for higher frequencies on various scientific computing problems. With a naive theory, we illustrate that this F-Principle results from the regularity of the commonly used activation functions. The F-Principle implies an implicit bias that DNNs tend to fit training data with a low-frequency function. This understanding provides an explanation of the good generalization of DNNs on most real datasets and the bad generalization of DNNs on the parity function or randomized datasets.
reject
Borderline decision. The idea is nice, but the theory is not completely convincing. As a result, the contributions of this paper are not significant enough.
train
[ "r1g3UpbZoH", "Hyg-liWWiB", "SkedZT-ZsS", "BylnTTZWsB", "BJg6WcZbsr", "r1eohdR3Kr", "rkeMvyPTFH", "BklpyIz0YB", "BkglOAJn5B" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "\nWe thank you for very helpful and detailed comments. We respond to your concerns as follows.\n\n1 We had also looked at other directions, e.g., second or higher principle component, and found that F-Principle always holds. We will add more details about these results in our revision.\n\n2 In all our experiments,...
[ -1, -1, -1, -1, -1, 6, 3, 3, -1 ]
[ -1, -1, -1, -1, -1, 3, 1, 4, -1 ]
[ "r1eohdR3Kr", "BklpyIz0YB", "rkeMvyPTFH", "BkglOAJn5B", "iclr_2020_Skgb5h4KPH", "iclr_2020_Skgb5h4KPH", "iclr_2020_Skgb5h4KPH", "iclr_2020_Skgb5h4KPH", "iclr_2020_Skgb5h4KPH" ]
iclr_2020_HJxf53EtDr
Unifying Graph Convolutional Networks as Matrix Factorization
In recent years, substantial progress has been made on graph convolutional networks (GCN). In this paper, for the first time, we theoretically analyze the connections between GCN and matrix factorization (MF), and unify GCN as matrix factorization with co-training and unitization. Moreover, guided by this theoretical analysis, we propose an alternative model to GCN named Co-training and Unitized Matrix Factorization (CUMF). The correctness of our analysis is verified by thorough experiments. The experimental results show that CUMF achieves similar or superior performance compared to GCN. In addition, CUMF inherits the benefits of MF-based methods, naturally supporting mini-batch construction, and is more amenable to distributed computing compared with GCN. Distributed CUMF on semi-supervised node classification significantly outperforms distributed GCN methods. Thus, CUMF greatly benefits large-scale and complex real-world applications.
reject
The paper makes an interesting attempt at connecting graph convolutional networks (GCN) with matrix factorization (MF) and then develops an MF solution that achieves prediction performance similar to GCN. While the work is a good attempt, it suffers from two major issues: (1) the connections between GCN and other related models have been examined recently, and the paper does not provide additional insights; (2) some parts of the derivations could be problematic. The paper could become a good publication in the future if the motivation of the work is repositioned.
val
[ "BylXmt2nYH", "B1e9X3qDjr", "rJl7sDDviH", "B1luJxPUjr", "rkgCKBUUjB", "rkxH0TrLsB", "Skx8x7nTuS", "HJxxpshhYB", "rJeyquvktS", "SJlmqC8JYr", "BJgt6Q0CdH", "HkefsuGA_H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public" ]
[ "\nThe work poses an interesting question: Are GCNs (and GNNs) just special types of matrix factorization methods? Unfortunately, the short answer is **no**, which goes against what the authors say. \n\nUntil recently I thought like the authors, but the concurrent work [1] (On the Equivalence between Node Embedding...
[ 1, -1, -1, -1, -1, -1, 6, 1, -1, -1, -1, -1 ]
[ 5, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2020_HJxf53EtDr", "rkxH0TrLsB", "B1luJxPUjr", "HJxxpshhYB", "BylXmt2nYH", "Skx8x7nTuS", "iclr_2020_HJxf53EtDr", "iclr_2020_HJxf53EtDr", "SJlmqC8JYr", "BJgt6Q0CdH", "HkefsuGA_H", "iclr_2020_HJxf53EtDr" ]
iclr_2020_Byg79h4tvB
PROTOTYPE-ASSISTED ADVERSARIAL LEARNING FOR UNSUPERVISED DOMAIN ADAPTATION
This paper presents a generic framework to tackle the crucial class mismatch problem in unsupervised domain adaptation (UDA) for multi-class distributions. Previous adversarial learning methods condition domain alignment only on pseudo labels, but noisy and inaccurate pseudo labels may perturb the multi-class distribution embedded in probabilistic predictions, hence providing insufficient alleviation of the latent mismatch problem. Compared with pseudo labels, class prototypes are more accurate and reliable since they summarize over all the instances and are able to represent the inherent semantic distribution shared across domains. Therefore, we propose a novel Prototype-Assisted Adversarial Learning (PAAL) scheme, which incorporates instance probabilistic predictions and class prototypes together to provide reliable indicators for adversarial domain alignment. With the PAAL scheme, we align both the instance feature representations and the class prototype representations to alleviate the mismatch among semantically different classes. Also, we exploit the class prototypes as a proxy to minimize the within-class variance in the target domain, mitigating the mismatch among semantically similar classes. With these novelties, we construct a Prototype-Assisted Conditional Domain Adaptation (PACDA) framework which effectively tackles the class mismatch problem. We demonstrate the good performance and generalization ability of the PAAL scheme and the PACDA framework on two UDA tasks, i.e., object recognition (Office-Home, ImageCLEF-DA, and Office) and synthetic-to-real semantic segmentation (GTA5→Cityscapes and Synthia→Cityscapes).
reject
The paper focuses on adversarial domain adaptation, and proposes an approach inspired by DANN. The contribution lies in additional terms in the loss, aimed to i) align the source and target prototypes in each class (using pseudo labels for target examples); ii) minimize the variance of the latent representations for each class in the target domain. Reviews point out that the expected benefits of target prototypes might be ruined if the pseudo labels are too noisy; they note that the specific problem needs to be more clearly formalized and they regret the lack of clarity of the text. The sensitivity w.r.t. the hyper-parameter values needs to be assessed more thoroughly. One also notes that SAFN is one of the baseline methods, but its best variant (with entropic regularization) is not considered, while its performance is on par with or greater than that of PACDA for ImageCLEF-DA; idem for AdaptSeg (consider its multi-level variant) and AdvEnt with MinEnt. For these reasons, the paper seems premature for publication at ICLR 2020.
train
[ "BJg0OCVRtS", "Byge5trYiH", "HJlVdMdusH", "Hkg2Mfddor", "Hkga-lu_sB", "rygbpRPdsr", "Skl3YsvOdH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\n- key problem: address \"class mismatch\" in adversarial learning methods for unsupervised domain adaptation (UDA);\n- contributions: 1) extension of the domain adversarial learning objective to leverage class prototypes (exponential moving average of features weighted by predicted class probabilities) i...
[ 3, -1, -1, -1, -1, -1, 3 ]
[ 5, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_Byg79h4tvB", "iclr_2020_Byg79h4tvB", "BJg0OCVRtS", "BJg0OCVRtS", "Skl3YsvOdH", "Skl3YsvOdH", "iclr_2020_Byg79h4tvB" ]
iclr_2020_B1em9h4KDS
Generative Imputation and Stochastic Prediction
In many machine learning applications, we are faced with incomplete datasets. In the literature, missing data imputation techniques have been mostly concerned with filling in missing values. However, the existence of missing values is synonymous with uncertainties not only over the distribution of the missing values but also over the target class assignments, which require careful consideration. In this paper, we propose a simple and effective method for imputing missing features and estimating the distribution of target assignments given incomplete data. To make imputations, we train a simple and effective generator network to generate imputations that a discriminator network is tasked to distinguish. Following this, a predictor network is trained using the imputed samples from the generator network to capture the classification uncertainties and make predictions accordingly. The proposed method is evaluated on the CIFAR-10 image dataset as well as three real-world tabular classification datasets, under different missingness rates and structures. Our experimental results show the effectiveness of the proposed method in generating imputations as well as in providing estimates of the class uncertainties in a classification task when faced with missing values.
reject
The paper proposes a method that does uncertainty modeling over missing data imputation using a framework based on generative adversarial network. While the method shows some empirical improvements over the baselines, reviewers have found the work incremental in terms of technical novelty over the existing GAIN approach which renders it slightly below the acceptance threshold for the main conference, particularly in case of space constraints in the program.
train
[ "B1xLXZ319r", "r1lqDR52sr", "ryxv_o92sB", "Syx-Qc0tsH", "SJeNUFCYjB", "SylKeoRtjS", "B1xGs5AtsB", "rkxjr9AKiH", "BylnOKAtjH", "H1eec_RtoH", "HyeAuZ3DtS", "Hyxvxaag5H" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This is a nice piece of incremental work on top of previously published GAN imputation methods. It seems to work well in the limited evaluation and is at least claimed to be easier to use for practitioners. This paper could benefit tremendously from both better evaluation and discussion. The paper would be much cl...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_B1em9h4KDS", "ryxv_o92sB", "B1xGs5AtsB", "B1xLXZ319r", "Hyxvxaag5H", "HyeAuZ3DtS", "HyeAuZ3DtS", "B1xLXZ319r", "Hyxvxaag5H", "iclr_2020_B1em9h4KDS", "iclr_2020_B1em9h4KDS", "iclr_2020_B1em9h4KDS" ]
iclr_2020_S1gNc3NtvB
Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer
A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems. Therefore, we present a novel architecture, based on memory-augmented networks, that is inspired by the von Neumann and Harvard architectures of modern computers. This architecture enables the learning of abstract algorithmic solutions via Evolution Strategies in a reinforcement learning setting. Applied to Sokoban, sliding block puzzle and robotic manipulation tasks, we show that the architecture can learn algorithmic solutions with strong generalization and abstraction: scaling to arbitrary task configurations and complexities, and being independent of both the data representation and the task domain.
reject
The authors present a method that optimizes a differentiable neural computer with evolutionary search, and which can transfer abstract strategies to novel problems. The reviewers all agreed that the approach is interesting, though they were concerned about the magnitude of the contribution / novelty compared to existing work, clarity of contributions, impact of pretraining, and simplicity of the examples. While the reviewers felt that the authors resolved many of their concerns in the rebuttal, there was remaining concern about the significance of the contribution. Thus, I recommend this paper for rejection at this time.
val
[ "rygiuvKJ9B", "SyllmIQAtB", "SkxklVWdiH", "BJeb1Oawjr", "ryeQhz3LiB", "BJgMFG2UjB", "rJlKJG2UsH", "BylAVtWwFr" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper presents an approach called a neural computer, which has a Differential Neural Computer (DNC) at its core that is optimised with an evolutionary strategy. In addition to the typical DNC architecture, the system proposed in this paper has different modules that transfer different domain representations i...
[ 6, 6, -1, -1, -1, -1, -1, 3 ]
[ 4, 3, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2020_S1gNc3NtvB", "iclr_2020_S1gNc3NtvB", "BJeb1Oawjr", "ryeQhz3LiB", "rygiuvKJ9B", "SyllmIQAtB", "BylAVtWwFr", "iclr_2020_S1gNc3NtvB" ]
iclr_2020_H1eH9hNtwr
Stagnant zone segmentation with U-net
Monitoring the silo discharging process for industrial or research applications depends on computerized segmentation of different parts of images, such as stagnant and flowing zones, which is the toughest task. X-ray Computed Tomography (CT) is a powerful non-destructive technique for obtaining cross-sectional images of a 3D object based on X-ray absorption. CT is the most proficient technique for investigating different granular flow phenomena and for segmenting the stagnant zone, compared to other imaging techniques. However, manual segmentation is tedious and error-prone for further investigations. Hence, automatic and precise strategies are required. In the present work, a U-net architecture is used for segmenting the stagnant zone during the silo discharging process. The proposed image segmentation method provides fast and effective outcomes by exploiting a convolutional neural network, with an accuracy of 97 percent.
reject
The paper proposes U-net for segmentation of stagnant zones in computed tomography. The technical contribution of the paper is severely limited and is not of the quality expected of publications in this venue. The paper is not anonymized and violates the double-blind review rule. I am thus recommending rejection.
train
[ "BygUatwYOB", "Hyxln7Sotr", "SklcjZbP5r", "rkx12MBoYr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- This paper simply proposes to use UNet for the segmentation of stagnant zones in X-ray CTs. While the applicability of this model may represent an advance in the particular field of the authors, the technical contribution of this paper is far from the level expected in this conference. \n\n- As the paper reads, ...
[ 1, 1, 1, -1 ]
[ 5, 4, 3, -1 ]
[ "iclr_2020_H1eH9hNtwr", "iclr_2020_H1eH9hNtwr", "iclr_2020_H1eH9hNtwr", "iclr_2020_H1eH9hNtwr" ]
iclr_2020_HygSq3VFvH
Self-Supervised State-Control through Intrinsic Mutual Information Rewards
Learning to discover useful skills without a manually-designed reward function would have many applications, yet it remains a challenge for reinforcement learning. In this paper, we propose Mutual Information-based State-Control (MISC), a new self-supervised reinforcement learning approach for learning to control states of interest without any external reward function. We formulate the intrinsic objective as rewarding the skills that maximize the mutual information between the context states and the states of interest. For example, in robotic manipulation tasks, the context states are the robot states and the states of interest are the states of an object. We evaluate our approach on different simulated robotic manipulation tasks from OpenAI Gym. We show that our method is able to learn to manipulate the object, such as pushing and picking up, purely based on the intrinsic mutual information rewards. Furthermore, the pre-trained policy and mutual information discriminator can be used to accelerate learning to achieve high task rewards. Our results show that the mutual information between the context states and the states of interest can be an effective ingredient for overcoming challenges in robotic manipulation tasks with sparse rewards. A video showing experimental results is available at https://youtu.be/cLRrkd3Y7vU
reject
The paper considers a setting where the state of a (robotics) environment can be divided roughly into "context states" (such as variables under the robot's direct control) and "states of interest" (such as the state variables of an object to be manipulated), and learn skills by maximizing a lower bound on the mutual information between these two components of the state. Experimental results compare to DDPG/SAC, and show that the learned discriminator is somewhat transferable between environments. Reviewers found the assumptions necessary on the degree of domain knowledge to be quite strong and domain-specific, and that even after revision, the authors were understating the degree to which this was necessary. The paper did improve based on reviewer feedback, and while R3 was more convinced by the follow-up experiments (though remarked that requiring environment variations to obtain new skills was a "significant step backward from things like [Diversity is All You Need]"), the other reviewers remained unconvinced regarding domain knowledge and in particular how it interacts with the scalability of the proposed method to complex environments/robots. Given the reviewers' concerns regarding applicability and scalability, I recommend rejection in its present form. A future revision may be able to more convincingly demonstrate that limitations based on domain knowledge are less significant than they appear.
train
[ "SkxED7_nKS", "Hyg3B8z5oB", "rkgrTHMcjB", "SJgUsHzqsr", "HJeVtBG5sB", "SJxy9Ndy5r", "Hkgcm01WqB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "I take issue with the usage of the phrase \"skill discovery\". In prior work (e.g. VIC, DIAYN), this meant learning a skill-conditional policy. Here, there is only a single (unconditioned) policy, and the different \"skills\" come from modifications of the environment -- the number of skills is tied to the number ...
[ 6, -1, -1, -1, -1, 3, 3 ]
[ 5, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2020_HygSq3VFvH", "iclr_2020_HygSq3VFvH", "SkxED7_nKS", "SJxy9Ndy5r", "Hkgcm01WqB", "iclr_2020_HygSq3VFvH", "iclr_2020_HygSq3VFvH" ]
iclr_2020_HylLq2EKwS
Collaborative Filtering With A Synthetic Feedback Loop
We propose a novel learning framework for recommendation systems, assisting collaborative filtering with a synthetic feedback loop. The proposed framework consists of a ``recommender'' and a ``virtual user.'' The recommender is formulated as a collaborative-filtering method, recommending items according to observed user behavior. The virtual user estimates rewards from the recommended items and generates the influence of the rewards on observed user behavior. The recommender connected with the virtual user constructs a closed loop, which recommends items to users and imitates the users' unobserved feedback to the recommended items. The synthetic feedback is used to augment observed user behavior and improve recommendation results. Such a model can be interpreted as inverse reinforcement learning, which can be learned effectively via rollout (simulation). Experimental results show that the proposed framework is able to boost the performance of existing collaborative filtering methods on multiple datasets.
reject
The paper proposes to learn a "virtual user" while learning a "recommender" model, to improve the performance of the recommender system. A reinforcement learning algorithm is used to address the problem the authors define. Multiple reviewers raised several concerns regarding its technical details, including the feedback signal F, but the authors have not responded to any of the concerns raised by the reviewers. The lack of author involvement in the discussion suggests that this paper is not at the stage to be published.
train
[ "SkeAqcyO_H", "Hkg4-Ixptr", "rJx5gsqRKB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is essentially an attempt to incorporate a form of reinforcement learning into recommender systems, via the use of a synthetic feedback loop and a \"virtual user\".\n\nOverall this seems like a nice attempt to combine Inverse Reinforcement Learning frameworks with collaborative filtering algorithms.\n\nW...
[ 6, 3, 3 ]
[ 4, 3, 1 ]
[ "iclr_2020_HylLq2EKwS", "iclr_2020_HylLq2EKwS", "iclr_2020_HylLq2EKwS" ]
iclr_2020_B1xw9n4Kwr
Model Architecture Controls Gradient Descent Dynamics: A Combinatorial Path-Based Formula
Recently, there has been a growing interest in automatically exploring the neural network architecture design space with the goal of finding an architecture that improves performance (characterized as improved accuracy, speed of training, or resource requirements). However, our theoretical understanding of how model architecture affects performance or accuracy is limited. In this paper, we study the impact of model architecture on the speed of training in the context of gradient descent optimization. We model gradient descent as a first-order ODE and use the ODE's coefficient matrix H to characterize the convergence rate. We introduce a simple analysis technique that enumerates H in terms of all possible ``paths'' in the network. We show that changes in model architecture parameters are reflected as changes in the number of paths and the properties of each path, which jointly control the speed of convergence. We believe our analysis technique is useful in reasoning about more complex model architecture modifications.
reject
This paper focuses on understanding the role of model architecture on convergence behavior and in particular on the speed of training. The authors study the gradient flow of training via an ODE's coefficient matrix H. They study the effect of H in terms of possible paths in the network. The reviewers all agreed that characterizing the behavior in terms of paths is nice. However, they had concerns about novelty with respect to existing work on NTK. Other comments by reviewers include (1) poor literature review, (2) subpar exposition, and (3) hand-wavy arguments and lack of rigor in some results. While some of these concerns were alleviated during the discussion, reviewers were not fully satisfied. I generally agree with the overall assessment of the reviewers. The paper has some interesting ideas but suffers from a lack of clarity and rigor. Therefore, I cannot recommend acceptance in its current form.
train
[ "HJlaIEjQqr", "H1gfE7EjjB", "rylPzENssS", "H1gKoGEosH", "HygrYMVsiB", "SJgzi6QjjS", "SkxOcKDTYS", "BylpetaaFr" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of understanding the impact of deep neural networks (DNN) model architecture on the convergence rate of gradient descent dynamics. To achieve this goal, the paper follows the recent trend of continuous-time perspective of optimization, and proposes to model gradient descent via the...
[ 3, -1, -1, -1, -1, -1, 6, 3 ]
[ 3, -1, -1, -1, -1, -1, 3, 1 ]
[ "iclr_2020_B1xw9n4Kwr", "BylpetaaFr", "SkxOcKDTYS", "HJlaIEjQqr", "HJlaIEjQqr", "iclr_2020_B1xw9n4Kwr", "iclr_2020_B1xw9n4Kwr", "iclr_2020_B1xw9n4Kwr" ]
iclr_2020_B1xDq2EFDH
Analytical Moment Regularizer for Training Robust Networks
Despite the impressive performance of deep neural networks (DNNs) on numerous learning tasks, they still exhibit uncouth behaviours. One puzzling behaviour is the subtle sensitive reaction of DNNs to various noise attacks. Such a nuisance has strengthened the line of research around developing and training noise-robust networks. In this work, we propose a new training regularizer that aims to minimize the probabilistic expected training loss of a DNN subject to a generic Gaussian input. We provide an efficient and simple approach to approximate such a regularizer for arbitrarily deep networks. This is done by leveraging the analytic expression of the output mean of a shallow neural network, avoiding the need for memory and computation expensive data augmentation. We conduct extensive experiments on LeNet and AlexNet on various datasets including MNIST, CIFAR10, and CIFAR100 to demonstrate the effectiveness of our proposed regularizer. In particular, we show that networks trained with the proposed regularizer benefit from a boost in robustness against Gaussian noise equivalent to performing 3-21 folds of noisy data augmentation. Moreover, we empirically show on several architectures and datasets that improving robustness against Gaussian noise, by using the new regularizer, can improve the overall robustness against 6 other types of attacks by two orders of magnitude.
reject
This paper received two weak rejects and one strong reject from the reviewers. The major issues cited were 1) a lack of strong enough baselines or empirical results, 2) novelty with respect to "Certified adversarial robustness via randomized smoothing", and 3) a limitation to Gaussian noise perturbations. Unfortunately, as a result the reviewers agreed that this work was not ready for acceptance. Adding stronger empirical results and a careful treatment of related work would make this a much stronger paper for a future submission.
val
[ "B1lFqwX6YB", "Hkepx6NnjH", "H1ena0N2ir", "S1lWO3V3sB", "B1lawgdQtH", "H1gqLof79B", "SkgrG2NJur" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author" ]
[ "I am not fully convinced by the robustness result in Figure 3. In MNIST, the proposed method is worse than data augmentation. In CIFAR-10, the proposed method does perform better, however, \\tilde{N} for data augmentation is chosen as 2, which is too small in my opinion. The data augmentation's robustness is simil...
[ 3, -1, -1, -1, 1, 3, -1 ]
[ 4, -1, -1, -1, 4, 4, -1 ]
[ "iclr_2020_B1xDq2EFDH", "B1lFqwX6YB", "B1lawgdQtH", "H1gqLof79B", "iclr_2020_B1xDq2EFDH", "iclr_2020_B1xDq2EFDH", "iclr_2020_B1xDq2EFDH" ]
iclr_2020_Syld53NtvH
Expected Tight Bounds for Robust Deep Neural Network Training
Training Deep Neural Networks (DNNs) that are robust to norm bounded adversarial attacks remains an elusive problem. While verification based methods are generally too expensive to robustly train large networks, it was demonstrated by Gowal et al. that bounded input intervals can be inexpensively propagated from layer to layer through deep networks. This interval bound propagation (IBP) approach led to high robustness and was the first to be employed on large networks. However, due to the very loose nature of the IBP bounds, particularly for large/deep networks, the required training procedure is complex and involved. In this paper, we closely examine the bounds of a block of layers composed of an affine layer, followed by a ReLU, followed by another affine layer. To this end, we propose \emph{expected} bounds (true bounds in expectation), which are provably tighter than IBP bounds in expectation. We then extend this result to deeper networks through blockwise propagation and show that we can achieve orders of magnitude tighter bounds compared to IBP. Using these tight bounds, we demonstrate that a simple standard training procedure can achieve an impressive robustness-accuracy trade-off across several architectures on both MNIST and CIFAR10.
reject
The authors propose a new technique for training networks to be robust to adversarial perturbations. They do this by computing bounds on the impact of the worst case adversarial attack, but that only hold under strong assumptions on the distribution of the network weights. While these bounds are not rigorous, the authors show that they can produce networks that improve the robustness-accuracy tradeoff on image classification tasks. While the idea proposed by the authors is interesting, the reviewers had several concerns about this paper: 1) The assumptions required for the bounds to hold are unrealistic and unlikely to hold in practice, especially for convolutional neural networks. 2) The comparisons are not presented in a fair manner that allow the reader to interpret the difference between the nature of certificates computed by the authors and those computed in prior work. 3) The empirical gains are not substantial if one normalizes for the non-rigorous nature of the certificates computed (given that they only hold under hard-to-justify assumptions). The rebuttal phase clarified some issues in the paper, but the fundamental flaws with the approach remain unaddressed. Thus, I recommend rejection and suggest that the authors revisit the assumptions and develop more convincing arguments and/or experiments justifying them for practical deep learning scenarios.
train
[ "S1l1tneioB", "HJg3F5esor", "HJerVcesoS", "B1ggnKljjr", "S1lQKKgjoS", "SylHJFesoB", "B1xvtsu2Fr", "HyxiCSC6Kr", "BJlSowJ15B", "Hy0RtEJ_r" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "1/ I understand but then I think the theory of the paper is not as strong as it sounds. A stronger argument w.r.t. the modelization would make me encline to accept this statement.\n\n2/ Thanks for your answer.\n\n3/ Right. I spent a significant amount of time to validate them..\n\n4/ I still don't understand the i...
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, -1 ]
[ -1, -1, -1, -1, -1, -1, 5, 3, 5, -1 ]
[ "B1ggnKljjr", "HJerVcesoS", "BJlSowJ15B", "S1lQKKgjoS", "HyxiCSC6Kr", "B1xvtsu2Fr", "iclr_2020_Syld53NtvH", "iclr_2020_Syld53NtvH", "iclr_2020_Syld53NtvH", "iclr_2020_Syld53NtvH" ]
iclr_2020_SJlOq34Kwr
Unsupervised Intuitive Physics from Past Experiences
We consider the problem of learning models of intuitive physics from raw, unlabelled visual input. Differently from prior work, in addition to learning general physical principles, we are also interested in learning ``on the fly'' physical properties specific to new environments, based on a small number of environment-specific experiences. We do all this in an unsupervised manner, using a meta-learning formulation where the goal is to predict videos containing demonstrations of physical phenomena, such as objects moving and colliding with a complex background. We introduce the idea of summarizing past experiences in a very compact manner, in our case using dynamic images, and show that this can be used to solve the problem well and efficiently. Empirically, we show, via extensive experiments and ablation studies, that our model learns to perform physical predictions that generalize well in time and space, as well as to a variable number of interacting physical objects.
reject
While the reviewers found the paper interesting, they all raised concerns about the fairly simple experimental settings, which make it hard to appreciate the strengths of the proposed method. During the rebuttal phase, the reviewers still felt this weakness was not sufficiently addressed.
train
[ "r1ewN1AsYH", "SygSlxlNtr", "BygXgVI2iS", "Syx8j-u9ir", "BylPHFMdoS", "rJx_VcMuoH", "Byx6z9GOjB", "SJlXTYzuiH", "rkgFiYzdjH", "Hke-9ZE-tH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "UPDATE: I appreciate the authors' discussions and qualitative results. My main original concern was that the empirical evaluation only studies a single type of situation of inferring physical parameters. Given that the authors claim that the proposed method infers \"on the fly\" physical properties, I would expect...
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2020_SJlOq34Kwr", "iclr_2020_SJlOq34Kwr", "Syx8j-u9ir", "BylPHFMdoS", "r1ewN1AsYH", "Byx6z9GOjB", "Hke-9ZE-tH", "rkgFiYzdjH", "SygSlxlNtr", "iclr_2020_SJlOq34Kwr" ]
iclr_2020_S1eYchEtwH
Learning Human Postural Control with Hierarchical Acquisition Functions
Learning control policies in robotic tasks requires a large number of interactions due to small learning rates, bounds on the updates or unknown constraints. In contrast, humans can infer protective and safe solutions after a single failure or unexpected observation. In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks. A Gaussian Process implements the modeling and the sampling of the acquisition function. This enables rapid learning with large learning rates while a mental replay phase ensures that policy regions that led to failures are inhibited during the sampling process. The features of the hierarchical Bayesian optimization method are evaluated in a simulated and physiological humanoid postural balancing task. We quantitatively compare the human learning performance to our learning approach by evaluating the deviations of the center of mass during training. Our results show that we can reproduce the efficient learning of human subjects in postural control tasks, which provides a testable model for future physiological motor control tasks. In these postural control tasks, our method outperforms standard Bayesian Optimization in the number of interactions to solve the task, in the computational demands, and in the frequency of observed failures.
reject
The paper proposes hierarchical Bayesian optimization (HiBO) for learning control policies from a small number of environment interactions and applies it to the postural control of a humanoid. Both reviewers raised issues with the clarity of presentation, as well as the contribution and overall fit to this venue. The authors' response helped to clarify these issues only marginally. Therefore, primarily due to lack of clarity, I recommend rejecting this paper, but encourage the authors to improve the presentation as per the reviewers' suggestions and resubmit.
train
[ "rkxtaO2mcr", "BkxLPPgFiB", "SJxH6hmPjr", "B1gaVnmPjB", "rJlNlmY5tH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "After rebuttal:\n\nThank you to the authors for responding to my review.\n\n1) The title of the conference is \"... on Learning Representations\". As I stated in the review (\"no, e.g., neural networks are employed\"), neural networks are an *example* of, but do not subsume, all representation learning methods. Th...
[ 1, -1, -1, -1, 1 ]
[ 3, -1, -1, -1, 4 ]
[ "iclr_2020_S1eYchEtwH", "SJxH6hmPjr", "rJlNlmY5tH", "rkxtaO2mcr", "iclr_2020_S1eYchEtwH" ]
iclr_2020_Skltqh4KvB
Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes?
Various methods of measuring unit selectivity have been developed with the aim of better understanding how neural networks work. But the different measures provide divergent estimates of selectivity, and this has led to different conclusions regarding the conditions in which selective object representations are learned and the functional relevance of these representations. In an attempt to better characterize object selectivity, we undertake a comparison of various selectivity measures on a large set of units in AlexNet, including localist selectivity, precision, class-conditional mean activity selectivity (CCMAS), network dissection, the human interpretation of activation maximization (AM) images, and standard signal-detection measures. We find that the different measures provide different estimates of object selectivity, with precision and CCMAS measures providing misleadingly high estimates. Indeed, the most selective units had a poor hit-rate or a high false-alarm rate (or both) in object classification, making them poor object detectors. We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks. In order to generalize these results, we compared selectivity measures on a few units in VGG-16 and GoogLeNet trained on the ImageNet or Places-365 datasets that have been described as 'object detectors'. Again, we find poor hit-rates and high false-alarm rates for object classification.
reject
This paper conducted a number of empirical studies to determine whether units in object-classification CNNs can be used as object detectors. The claimed conclusion is that there are no units sufficiently powerful to be considered object detectors. The three reviewers gave split reviews. While Reviewer #1 is positive about this work, that review is quite brief. In contrast, Reviewers #2 and #3 both rate weak reject, with similar major concerns: the conclusion seems inconclusive and unsurprising. What would be the contribution of this type of conclusion to the ICLR community? In particular, Reviewer #2 provided detailed and well-elaborated comments. The authors made efforts to respond to all reviewers' comments. However, the major concerns remain, and the ratings were not changed. The ACs concur with the major concerns and agree that the paper cannot be accepted in its current state.
train
[ "SJlrOmrDor", "rkgAf7rPsr", "HJgoxGrviB", "HJxqg9VQFB", "B1e6QO3SKH", "B1x6uRTsYB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "REVIEWER 3 WRITES: In overall, the manuscript is well-written and easy-to-follow. I particularly appreciated a kindly presented overview on the literature and thoroughly conducted experiments including a complete user study. One of my key concerns, however, is that I am still not fully convinced whether the key fi...
[ -1, -1, -1, 3, 3, 8 ]
[ -1, -1, -1, 1, 3, 1 ]
[ "HJxqg9VQFB", "B1e6QO3SKH", "iclr_2020_Skltqh4KvB", "iclr_2020_Skltqh4KvB", "iclr_2020_Skltqh4KvB", "iclr_2020_Skltqh4KvB" ]
iclr_2020_H1e552VKPr
Subgraph Attention for Node Classification and Hierarchical Graph Pooling
Graph neural networks have gained significant interest from the research community for both node classification within a graph and graph classification within a set of graphs. An attention mechanism applied on the neighborhood of a node improves the performance of graph neural networks. Typically, it helps to identify a neighbor node which plays a more important role in determining the label of the node under consideration. But in real world scenarios, a particular subset of nodes together, but not the individual nodes in the subset, may be important to determine the label of a node. To address this problem, we introduce the concept of subgraph attention for graphs. To show its efficiency, we use subgraph attention with graph convolution for node classification. We further use subgraph attention for entire-graph classification by proposing a novel hierarchical neural graph pooling architecture. Along with attention over the subgraphs, our pooling architecture also uses attention to determine the important nodes within a level graph and attention to determine the important levels in the whole hierarchy. Competitive performance over the state-of-the-art for both node and graph classification shows the efficiency of the algorithms proposed in this paper.
reject
Initially, two reviewers gave high scores to this paper while both admitted that they know little about this field. The other reviewer raised significant concerns on novelty while claiming high confidence. During discussions, one of the high-scoring reviewers lowered his/her score. Thus, a rejection is recommended.
train
[ "S1gTvA5g5S", "S1eqiTWhiH", "SJlgJ4BVsB", "rkgdHqg2jB", "ryeDBqB4iS", "HJgMfO7NjS", "B1eyv-pYYr", "H1eP1XmtcH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a subgraph attention method for graphs. Recently, many papers have shown that attention is a very important concept. However, there was no attention method for graph input structures, while a particular subset of nodes is very crucial to make the output. \n\nThis paper first proposes the gra...
[ 3, -1, -1, -1, -1, -1, 6, 1 ]
[ 1, -1, -1, -1, -1, -1, 1, 4 ]
[ "iclr_2020_H1e552VKPr", "iclr_2020_H1e552VKPr", "S1gTvA5g5S", "HJgMfO7NjS", "B1eyv-pYYr", "H1eP1XmtcH", "iclr_2020_H1e552VKPr", "iclr_2020_H1e552VKPr" ]
iclr_2020_Hyx5qhEYvH
A SPIKING SEQUENTIAL MODEL: RECURRENT LEAKY INTEGRATE-AND-FIRE
Stemming from neuroscience, spiking neural networks (SNNs) are brain-inspired networks that offer a versatile solution for fault-tolerant and energy-efficient information processing, owing to their "event-driven" characteristic that mirrors the behavior of biological neurons. However, they remain inferior to artificial neural networks (ANNs) on real, complicated tasks and have only achieved good results in rather simple applications. While ANNs are often questioned for their expensive processing costs and lack of essential biological plausibility, the temporal characteristic of RNN-based architectures makes them suitable for incorporating SNNs, imitating the transition of membrane potential through time. We put forward a brain-inspired Recurrent Leaky Integrate-and-Fire (RLIF) model to overcome a series of challenges, such as discrete binary output and dynamical traits. The experimental results show that our recurrent architecture has a strong anti-interference ability and strictly follows the SNN guideline that its spike output is discrete. Furthermore, this architecture achieves good results on neuromorphic datasets and can be extended to tasks like text summarization and video understanding.
reject
This work extends Leaky Integrate and Fire (LIF) by proposing a recurrent version. All reviewers agree that the work as submitted is far too preliminary: the discussion of prior art is missing many results, and the presentation is difficult to follow, incomplete, and contains errors. Even if these concerns were addressed, the benefit of the proposed method is unclear. The authors have not responded. We thus recommend rejection.
val
[ "HkgH65Z1qH", "SJlM3Xs49r", "SyxiwuB_cH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a brain-inspired recurrent neural network architecture, named Recurrent Leaky Integrate-and-Fire (RLIF). Computationally, the model is designed to mimic how biological neurons behave, e.g. producing binary values. The hope is that this will allow such computational models to be easily implement...
[ 3, 1, 1 ]
[ 1, 3, 5 ]
[ "iclr_2020_Hyx5qhEYvH", "iclr_2020_Hyx5qhEYvH", "iclr_2020_Hyx5qhEYvH" ]
iclr_2020_H1eo9h4KPH
Certifying Distributional Robustness using Lipschitz Regularisation
Distributional robust risk (DRR) minimisation has arisen as a flexible and effective framework for machine learning. Approximate solutions based on dualisation have become particularly favorable in addressing the semi-infinite optimisation, and they also provide a certificate of robustness for the worst-case population loss. However, existing methods are restricted to either linear models or very small perturbations, and cannot find the globally optimal solution for restricted nonlinear models such as kernel methods. In this paper we resolve these limitations by upper bounding DRRs with an empirical risk regularised by the Lipschitz constant of the model, including deep neural networks and kernel methods. As an application, we show that this also provides a certificate for adversarial training, and that global solutions can be achieved on product kernel machines in polynomial time.
reject
This work relates adversarial robustness and Lipschitz constant regularization. After the rebuttal period, reviewers still had some concerns. In particular, it was felt that Theorem 1 could likely be deduced from known results in optimal transport, and it would be nice to make this connection explicit. There were also remaining concerns about scalability. The authors are encouraged to continue this work, considering the above points in future revisions.
train
[ "H1gwSPq6KH", "HJeo_A52jS", "BklUTWF2sH", "HklvAjWosB", "Hylg0l_coB", "BkgkZyOqoS", "BJeI4ovcor", "rkgBsPb3tS", "SJllbcg0KH" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Through the lens of Distributional Robust Risk (DRR), this work draws a link between adversarial robustness and Lipschitz constant regularisation. The authors first provide an upper bound of the DRR (with a Wasserstein ball as the ambiguity set) in terms of the true risk and the Lipschitz constant of the loss func...
[ 6, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 1, -1, -1, -1, -1, -1, -1, 1, 3 ]
[ "iclr_2020_H1eo9h4KPH", "HklvAjWosB", "iclr_2020_H1eo9h4KPH", "BJeI4ovcor", "rkgBsPb3tS", "H1gwSPq6KH", "SJllbcg0KH", "iclr_2020_H1eo9h4KPH", "iclr_2020_H1eo9h4KPH" ]
iclr_2020_HyxhqhVKPB
Moniqua: Modulo Quantized Communication in Decentralized SGD
Decentralized stochastic gradient descent (SGD), where parallel workers are connected to form a graph and communicate adjacently, has shown promising results both theoretically and empirically. In this paper we propose Moniqua, a technique that allows decentralized SGD to use quantized communication. We prove in theory that Moniqua communicates a provably bounded number of bits per iteration, while converging at the same asymptotic rate as the original algorithm does with full-precision communication. Moniqua improves upon prior works in that it (1) requires no additional memory, (2) applies to non-convex objectives, and (3) supports biased/linear quantizers. We demonstrate empirically that Moniqua converges faster with respect to wall clock time than other quantized decentralized algorithms. We also show that Moniqua is robust to very low bit-budgets, allowing less than 4-bits-per-parameter communication without affecting convergence when training VGG16 on CIFAR10.
reject
This paper proposed an interesting idea for distributed decentralized training with quantized communication. The method is interesting and elegant. However, it is incremental, does not support arbitrary communication compression, and lacks a convincing explanation of why the modulo operation makes the algorithm better. The experiments are not convincing: comparisons are shown only for the beginning of the optimization, where the algorithm does not achieve state-of-the-art accuracy. Moreover, the modulo hyperparameter is not easy to choose and seems unable to help achieve consensus.
train
[ "B1eLpFlt2r", "S1goKh_njB", "SkgXxRvnir", "rJxNy8vioH", "BylRkeLosr", "rJeJhbGjsB", "BkeOLbMsoS", "ByesrlMjjr", "B1gOjgMjsH", "ryxdBfMoiH", "HygIRkMoiB", "BJlhW4Q6FS", "Hyl8Hk66KS", "Bke4EFvwcr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This papers proposed an interesting idea for distributed decentralized training with quantized communication. The authors show that naively compressing the exchanged model can fail to converge, and introduce to compress the model difference with modulo operation. The idea is further applied to decentralized data a...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2020_HyxhqhVKPB", "SkgXxRvnir", "rJxNy8vioH", "BylRkeLosr", "BkeOLbMsoS", "BJlhW4Q6FS", "B1gOjgMjsH", "Hyl8Hk66KS", "ByesrlMjjr", "iclr_2020_HyxhqhVKPB", "Bke4EFvwcr", "iclr_2020_HyxhqhVKPB", "iclr_2020_HyxhqhVKPB", "iclr_2020_HyxhqhVKPB" ]
iclr_2020_BJepq2VtDB
Training Deep Networks with Stochastic Gradient Normalized by Layerwise Adaptive Second Moments
We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay. In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par or better than well tuned SGD with momentum and Adam/AdamW. Additionally, NovoGrad (1) is robust to the choice of learning rate and weight initialization, (2) works well in a large batch setting, and (3) has two times smaller memory footprint than Adam.
reject
The paper presented an adaptive stochastic gradient descent method with layer-wise normalization and decoupled weight decay and justified it on a variety of tasks. The main concern for this paper is that the novelty is insufficient: the method is a combination of LARS and AdamW with slight modifications. Although the paper has good empirical evaluations, a theoretical convergence proof would make it more convincing.
test
[ "rye6CW6jsr", "rJeX3FRjsH", "ByeIv10jiB", "rJeCpuqnFB", "B1gFrcwAFH", "HJgnbu7RYS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the review, and especially for questions Q1 and Q3 below. Thinking how to answer them was very helpful!\n \nQ1: \"It would take NovoGrad an enormous amount of steps to converge to the optimum if the algorithm is initialized far enough from the optimum. Since NovoGrad is based on normalized gradients,...
[ -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "B1gFrcwAFH", "rJeCpuqnFB", "HJgnbu7RYS", "iclr_2020_BJepq2VtDB", "iclr_2020_BJepq2VtDB", "iclr_2020_BJepq2VtDB" ]
iclr_2020_H1lac2Vtwr
SesameBERT: Attention for Anywhere
Fine-tuning with pre-trained models has achieved exceptional results for many language tasks. In this study, we focused on one such self-attention network model, namely BERT, which has performed well in terms of stacking layers across diverse language-understanding benchmarks. However, in many downstream tasks, information between layers is ignored by BERT for fine-tuning. In addition, although self-attention networks are well-known for their ability to capture global dependencies, room for improvement remains in terms of emphasizing the importance of local contexts. In light of these advantages and disadvantages, this paper proposes SesameBERT, a generalized fine-tuning method that (1) enables the extraction of global information among all layers through Squeeze and Excitation and (2) enriches local information by capturing neighboring contexts via Gaussian blurring. Furthermore, we demonstrated the effectiveness of our approach on the HANS dataset, which is used to determine whether models have adopted shallow heuristics instead of learning underlying generalizations. The experiments revealed that SesameBERT outperformed BERT on both the GLUE benchmark and the HANS evaluation set.
reject
This paper proposes a few architectural modifications to the BERT model for language understanding, which are meant to apply during fine-tuning for target tasks. All three reviewers had concerns about the motivation for at least one of the proposed methods, and none of the three reviewers found the primary experimental results convincing: the proposed methods yield a small improvement on average across target tasks, but one that is not consistent across tasks, and that may not be statistically significant. The authors clarified some points, but did not substantially rebut any of the reviewers' concerns. Even though the reviewers express relatively low confidence, their concerns sound serious and uncontested, so I don't think we can accept this paper as is.
train
[ "r1eSgtSIir", "H1enF_SUir", "rJlbEDBLjr", "Hyl-ENSUjB", "rJezoUU0tr", "rkeEfJhCKS", "r1g_66GAKr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the detailed comments. In what follows, we address in detail the raised issues. Here we explain some common questions.\n\n1. In this paper, because the adjustment is on fine-tuning process related to BERT, the results on GLUE score are not that significant, although there are some signifi...
[ -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, 3, 1, 3 ]
[ "iclr_2020_H1lac2Vtwr", "r1g_66GAKr", "rJezoUU0tr", "rkeEfJhCKS", "iclr_2020_H1lac2Vtwr", "iclr_2020_H1lac2Vtwr", "iclr_2020_H1lac2Vtwr" ]
iclr_2020_SJl1o2NFwS
Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View
The Transformer architecture is widely used in natural language processing. Despite its success, the design principle of the Transformer remains elusive. In this paper, we provide a novel perspective towards understanding the architecture: we show that the Transformer can be mathematically interpreted as a numerical Ordinary Differential Equation (ODE) solver for a convection-diffusion equation in a multi-particle dynamic system. In particular, how words in a sentence are abstracted into contexts by passing through the layers of the Transformer can be interpreted as approximating multiple particles' movement in the space using the Lie-Trotter splitting scheme and Euler's method. Given this ODE perspective, the rich literature of numerical analysis can be brought in to guide us in designing effective structures beyond the Transformer. As an example, we propose to replace the Lie-Trotter splitting scheme with the Strang-Marchuk splitting scheme, a scheme that is more commonly used and has a much lower local truncation error. The Strang-Marchuk splitting scheme suggests that the self-attention and position-wise feed-forward network (FFN) sub-layers should not be treated equally. Instead, in each layer, two position-wise FFN sub-layers should be used, and the self-attention sub-layer is placed in between. This leads to a brand new architecture. Such an FFN-attention-FFN layer is "Macaron-like", and thus we call the network with this new architecture the Macaron Net. Through extensive experiments, we show that the Macaron Net is superior to the Transformer on both supervised and unsupervised learning tasks. The reproducible code can be found on http://anonymized
reject
In this work, the authors interpret the Transformer as a numerical ODE solver modelling multi-particle convection. Guided by this connection, the authors take the Transformer, which uses a feed-forward net over attentions, and create a variant which instead uses an FFN-attention-FFN layer, hence the name Macaron Net. The authors present experiments on the GLUE dataset and on two MT datasets, and they overall report improved performance using their variant of the Transformer. Thus, the main selling point of the paper is how seeing the Transformer in this new light can potentially improve results through the construction of better models. The main criticism from the reviewers is that this story is not entirely convincing, because the proposed variant departs a bit from the theory (R1's comment about the Strang-Marchuk splitting) and the paper does not consider an evaluation of the accuracy of Macaron in solving the underlying set of ODEs (comment from R3). As such, I cannot recommend acceptance of this paper -- I believe another set of revisions would increase the impact of this paper.
train
[ "HJlXYol2jr", "S1eKfLh4KH", "SygNWhNWtr", "HyexZfMiKB" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for the valuable comments. Since the concerns are shared, we decide to answer all of the questions here.\n\n[Regarding Strang-Marchuk splitting]\n\nFirst, Strang-Marchuk splitting can also be used for nonautonomous system (right hand side is varying from time, i.e. \\dot x = f(x,t) but not...
[ -1, 3, 1, 3 ]
[ -1, 4, 5, 3 ]
[ "iclr_2020_SJl1o2NFwS", "iclr_2020_SJl1o2NFwS", "iclr_2020_SJl1o2NFwS", "iclr_2020_SJl1o2NFwS" ]
iclr_2020_HJggj3VKPH
On the Dynamics and Convergence of Weight Normalization for Training Neural Networks
We present a proof of convergence for ReLU networks trained with weight normalization. In the analysis, we consider over-parameterized 2-layer ReLU networks initialized at random and trained with batch gradient descent and a fixed step size. The proof builds on recent theoretical works that bound the trajectory of parameters from their initialization and monitor the network predictions via the evolution of a ''neural tangent kernel'' (Jacot et al. 2018). We discover that training with weight normalization decomposes such a kernel via the so-called ''length-direction decoupling''. This in turn leads to two convergence regimes and can rigorously explain the utility of WeightNorm. From the modified convergence we make a few curious observations, including a natural form of ''lazy training'' where the direction of each weight vector remains stationary.
reject
The goal of this paper is to study the dynamics of convergence of neural network training when weight normalization is used. This is an important and interesting area. The authors focus on analyzing such an effect based on a recent theoretical trend which studies neural network dynamics via the so-called neural tangent kernel (NTK). The authors show an interesting phenomenon of length-direction decoupling. The reviewers raise various points, some of which have been addressed by the authors in their response. Two main points not yet clearly addressed are (1) what is the novelty of the theoretical framework given the existing literature, and (2) what are the benefits of weight normalization based on this theory (e.g., generalization). The authors suggest improved convergence rate and overparameterization dependence (i.e., that with weight normalization the required width is decreased) as a possible advantage. However, as pointed out by reviewer 3, there are existing results which already obtain better results without weight normalization (the authors' response that this is only true in randomized scenarios is actually not accurate). Based on the above, I do not think the paper is ready for publication. That said, I think this is a nice direction and a well-written paper. I recommend the authors revise and resubmit to a future venue. Some suggestions for improvement, in case this is helpful: (1) improve the literature review and discussion of existing results; (2) identify clear benefits of weight normalization. I doubt that improving overparameterization in its existing form is one of them unless you provide a lower bound (I suspect one can eventually obtain even linear overparameterization, i.e., number of parameters proportional to the number of training data, even in the NTK regime without weight normalization). The suggestion by the reviewer to look at generalization might be a good direction to pursue.
train
[ "Bklk18_hKS", "Bye0AsyAKH", "B1ekzG2miS", "B1lHCW3XsB", "ryl4FZ2moH", "SJepkb3XjB", "rygJndyntr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "*Summary of the contributions:*\nThis paper deals with convergence, of the hidden layer of a 2-layers Relu Network trained with Weight Normalization which consists in decoupling the direction and the magnitude for the pre-activation layers. The authors show, under mild assumption, the linear convergence (with high...
[ 3, 6, -1, -1, -1, -1, 3 ]
[ 4, 3, -1, -1, -1, -1, 4 ]
[ "iclr_2020_HJggj3VKPH", "iclr_2020_HJggj3VKPH", "rygJndyntr", "ryl4FZ2moH", "Bklk18_hKS", "Bye0AsyAKH", "iclr_2020_HJggj3VKPH" ]
iclr_2020_Ske-ih4FPS
Unsupervised Few Shot Learning via Self-supervised Training
Learning from limited exemplars (few-shot learning) is a fundamental, unsolved problem that has been laboriously explored in the machine learning community. However, current few-shot learners are mostly supervised and rely heavily on a large amount of labeled examples. Unsupervised learning is a more natural procedure for cognitive mammals and has produced promising results in many machine learning tasks. In the current study, we develop a method to learn an unsupervised few-shot learner via self-supervised training (UFLST), which can effectively generalize to novel but related classes. The proposed model consists of two alternating processes, progressive clustering and episodic training. The former generates pseudo-labeled training examples for constructing episodic tasks; the latter trains the few-shot learner using the generated episodic tasks, which further optimizes the feature representations of data. The two processes facilitate each other, and eventually produce a high-quality few-shot learner. Using the benchmark dataset Omniglot, we show that our model outperforms other unsupervised few-shot learning methods to a large extent and approaches the performance of supervised methods. Using the benchmark dataset Market1501, we further demonstrate the feasibility of our model in a real-world application on person re-identification.
reject
This paper proposes an approach for unsupervised meta-learning for few-shot learning that iteratively combines clustering and episodic learning. The approach is interesting, and the topic is of interest to the ICLR community. Further, it is nice to see experiments on a more real-world setting with the Market1501 dataset. However, the paper lacks any meaningful comparison to prior works on unsupervised meta-learning. While it is accurate that the architecture and/or assumptions used in this paper are somewhat different from those in prior works, it's important to find a way to compare to at least one of these prior methods in a meaningful way (e.g. by setting up a controlled comparison by running these prior methods in the experimental set-up considered in this work). Without such a comparison, it's impossible to judge the significance of this work in the context of prior papers. The paper isn't ready for publication at ICLR.
train
[ "H1ec-5optr", "rkegS1uhiS", "rJl1DEF9oH", "B1ezFCx2oB", "HygBn4d3oB", "SyekRlaTKB", "rkxqIg4jFr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper aims to conduct few-shot learning on unlabeled data (instead of on training tasks with few-shot labeled data per task). The proposed algorithm is a trivial combination of existing clustering method and a few-shot learning method, i.e., the clustering provides pseudo labels, from which a series of few-sh...
[ 1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2020_Ske-ih4FPS", "H1ec-5optr", "SyekRlaTKB", "rkxqIg4jFr", "iclr_2020_Ske-ih4FPS", "iclr_2020_Ske-ih4FPS", "iclr_2020_Ske-ih4FPS" ]
iclr_2020_r1xbj2VKvr
Dual Graph Representation Learning
Graph representation learning embeds nodes in large graphs as low-dimensional vectors and benefits many downstream applications. Most embedding frameworks, however, are inherently transductive and unable to generalize to unseen nodes or learn representations across different graphs. Inductive approaches, such as GraphSAGE, neglect different contexts of nodes and cannot learn node embeddings dually. In this paper, we present an unsupervised dual encoding framework, \textbf{CADE}, to generate context-aware representations of nodes by combining real-time neighborhood structure with neighbor-attentioned representation, and preserving extra memory of known nodes. Experimentally, we show that our approach is effective by comparing to state-of-the-art methods.
reject
This work proposes context-aware representation of graph nodes leveraging attention over neighbors (as already done in previous work). Reviewers' concerns about lack of novelty, lack of clarity of the paper, and lack of comparison to state-of-the-art methods have not been addressed at all. We recommend rejection.
train
[ "rklZ33titH", "HJl_y-pTFB", "S1lWKbq0Yr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a dual graph representation method to learn the representation of nodes in a graph. In particular, it learns the embedding of paired nodes simultaneously for multiple times, and use the mean values as the final representation. The experimental result demonstrates some improvement over existing ...
[ 3, 3, 1 ]
[ 5, 4, 5 ]
[ "iclr_2020_r1xbj2VKvr", "iclr_2020_r1xbj2VKvr", "iclr_2020_r1xbj2VKvr" ]
iclr_2020_BkxfshNYwB
Mincut Pooling in Graph Neural Networks
The advance of node pooling operations in Graph Neural Networks (GNNs) has lagged behind the feverish design of new message-passing techniques, and pooling remains an important and challenging endeavor for the design of deep architectures. In this paper, we propose a pooling operation for GNNs that leverages a differentiable unsupervised loss based on the minCut optimization objective. For each node, our method learns a soft cluster assignment vector that depends on the node features, the target inference task (e.g., a graph classification loss), and, thanks to the minCut objective, also on the connectivity structure of the graph. Graph pooling is obtained by applying the matrix of assignment vectors to the adjacency matrix and the node features. We validate the effectiveness of the proposed pooling method on a variety of supervised and unsupervised tasks.
reject
Two reviewers are negative on this paper, while the other reviewer is positive. Overall, the paper does not meet the bar for ICLR. A reject is recommended.
train
[ "BJgfMuCt5S", "rylcLJbsoS", "HJlpDcugiS", "r1lRGR_HsH", "HyggzeueoB", "Bylwcn9ejH", "Hyey1w5AtS", "H1gU4xse9B" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a graph pooling method by utilizing the Mincut regularization loss. It is an interesting idea and performs well in a number of tasks. However, due to the limitation of novelty and poor organizations, this paper cannot meet the standard of ICLR. The detailed reasons why I give a weak reject are ...
[ 3, -1, -1, -1, -1, -1, 8, 3 ]
[ 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_BkxfshNYwB", "HJlpDcugiS", "H1gU4xse9B", "iclr_2020_BkxfshNYwB", "BJgfMuCt5S", "Hyey1w5AtS", "iclr_2020_BkxfshNYwB", "iclr_2020_BkxfshNYwB" ]