paper_id: string (length 19–21)
paper_title: string (length 8–170)
paper_abstract: string (length 8–5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29–10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
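The fields listed above (six string columns plus six per-review lists) can be exercised with a small validation helper. This is a hedged sketch: the field names come from the schema, but `validate_record` and the sample values are illustrative placeholders of my own, not part of the dataset or any real record.

```python
# Sketch of a record validator for the schema above. Field names follow the
# schema; the sample values below are invented placeholders, not real rows.

STRING_FIELDS = [
    "paper_id", "paper_title", "paper_abstract",
    "paper_acceptance", "meta_review", "label",
]
LIST_FIELDS = [
    "review_ids", "review_writers", "review_contents",
    "review_ratings", "review_confidences", "review_reply_tos",
]

def validate_record(record):
    """Check that all string fields are strings, all list fields are lists,
    and that the per-review lists are aligned (equal length)."""
    for field in STRING_FIELDS:
        if not isinstance(record.get(field), str):
            return False
    lengths = set()
    for field in LIST_FIELDS:
        value = record.get(field)
        if not isinstance(value, list):
            return False
        lengths.add(len(value))
    return len(lengths) == 1  # all six review lists must line up

sample = {
    "paper_id": "iclr_2020_HylznxrYDr",
    "paper_title": "FinBERT",
    "paper_abstract": "...",
    "paper_acceptance": "reject",
    "meta_review": "...",
    "label": "train",
    "review_ids": ["r1", "r2"],
    "review_writers": ["official_reviewer", "official_reviewer"],
    "review_contents": ["...", "..."],
    "review_ratings": [3, 3],
    "review_confidences": [5, 4],
    "review_reply_tos": ["iclr_2020_HylznxrYDr", "iclr_2020_HylznxrYDr"],
}
```

The alignment check matters because ratings, confidences, and reply targets are positional: the i-th entry of each list describes the i-th review thread entry.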
iclr_2020_HylznxrYDr
FINBERT: FINANCIAL SENTIMENT ANALYSIS WITH PRE-TRAINED LANGUAGE MODELS
While many sentiment classification solutions report high accuracy scores on product or movie review datasets, the performance of these methods in niche domains such as finance still largely falls behind. The reason for this gap is the domain-specific language, which decreases the applicability of existing models, and the lack of quality labeled data for learning the new context of positive and negative in the specific domain. Transfer learning has been shown to be successful in adapting to new domains without large training data sets. In this paper, we explore the effectiveness of NLP transfer learning in financial sentiment classification. We introduce FinBERT, a language model based on BERT, which improves the state-of-the-art performance by 14 percentage points on a financial sentiment classification task on the Financial PhraseBank dataset.
reject
This paper presents FinBERT, a BERT-based model that is further trained on a financial corpus and evaluated on Financial PhraseBank and Financial QA. The authors show that FinBERT slightly outperforms baseline methods on both tasks. The reviewers agree that the novelty is limited and that this seems to be an application of BERT to a financial dataset. There are many cases where it is okay not to present something entirely novel in terms of the model, as long as a paper still provides new insights elsewhere. Unfortunately, the new experiments in this paper are also not convincing. The improvements are very minor on small evaluation datasets, which makes the main contributions of the paper insufficient for a venue such as ICLR. The authors did not respond to any of the reviewers' concerns. I recommend rejecting this paper.
train
[ "r1g2yTe2Yr", "HkeWvWtCtr", "H1xnFZHVqH", "r1gkpwkhcH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a domain adaptation type of task via proposing fine-tuning of pre-trained models such as BERT on data from financial domains. The paper starts off with a good motivation about requiring some kind of domain adaptation particularly when performing tasks such as sentiment analysis on data sets fro...
[ 3, 3, 1, 3 ]
[ 5, 4, 1, 4 ]
[ "iclr_2020_HylznxrYDr", "iclr_2020_HylznxrYDr", "iclr_2020_HylznxrYDr", "iclr_2020_HylznxrYDr" ]
iclr_2020_Skx73lBFDS
Combining graph and sequence information to learn protein representations
Computational methods that infer the function of proteins are key to understanding life at the molecular level. In recent years, representation learning has emerged as a powerful paradigm to discover new patterns among entities as varied as images, words, speech, and molecules. In typical representation learning, there is only one source of data or one level of abstraction at which the learned representation occurs. However, proteins can be described by their primary, secondary, tertiary, and quaternary structure, or even as nodes in protein-protein interaction networks. Given that protein function is an emergent property of all these levels of interaction, in this work we learn joint representations from both amino acid sequences and multilayer networks representing tissue-specific protein-protein interactions. Using these representations, we train machine learning models that outperform existing methods on the task of tissue-specific protein function prediction on 10 out of 13 tissues. Furthermore, we outperform existing methods by 19% on average.
reject
The paper presents a linear classifier based on a concatenation of two types of features for protein function prediction. The two feature sets are constructed using methods from previous papers, based on peptide sequence and protein-protein interactions. All the reviewers agree that the problem is an important one, but the paper as presented provides no methodological advance and only weak empirical evidence of better protein function prediction. Therefore the paper would require a major revision before being suitable for ICLR.
train
[ "HJl3xM52tr", "S1x_LgRTtr", "S1gcZeCCFB", "BJgIvFnadr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "This work tries to predict the protein functional activation on a tissue by combining the information from amino acid sequence, and tissue-specific protein-protein interaction network. The authors claim that with this joint representation, their model outperforms current methods (Omhnet) on 10 out of 13 tissues by...
[ 1, 1, 3, -1 ]
[ 1, 4, 1, -1 ]
[ "iclr_2020_Skx73lBFDS", "iclr_2020_Skx73lBFDS", "iclr_2020_Skx73lBFDS", "iclr_2020_Skx73lBFDS" ]
iclr_2020_HyxQ3gSKvr
Variational Information Bottleneck for Unsupervised Clustering: Deep Gaussian Mixture Embedding
In this paper, we develop an unsupervised generative clustering framework that combines the variational information bottleneck and the Gaussian mixture model. Specifically, in our approach we use the variational information bottleneck method and model the latent space as a mixture of Gaussians. We derive a bound on the cost function of our model that generalizes the evidence lower bound (ELBO), and provide a variational inference type algorithm that allows computing it. In the algorithm, the coders’ mappings are parametrized using neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Numerical results on real datasets are provided to support the efficiency of our method.
reject
This paper proposes to use a mixture of Gaussians to variationally encode high-dimensional data through a latent space. The latent codes are constrained using the variational information bottleneck machinery. While the paper is well-motivated and relatively well-written, it contains minimal novel ideas. The consensus among the reviews and the lack of a rebuttal make it clear that this paper should be significantly augmented with novel material before being published at ICLR.
train
[ "HJl3w-spFH", "Byx_WBDRtr", "rkgw2B4Jqr", "BJx9ZbkrdS", "Byl8Wur9Dr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "This paper considers the autoencoder model combining the usual information bottleneck and the Gaussian mixture model (GMM). Using an approximation to deal with GMMs, the authors derive a bound on the cost function generalizing the ELBO. The performance of the proposed method is tested on three benchmark datasets a...
[ 3, 3, 3, -1, -1 ]
[ 3, 4, 3, -1, -1 ]
[ "iclr_2020_HyxQ3gSKvr", "iclr_2020_HyxQ3gSKvr", "iclr_2020_HyxQ3gSKvr", "Byl8Wur9Dr", "iclr_2020_HyxQ3gSKvr" ]
iclr_2020_BJg73xHtvr
Constant Curvature Graph Convolutional Networks
Interest has been rising lately towards methods representing data in non-Euclidean spaces, e.g. hyperbolic or spherical. These geometries provide specific inductive biases useful for certain real-world data properties, e.g. scale-free or hierarchical graphs are best embedded in a hyperbolic space. However, the very popular class of graph neural networks is currently limited to model data only via Euclidean node embeddings and associated vector space operations. In this work, we bridge this gap by proposing mathematically grounded generalizations of graph convolutional networks (GCN) to (products of) constant curvature spaces. We do this by i) extending the gyro-vector space theory from hyperbolic to spherical spaces, providing a unified and smooth view of the two geometries, ii) leveraging gyro-barycentric coordinates that generalize the classic Euclidean concept of the center of mass. Our class of models gives strict generalizations in the sense that they recover their Euclidean counterparts when the curvature goes to zero from either side. Empirically, our methods outperform different types of classic Euclidean GCNs in the tasks of node classification and minimizing distortion for symbolic data exhibiting non-Euclidean behavior, according to their discrete curvature.
reject
This paper proposes using non-Euclidean spaces for GCNs, leveraging the gyrovector space formalism. The model allows products of constant-curvature spaces, both positive and negative, generalizing hyperbolic embeddings. Reviewers had mixed impressions of this paper. Whereas some found its methodology compelling and its empirical evaluation satisfactory, it was generally perceived that this paper would greatly benefit from another round of reviewing. In particular, the authors should improve the readability of the main text and provide a more thorough discussion of related recent (and concurrent) work.
train
[ "HJlp1gQsjr", "H1lgGrMjiH", "ByetjhULsH", "BJlcvhU8iS", "H1xjf2ILjr", "HyxrcqSgsH", "rJxr_fmNFS", "B1lrh6UpYS", "H1eQwEeRYS" ]
[ "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the response/revision. It clarifies all my questions", "We would love to hear your feedback on our substantially improved paper (based on your suggestions). Unfortunately, ICLR's tight schedule would prevent us to answer any additional questions after 15th of November. Thank you!", "We would like...
[ -1, -1, -1, -1, -1, -1, 1, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "BJlcvhU8iS", "ByetjhULsH", "rJxr_fmNFS", "B1lrh6UpYS", "H1eQwEeRYS", "iclr_2020_BJg73xHtvr", "iclr_2020_BJg73xHtvr", "iclr_2020_BJg73xHtvr", "iclr_2020_BJg73xHtvr" ]
iclr_2020_S1lVhxSYPH
Ternary MobileNets via Per-Layer Hybrid Filter Banks
The MobileNets family of computer vision neural networks has fueled tremendous progress in the design and organization of resource-efficient architectures in recent years. New applications with stringent real-time requirements on highly constrained devices require further compression of already compute-efficient MobileNets-like networks. Model quantization is a widely used technique to compress and accelerate neural network inference, and prior works have quantized MobileNets to 4–6 bits, albeit with a modest to significant drop in accuracy. While quantization to sub-byte values (i.e. precision ≤ 8 bits) has been valuable, even further quantization of MobileNets to binary or ternary values is necessary to realize significant energy savings and possibly runtime speedups on specialized hardware, such as ASICs and FPGAs. Under the key observation that convolutional filters at each layer of a deep neural network may respond differently to ternary quantization, we propose a novel quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The layer-wise hybrid filter banks essentially combine the strengths of full-precision and ternary weight filters to derive a compact, energy-efficient architecture for MobileNets. Using this proposed quantization method, we quantized a substantial portion of the weight filters of MobileNets to ternary values, resulting in 27.98% savings in energy and a 51.07% reduction in model size, while achieving comparable accuracy and no degradation in throughput on specialized hardware in comparison to the baseline full-precision MobileNets.
reject
The paper presents a quantization method that generates per-layer hybrid filter banks consisting of full-precision and ternary weight filters for MobileNets. The paper is well-written. However, it is incremental. Moreover, empirical results are not convincing enough. Experiments are only performed on ImageNet. Comparison on more datasets and more model architectures should be performed.
train
[ "BygTB6qnsr", "B1g-Xoq2sr", "ByxO8q53oS", "ryxEfElTYB", "Bkxmu76RYr", "S1g1j74b9r" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the thoughtful feedback. Please find the responses inline below.\n\n(1) I think the authors over-state their claims of no loss in accuracy, in Table 2 we see a clear loss in accuracy from MobileNets to MobileNets + Hybrid Filter Banks.\n\nPlease note that the results reported for hybrid f...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 1, 3, 4 ]
[ "ryxEfElTYB", "Bkxmu76RYr", "S1g1j74b9r", "iclr_2020_S1lVhxSYPH", "iclr_2020_S1lVhxSYPH", "iclr_2020_S1lVhxSYPH" ]
iclr_2020_SklEhlHtPr
DeepPCM: Predicting Protein-Ligand Binding using Unsupervised Learned Representations
In-silico protein-ligand binding prediction is an ongoing area of research in computational chemistry and machine learning based drug discovery, as an accurate predictive model could greatly reduce the time and resources necessary for the detection and prioritization of possible drug candidates. Proteochemometric modeling (PCM) attempts to make an accurate model of the protein-ligand interaction space by combining explicit protein and ligand descriptors. This requires the creation of information-rich, uniform and computer interpretable representations of proteins and ligands. Previous work in PCM modeling relies on pre-defined, handcrafted feature extraction methods, and many methods use protein descriptors that require alignment or are otherwise specific to a particular group of related proteins. However, recent advances in representation learning have shown that unsupervised machine learning can be used to generate embeddings which outperform complex, human-engineered representations. We apply this reasoning to propose a novel proteochemometric modeling methodology which, for the first time, uses embeddings generated via unsupervised representation learning for both the protein and ligand descriptors. We evaluate performance on various splits of a benchmark dataset, including a challenging split that tests the model’s ability to generalize to proteins for which bioactivity data is greatly limited, and we find that our method consistently outperforms state-of-the-art methods.
reject
This paper uses unsupervised learning to create useful representations to improve the performance of models in predicting protein-ligand binding. After reviewers had time to consider each other's comments, there was consensus that the current work is too lacking in novelty on the modeling side to warrant publication in ICLR. Additionally, current experiments are lacking comparisons with important baselines. The work in its current form may be better suited for a domain journal.
train
[ "BkxnneP9cS", "HJxiMPIitS", "H1lbzsATKB", "Bylv0OAStB", "SJg0owRHYS", "r1xmGnv7KH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public" ]
[ "The authors present a model with state-of-the-art performance for predicting protein-ligand affinity and provide a thorough set of benchmarks to illustrate the superiority of combining learned low-dimensional embedding representations of both ligands and proteins. The authors then show that these learned represen...
[ 3, 1, 3, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1 ]
[ "iclr_2020_SklEhlHtPr", "iclr_2020_SklEhlHtPr", "iclr_2020_SklEhlHtPr", "SJg0owRHYS", "r1xmGnv7KH", "iclr_2020_SklEhlHtPr" ]
iclr_2020_SkgS2lBFPS
A Bilingual Generative Transformer for Semantic Sentence Embedding
Semantic sentence embedding models take natural language sentences and turn them into vectors, such that similar vectors indicate similarity in the semantics between the sentences. Bilingual data offers a useful signal for learning such embeddings: properties shared by both sentences in a translation pair are likely semantic, while divergent properties are likely stylistic or language-specific. We propose a deep latent variable model that attempts to perform source separation on parallel sentences, isolating what they have in common in a latent semantic vector, and explaining what is left over with language-specific latent vectors. Our proposed approach differs from past work on semantic sentence encoding in two ways. First, by using a variational probabilistic framework, we introduce priors that encourage source separation, and can use our model’s posterior to predict sentence embeddings for monolingual data at test time. Second, we use high-capacity transformers as both data generating distributions and inference networks, contrasting with most past work on sentence embeddings. In experiments, our approach substantially outperforms the state-of-the-art on a standard suite of semantic similarity evaluations. Further, we demonstrate that our approach yields the largest gains on more difficult subsets of the test data where simple word overlap is not a good indicator of similarity.
reject
This paper presents a model for building sentence embeddings using a generative transformer model that separately encodes semantic aspects (which are common across languages) and language-specific aspects. The authors evaluate their embeddings in a non-parametric way (i.e., on STS tasks by measuring cosine similarity) and find their method to outperform other sentence embedding methods. The main concern that both reviewers (and I) have about this work relates to its evaluation. While the authors present a set of very interesting difficult evaluation and probing splits aiming at quantifying the linguistic behaviour of their model, it is unsatisfying that the authors do not evaluate their model extensively on standard classification embedding benchmarks (e.g., as in GLUE). The authors comment: “[their model in producing embeddings] it isn’t as strong when using classification for final predictions. This indicates that the embeddings learned by our approach may be most useful when no downstream training is possible”. If this is true, why is it the case, and isn’t it quite restrictive? I think this work is interesting, with a nice analysis, but the current empirical results are borderline (yes, the model is better on STS, but this is quite a limited result compared to using these embeddings as features in classification tasks). As such, I do not recommend this paper for acceptance, but I do hope that the authors will keep improving their method and will make it work in more general problems involving classification tasks.
train
[ "rkgXEC3ojr", "HygUn6niiB", "Byxm962sjH", "BJxQNnnjjH", "r1gT3o3iiS", "SJgs3cCaFH", "SJeqGhkAYr", "rJxpiIYr5r" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks you for the review and comments! We have addressed them below:\n\n\"compare their model with many state-of-the-art models that could produce sentence embeddings. However, how they produce the sentence embeddings with existing models is not convincing. For example, why using the hidden states of the last fou...
[ -1, -1, -1, -1, -1, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "SJgs3cCaFH", "Byxm962sjH", "SJeqGhkAYr", "rJxpiIYr5r", "iclr_2020_SkgS2lBFPS", "iclr_2020_SkgS2lBFPS", "iclr_2020_SkgS2lBFPS", "iclr_2020_SkgS2lBFPS" ]
iclr_2020_HkgU3xBtDS
REFINING MONTE CARLO TREE SEARCH AGENTS BY MONTE CARLO TREE SEARCH
Reinforcement learning methods that continuously train neural networks via episode generation with game tree search have been successful in two-player complete-information deterministic games such as chess, shogi, and Go. However, there are only reports of practical cases, and there is little evidence to guarantee the stability and final performance of the learning process. This research focuses on the coordination of episode generation. By regarding the entire system as a game tree search, the new method can handle the trade-off between exploitation and exploration during episode generation. Experiments with a small problem showed that it has robust performance compared to the existing method, Alpha Zero.
reject
This paper is a clear reject. The paper is very poorly written and contains zero citations. Also, the reviewers have a hard time understanding what the paper is about.
train
[ "r1e8or7xir", "SylnaCGliB", "SklCBsGeoS", "rklDTxPsKB", "r1xxIGWhFB", "B1g9iFA2KH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you deeply for your reviewing my paper and for your kindness.\nI want to improve my paper writing skills and I hope to earn more money to feel free to ask for proofreading before the next chance.\n\nI hope to know whether there has been other publishing for this topic (meta procedure for MCTS based reinforce...
[ -1, -1, -1, 1, 1, 1 ]
[ -1, -1, -1, 5, 5, 3 ]
[ "r1xxIGWhFB", "rklDTxPsKB", "B1g9iFA2KH", "iclr_2020_HkgU3xBtDS", "iclr_2020_HkgU3xBtDS", "iclr_2020_HkgU3xBtDS" ]
iclr_2020_Hygv3xrtDr
Sparse Skill Coding: Learning Behavioral Hierarchies with Sparse Codes
Many approaches to hierarchical reinforcement learning aim to identify sub-goal structure in tasks. We consider an alternative perspective based on identifying behavioral `motifs'---repeated action sequences that can be compressed to yield a compact code of action trajectories. We present a method for iteratively compressing action trajectories to learn nested behavioral hierarchies of arbitrary depth, with actions of arbitrary length. The learned temporally extended actions provide new action primitives that can participate in deeper hierarchies as the agent learns. We demonstrate the relevance of this approach for tasks with non-trivial hierarchical structure and show that the approach can be used to accelerate learning in recursively more complex tasks through transfer.
reject
The paper proposes an interesting idea of identifying repeated action sequences, or behavioral motifs, in the context of hierarchical reinforcement learning, using sparsity/compression. While this is a fresh and useful idea, it appears that the paper requires more work, both in terms of presentation/clarity and in terms of stronger empirical results.
train
[ "ByggG2tniH", "SJlAblT6YB", "rye7qxJAKS", "HklhbwPTYr" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate the thoughtful feedback on our paper from all 3 reviewers. We believe that substantial revisions to the exposition and experiments are required to properly communicate and demonstrate our approach. This includes 1) comparisons to alternative sequence compression based approaches, 2) a clearer analysi...
[ -1, 1, 6, 3 ]
[ -1, 5, 3, 3 ]
[ "iclr_2020_Hygv3xrtDr", "iclr_2020_Hygv3xrtDr", "iclr_2020_Hygv3xrtDr", "iclr_2020_Hygv3xrtDr" ]
iclr_2020_HkxDheHFDr
LAVAE: Disentangling Location and Appearance
We propose a probabilistic generative model for unsupervised learning of structured, interpretable, object-based representations of visual scenes. We use amortized variational inference to train the generative model end-to-end. The learned representations of object location and appearance are fully disentangled, and objects are represented independently of each other in the latent space. Unlike previous approaches that disentangle location and appearance, ours generalizes seamlessly to scenes with many more objects than encountered in the training regime. We evaluate the proposed model on multi-MNIST and multi-dSprites data sets.
reject
This paper presents a VAE approach where the model learns representations while disentangling location and appearance information. The reviewers found issues with the experimental evaluation of the paper and have given much useful feedback. None of the reviewers were willing to change their score during the discussion period. With the current scores, the paper does not make the cut for ICLR, and I recommend rejecting this paper.
train
[ "B1xFs1anFH", "SygdvztCKB", "HyeSe_lhoS", "HJeeQSlnsS", "H1xyP11TYH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "This paper introduces a compositional generative model of images, where the image is described by a variable number of latent variables. Moreover, the latent variables are disentangled, in the sense that they represent different parts of the scene, and where appearance and location are described separately. While ...
[ 1, 1, -1, -1, 3 ]
[ 5, 3, -1, -1, 3 ]
[ "iclr_2020_HkxDheHFDr", "iclr_2020_HkxDheHFDr", "HJeeQSlnsS", "iclr_2020_HkxDheHFDr", "iclr_2020_HkxDheHFDr" ]
iclr_2020_BkePneStwH
XD: Cross-lingual Knowledge Distillation for Polyglot Sentence Embeddings
Current state-of-the-art results in multilingual natural language inference (NLI) are based on tuning XLM (a pre-trained polyglot language model) separately for each language involved, resulting in multiple models. We reach significantly higher NLI results with a single model for all languages via multilingual tuning. Furthermore, we introduce cross-lingual knowledge distillation (XD), where the same polyglot model is used both as teacher and student across languages to improve its sentence representations without using the end-task labels. When used alone, XD beats multilingual tuning for some languages and the combination of them both results in a new state-of-the-art of 79.2% on the XNLI dataset, surpassing the previous result by absolute 2.5%. The models and code for reproducing our experiments will be made publicly available after de-anonymization.
reject
This paper proposes a method for transferring an NLP model trained on one language to a new language, without using labeled data in the new language. Reviewers were split on their recommendations, but the reviews collectively raised a number of concerns which, together, make me uncomfortable accepting the paper. Reviewers were not convinced by the value of the experimental setting described in the paper: at least in the experiments conducted here, the claim that the model is distinctively effective depends on ruling out a large class of models arbitrarily. It would likely be valuable to find a concrete task/dataset/language combination that more closely aligns with the motivations for this work, and to evaluate whether the proposed method is genuinely the most effective practical option in that setting. Further, the reviewers raise a number of points involving baseline implementations, language families, and other issues that collectively make me doubt that the paper is fully sound in its current form.
train
[ "rJewS5lLtB", "rkeSEwFnjS", "SJxsnUY3iH", "ByeBtItnor", "r1xdX8FhoH", "S1e0ZJj35B", "BygAwA40FH", "rJgnL8waqB", "Skxg25ViDB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposes two improved strategies for fine-tuning XLM (a multilingual variant of BERT) for cross-lingual NLI. First of all, it shows that fine-tuning a single model on the combination of all languages (the original English data from MultiNLI and their MT translation into the rest of languages) performs b...
[ 1, -1, -1, -1, -1, 6, 6, 3, -1 ]
[ 5, -1, -1, -1, -1, 4, 4, 4, -1 ]
[ "iclr_2020_BkePneStwH", "rJewS5lLtB", "BygAwA40FH", "S1e0ZJj35B", "rJgnL8waqB", "iclr_2020_BkePneStwH", "iclr_2020_BkePneStwH", "iclr_2020_BkePneStwH", "iclr_2020_BkePneStwH" ]
iclr_2020_Bklu2grKwB
Learning RNNs with Commutative State Transitions
Many machine learning tasks involve analysis of set valued inputs, and thus the learned functions are expected to be permutation invariant. Recent works (e.g., Deep Sets) have sought to characterize the neural architectures which result in permutation invariance. These typically correspond to applying the same pointwise function to all set components, followed by sum aggregation. Here we take a different approach to such architectures and focus on recursive architectures such as RNNs, which are not permutation invariant in general, but can implement permutation invariant functions in a very compact manner. We first show that commutativity and associativity of the state transition function result in permutation invariance. Next, we derive a regularizer that minimizes the degree of non-commutativity in the transitions. Finally, we demonstrate that the resulting method outperforms other methods for learning permutation invariant models, due to its use of recursive computation.
reject
This paper examines learning problems where the network outputs are intended to be invariant to permutations of the network inputs. Some past approaches for this problem setting have enforced permutation-invariance by construction. This paper takes a different approach, using a recurrent neural network that passes over the data. The paper proves the network will be permutation invariant when the internal state transition function is associative and commutative. The paper then focuses on the commutative property by describing a regularization objective that pushes the recurrent network towards becoming commutative. Experimental results with this regularizer show potentially better performance than DeepSet, another architecture that is designed for permutation invariance. The subsequent discussion of the paper raised several concerns with the current version of the paper. The theoretical contributions for full permutation-invariance follow quickly from the prior DeepSet results. The paper's focus on commutative regularization in the absence of associative regularization is not compelling if the objective is really for permutation invariance. The experimental results were limited in scope. These results lacked error bars and an examination of the relevance of associativity. The reviewers also identified several related lines of work which could provide additional context for the results that were missing from the paper. This paper is not ready for publication due to the multiple concerns raised by the reviewers. The paper would become stronger by addressing these concerns, particularly the associativity of the transition function, empirical results, and related work.
train
[ "ByxIDYM0KS", "BJgfw1hYiS", "H1ebzJ2FsB", "B1xsq6jYsH", "rye8e_rTFr", "SkeVXO9RKH" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The rebuttal did not address my concerns convincingly. There were also simple fixes that the authors could have implemented but they decided not to update the paper. I will keep my original assessment. \n\n--------------\n\nThe premise of the work is very interesting: RNNs that are permutation-invariant. Unfortuna...
[ 1, -1, -1, -1, 1, 3 ]
[ 5, -1, -1, -1, 4, 4 ]
[ "iclr_2020_Bklu2grKwB", "rye8e_rTFr", "ByxIDYM0KS", "SkeVXO9RKH", "iclr_2020_Bklu2grKwB", "iclr_2020_Bklu2grKwB" ]
iclr_2020_BkxthxHYvr
Conditional generation of molecules from disentangled representations
Though machine learning approaches have shown great success in estimating properties of small molecules, the inverse problem of generating molecules with desired properties remains challenging. This difficulty is in part because the set of molecules which have a given property is structurally very diverse. Treating this inverse problem as a conditional distribution estimation task, we draw upon work in learning disentangled representations to learn a conditional distribution over molecules given a desired property, where the molecular structure is encoded in a continuous latent random variable. By including property information as an input factor independent from the structure representation, one can perform conditional molecule generation via a ``style transfer'' process, in which we explicitly set the property to a desired value at generation time. In contrast to existing approaches, we disentangle the latent factors from the property factors using a regularization term which constrains the generated molecules to have the property provided to the generation network, no matter how the latent factor changes.
reject
The paper aims to generate molecules with desired properties using a variant of supervised variational auto-encoders, with disentanglement encouraged among the style factors. The reviewers point out that the idea is nice, but the authors avoid quantitative comparison with state-of-the-art graph-based generative models. In particular, since JT-VAE is widely acknowledged in the community as a strong baseline and is itself a VAE-based model, it is important to do these comparisons.
train
[ "BJeynP12or", "S1eljSynoS", "BJewJEJ2iH", "H1lScMJniH", "B1l9h1k3sB", "HJlbqymTYH", "Byo_bZRtB", "HJeJEM-0tS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n# Figure 6 and overfitting\nThe step-like nature of this figure is actually primarily due to the fact that not every “structure” is compatible with different target values of the property. It is true that for a z that is encoded from certain x, when we combine it with different ys, for many values of y, it tends...
[ -1, -1, -1, -1, -1, 3, 1, 6 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "S1eljSynoS", "HJlbqymTYH", "H1lScMJniH", "Byo_bZRtB", "HJeJEM-0tS", "iclr_2020_BkxthxHYvr", "iclr_2020_BkxthxHYvr", "iclr_2020_BkxthxHYvr" ]
iclr_2020_Hyg53gSYPB
Defense against Adversarial Examples by Encoder-Assisted Search in the Latent Coding Space
Deep neural networks have been shown to be vulnerable to crafted adversarial perturbations, which poses serious safety problems. To address this, we propose AE-GAN+sr, a framework for purifying input images by searching for the closest natural reconstruction with little computation. We first build a reconstruction network, AE-GAN, which adapts an auto-encoder by introducing an adversarial loss into the objective function. In this way, we enhance the generative ability of the decoder and preserve the abstraction ability of the encoder to form a self-organized latent space. At inference time, given an input, we start a search process in the latent space that aims to find the reconstruction closest to the given image on the distribution of normal data. The encoder provides a good starting point for the search, which saves substantial computation. Experiments show that our method is robust against various attacks and can reach comparable or even better performance than similar methods with far less computation.
reject
The paper proposes a defense for adversarial attacks based on autoencoders that tries to find the closest point to the natural image in the output span of the decoder and "purify" the adversarial example. There were concerns about the work being too incremental over DefenseGAN and about empirical evaluation of the defense. It is crucial to test the defense methods against best available attacks to establish the effectiveness. Authors should also discuss and consider evaluating their method against the attack proposed in https://arxiv.org/pdf/1712.09196.pdf that claims to greatly reduce the defense accuracy of DefenseGAN.
train
[ "B1lfFS_qFr", "BygTjO82iB", "BJgXYMU3ir", "Byxhy4L3ir", "SJleI4SniB", "rkxeACN3jH", "BJgDflrhjr", "rygeKoE2iS", "H1lFTg4w9B", "Bkxa-E0hcr", "rJxMJaDpcr", "r1eK-R1aOH", "HklJqt5huS", "H1lYkuFiDB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "Summary: This paper proposes AE-GAN+sr, an auto-encoder based GAN for equipping neural networks with better defenses against adversarial attacks. The authors evaluate their method on black-box attacks, white-box attacks, and gray-box attacks on MNIST and Fashion-MNIST, and show decent empirical results when compar...
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 3, 6, -1, -1, -1 ]
[ 1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, -1, -1, -1 ]
[ "iclr_2020_Hyg53gSYPB", "r1eK-R1aOH", "H1lFTg4w9B", "H1lFTg4w9B", "Bkxa-E0hcr", "B1lfFS_qFr", "B1lfFS_qFr", "rJxMJaDpcr", "iclr_2020_Hyg53gSYPB", "iclr_2020_Hyg53gSYPB", "iclr_2020_Hyg53gSYPB", "HklJqt5huS", "H1lYkuFiDB", "iclr_2020_Hyg53gSYPB" ]
iclr_2020_r1gc3lBFPH
Keyword Spotter Model for Crop Pest and Disease Monitoring from Community Radio Data
In societies with well-developed internet infrastructure, social media is the leading medium of communication for various social issues, especially in breaking-news situations. In rural Uganda, however, public community radio is still a dominant means of news dissemination. Community radio gives an audience to the general public, especially to individuals living in rural areas, and thus plays an important role in giving a voice to those living in the broadcast area. It is an avenue for participatory communication and a tool relevant to both economic and social development. This is supported by the rise to ubiquity of mobile phones providing access to phone-in or text-in talk shows. In this paper, we describe an approach to analysing readily available community radio data with machine learning-based speech keyword spotting techniques. We identify keywords of interest related to agriculture and build models to automatically identify these keywords in audio streams. Our contribution through these techniques is a cost-efficient and effective way to monitor food security concerns, particularly in rural areas. Through keyword spotting and radio talk show analysis, issues such as crop diseases, pests, drought and famine can be captured and fed into an early warning system for stakeholders and policy makers.
reject
Main summary: Design an effective and economical model that spots keywords about pests and disease in community radio data in Luganda and English. Discussions: all reviewers vote to reject the paper, due to lack of generalizability; the training and evaluation discussion needs work. Recommendation: Reject
train
[ "BklSdgXQtH", "SyeDlJt6FB", "Skeu3xh0KH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Overview\n\nThis paper presents a very interesting application of speech keyword spotting techniques; the aim is to listen to continuous streams of community radio in Uganda in order to spot keywords of interest related to agriculture to monitor food security concerns in rural areas. The lack of internet infrastru...
[ 1, 1, 1 ]
[ 5, 3, 1 ]
[ "iclr_2020_r1gc3lBFPH", "iclr_2020_r1gc3lBFPH", "iclr_2020_r1gc3lBFPH" ]
iclr_2020_rklj3gBYvH
NORML: Nodal Optimization for Recurrent Meta-Learning
Meta-learning is an exciting and powerful paradigm that aims to improve the effectiveness of current learning systems. By formulating the learning process as an optimization problem, a model can learn how to learn while requiring significantly less data or experience than traditional approaches. Gradient-based meta-learning methods aim to do just that; however, recent work has shown that the effectiveness of these approaches is primarily due to feature reuse and has very little to do with priming the system for rapid learning (learning to make effective weight updates on unseen data distributions). This work introduces Nodal Optimization for Recurrent Meta-Learning (NORML), a novel meta-learning framework where an LSTM-based meta-learner performs neuron-wise optimization on a learner for efficient task learning. Crucially, the number of meta-learner parameters needed in NORML increases linearly with the number of learner parameters, allowing NORML to potentially scale to learner networks with very large numbers of parameters. While NORML also benefits from feature reuse, it is shown experimentally that the meta-learner LSTM learns to make effective weight updates using information from previous data points and update steps.
reject
The paper proposes a LSTM-based meta-learning approach that learns how to update each neuron in another model for best few-shot learning performance. The reviewers agreed that this is a worthwhile problem and the approach has merits, but that it is hard to judge the significance of the work, given limited or unclear novelty compared to the work of Ravi & Larochelle (2017) and a lack of fair baseline comparisons. I recommend rejecting the paper for now, but encourage the authors to take the reviewers' feedback into account and submit to another venue.
train
[ "ByeW_VS5Kr", "rkgxcuLcYr", "H1eDLjf4cr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This submission proposes NORML, a meta-learning method that 1) learns initial parameters for a base model that leads to good few-shot learning performance and 2) where a recurrent neural network (LSTM) is used to control the learning updates on a small support set for a given task. The method is derived specifical...
[ 1, 1, 1 ]
[ 5, 4, 4 ]
[ "iclr_2020_rklj3gBYvH", "iclr_2020_rklj3gBYvH", "iclr_2020_rklj3gBYvH" ]
iclr_2020_Hkxi2gHYvH
Predictive Coding for Boosting Deep Reinforcement Learning with Sparse Rewards
While recent progress in deep reinforcement learning has enabled robots to learn complex behaviors, tasks with long horizons and sparse rewards remain an ongoing challenge. In this work, we propose an effective reward shaping method through predictive coding to tackle sparse reward problems. By learning predictive representations offline and using these representations for reward shaping, we gain access to reward signals that understand the structure and dynamics of the environment. In particular, our method achieves better learning by providing reward signals that 1) understand environment dynamics, 2) emphasize the features most useful for learning, and 3) resist noise in learned representations through reward accumulation. We demonstrate the usefulness of this approach in domains ranging from robotic manipulation to navigation, and we show that reward signals produced through predictive coding are as effective for learning as hand-crafted rewards.
reject
The paper proposes to use the representation learned via CPC to do reward shaping via clustering the embedding and providing a reward based on the distance from the goal. The reviewers point out some conceptual issues with the paper, the key one being that the method is contingent on a random policy being able to reach the goal, which is not true for difficult environments that the paper claims to be motivated by. One reviewer noted limited experiment runs and lack of comparisons with other reward shaping methods. I recommend rejection, but hope the authors find the feedback helpful and submit a future version elsewhere.
test
[ "rkgOPynosB", "Hkg0pRsjiS", "BklKzCojsS", "Byeo7VT3tS", "SkgcmjEaFH", "HJg3kV22FS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your insightful review. Below are our responses to your questions:\n\n1. How is the ‘success rate’ computed (e.g. in figure 7 and table 1).\n\n“Success” in the grid world domains means the agent is able to reach the goal position (randomly set each run) within a certain time step limit (typi...
[ -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "HJg3kV22FS", "Byeo7VT3tS", "SkgcmjEaFH", "iclr_2020_Hkxi2gHYvH", "iclr_2020_Hkxi2gHYvH", "iclr_2020_Hkxi2gHYvH" ]
iclr_2020_Syx33erYwH
ASYNCHRONOUS MULTI-AGENT GENERATIVE ADVERSARIAL IMITATION LEARNING
Imitation learning aims to inversely learn a policy from expert demonstrations, which has been extensively studied in the literature for both single-agent setting with Markov decision process (MDP) model, and multi-agent setting with Markov game (MG) model. However, existing approaches for general multi-agent Markov games are not applicable to multi-agent extensive Markov games, where agents make asynchronous decisions following a certain order, rather than simultaneous decisions. We propose a novel framework for asynchronous multi-agent generative adversarial imitation learning (AMAGAIL) under general extensive Markov game settings, and the learned expert policies are proven to guarantee subgame perfect equilibrium (SPE), a more general and stronger equilibrium than Nash equilibrium (NE). The experiment results demonstrate that compared to state-of-the-art baselines, our AMAGAIL model can better infer the policy of each expert agent using their demonstration data collected from asynchronous decision-making scenarios (i.e., extensive Markov games).
reject
This paper extends multi-agent imitation learning to extensive-form games. There is a long discussion between reviewer #3 and the authors on the difference between Markov Games (MGs) and Extensive-Form Games (EFGs). The core of the discussion is on whether methods developed under the MG formalism (where agents take actions simultaneously) naturally can be applied to the EFG problem setting (where agents can take actions asynchronously). Despite the long discussion, the authors and reviewer did not come to an agreement on this point. Given that it is a crucial point for determining the significance of the contribution, my decision is to decline the paper. I suggest that the authors add a detailed discussion on why MG methods cannot be applied to EFGs in the way suggested by reviewer #3 in the next version of this work and then resubmit.
train
[ "rygutwUsjr", "Syx_TR9toB", "r1xeSirOiH", "ByxxFiMPsH", "SylxzBS7iB", "HylJxYmUsH", "SyxV3t78ir", "Syl8OYmLjB", "ryesBumLoS", "r1e_Q73NsS", "rkeEN05Vor", "SkguwdqNoS", "SJxncUYNsS", "BJgnR9OVir", "rklpwou4iH", "HJegpGI7iB", "HJgcPaH7jH", "HJxEBSSXiS", "BJeioPE7sH", "rJxTbuHqKr"...
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_...
[ "Acknowledging here that I read the updated paper.\n\nI appreciate that the terminology has been cleared up, but unfortunately my issues with this work are not about terminology. \n\nThe authors are claiming that MAGAIL would model a turn based game in a grid world by assuming that an agent is \"standing still\" wh...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 6, 6, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 1, -1, -1, -1 ]
[ "Syx_TR9toB", "ByxxFiMPsH", "ByxxFiMPsH", "SyxV3t78ir", "HJeIPYyAtB", "HJeIPYyAtB", "r1e_Q73NsS", "rJxTbuHqKr", "SkgrhmORFB", "rkeEN05Vor", "BJgnR9OVir", "SJxncUYNsS", "rklpwou4iH", "HJegpGI7iB", "HJegpGI7iB", "HJgcPaH7jH", "rJxTbuHqKr", "SylxzBS7iB", "SkgrhmORFB", "iclr_2020_S...
iclr_2020_SJgn3lBtwH
Re-Examining Linear Embeddings for High-dimensional Bayesian Optimization
Bayesian optimization (BO) is a popular approach to optimize resource-intensive black-box functions. A significant challenge in BO is to scale to high-dimensional parameter spaces while retaining sample efficiency. A solution considered in previous literature is to embed the high-dimensional parameter space into a lower-dimensional manifold, often a random linear embedding. In this paper, we identify several crucial issues and misconceptions about the use of linear embeddings for BO. We thoroughly study and analyze the consequences of using linear embeddings and show that some of the design choices in current approaches adversely impact their performance. Based on this new theoretical understanding we propose ALEBO, a new algorithm for high-dimensional BO via linear embeddings that outperforms state-of-the-art methods on a range of problems.
reject
This paper explores the practice of using lower-dimensional embeddings to perform Bayesian optimization on high dimensional problems. The authors identify several issues with performing such an optimization on a lower-dimensional projection and propose solutions leading to better empirical performance of the optimization routine. Overall the reviewers found the work well written and enjoyable. However, the reviewers were concerned primarily about the connection to existing literature (R2) and the empirical analysis (R1, R3). The authors claim that their method outperforms state-of-the-art on a range of problems but the reviewers did not feel there was sufficient empirical evidence to back up this claim. Unfortunately, as such the paper is not quite ready for publication. The authors claim to have significantly expanded the experiments in the response period, however, which will likely make it much stronger for a future submission.
train
[ "S1xgsdcnjS", "B1l_lGq3iH", "HJlVOb5njB", "r1xHz3YnoH", "Syli-Rq2Fr", "BJlTr1daYB", "SJg7FudRYr" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your in depth review of the paper. We especially appreciate the thoughts around improving aspects of modeling and interesting extensions. Some of the concerns with the paper are around modeling decisions. We made clarifying edits in the paper, and address each question below. The primary concern is a...
[ -1, -1, -1, -1, 3, 1, 3 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "Syli-Rq2Fr", "SJg7FudRYr", "BJlTr1daYB", "iclr_2020_SJgn3lBtwH", "iclr_2020_SJgn3lBtwH", "iclr_2020_SJgn3lBtwH", "iclr_2020_SJgn3lBtwH" ]
iclr_2020_r1x63grFvH
Limitations for Learning from Point Clouds
In this paper we prove new universal approximation theorems for deep learning on point clouds that do not assume fixed cardinality. We do this by first generalizing the classical universal approximation theorem to general compact Hausdorff spaces and then applying this to the permutation-invariant architectures presented in 'PointNet' (Qi et al) and 'Deep Sets' (Zaheer et al). Moreover, though both architectures operate on the same domain, we show that the constant functions are the only functions they can mutually uniformly approximate. In particular, DeepSets architectures cannot uniformly approximate the diameter function but can uniformly approximate the center of mass function but it is the other way around for PointNet.
reject
The present paper establishes uniform approximation theorems (UATs) for PointNet and DeepSets that do not fix the cardinality of the input set. Two non-experts read the paper and came away not understanding what this exercise has taught us and why the weakening of the hypotheses was important. The authors made no attempt to argue these points in their rebuttals, so I went looking for the answer in their revisions, but did not find it after scanning through the paper. I think a paper like this needs to explain what is gained, what obstructions earlier approaches met, and why the current techniques sidestep those. One of the reviewers felt that the fixed-cardinality assumption was mild. I'm really not sure why the authors didn't attack this idea. Maybe it is mild in some technical sense? What I read of the paper seemed excellent in terms of style and clarity. I think the paper simply needs to make a better case that it is not merely an exercise in topology. I think the result here is publishable on its own grounds, but for the paper to effectively communicate those findings, the authors should have revised it to address these issues. They chose not to, and so I recommend ICLR take a pass. Once the authors revise the framing and scope/impact, provided it doesn't sound trivial, I think it'll be ready for publication.
train
[ "rJxCrFiDKH", "BJl_MskAFS", "Syx6NYS3oB", "Hyx7lxrhsB", "H1g7UpV3iH", "B1eCTtV2oB", "rJgDrkcXqr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "PointNet (Qi et al, 2017) and Deep sets (Zaheer et al, 2017) have allowed to use deep architectures that deal with point clouds as inputs, taking into account the invariance in the ordering of points. However, existing results on their approximation abilities are limited to fixed cardinalities. This paper removes ...
[ 8, 3, -1, -1, -1, -1, 3 ]
[ 4, 1, -1, -1, -1, -1, 1 ]
[ "iclr_2020_r1x63grFvH", "iclr_2020_r1x63grFvH", "BJl_MskAFS", "rJgDrkcXqr", "rJxCrFiDKH", "iclr_2020_r1x63grFvH", "iclr_2020_r1x63grFvH" ]
iclr_2020_S1xRnxSYwS
Goten: GPU-Outsourcing Trusted Execution of Neural Network Training and Prediction
Before we can see worldwide collaborative efforts in training machine-learning models or widespread deployments of prediction-as-a-service, we need to devise an efficient privacy-preserving mechanism that guarantees the privacy of all stakeholders (data contributors, model owner, and queriers). Slalom (ICLR ’19) preserves privacy only for prediction by leveraging both a trusted environment (e.g., Intel SGX) and an untrusted GPU. The challenges of enabling private training are explicitly left open – its pre-computation technique does not hide the model weights and fails to support the dynamic quantization needed for the large changes in weight magnitudes during training. Moreover, it is not a true outsourcing solution since the (offline) pre-computation for a job takes as much time as computing the job locally in SGX, i.e., it only works until all pre-computations are exhausted. We propose Goten, a privacy-preserving framework supporting both training and prediction. We tackle all the above challenges by proposing a secure outsourcing protocol that 1) supports dynamic quantization, 2) hides the model weights from the GPU, and 3) performs better than a pure-SGX solution even if we perform the pre-computation online. Our solution leverages a non-colluding assumption, which is often employed by cryptographic solutions aiming for practical efficiency (IEEE SP ’13, Usenix Security ’17, PoPETs ’19). We use three servers, which can be reduced to two if the pre-computation is done offline. Furthermore, we implement tailor-made memory-aware measures to minimize the overhead when the SGX memory limit is exceeded (cf. EuroSys ’17, Usenix ATC ’19). Compared to a pure-SGX solution, our experiments show that Goten can speed up linear-layer computations in VGG by up to 40×, with an overall speedup of 8.64× on VGG11.
reject
This paper proposes a framework for privacy-preserving training of neural networks within a Trusted Execution Environment (TEE) such as Intel SGX. The reviewers found that this is a valuable research directions, but found that there were significant flaws in the experimental setup that need to be addressed. In particular, the paper does not run all the experiments in the same setup, which leads to the use of scaling factor in some cases. The reviewers found that this made it difficult to make sense of the results. The writing of this paper should be streamlined, along with the experiments before resubmission.
train
[ "S1gAleIhiS", "H1x7p2Nnjr", "rklXMaVnsH", "rygS-TNhsS", "HkekgaE3oB", "BklsAhN3sB", "SJgFs242jH", "rkgoPyQVFr", "HkguQA3BYB", "BkehxucWcH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I thank the authors for the detailed rebuttal.\nNevertheless, I think that many misconceptions remain, which I describe below:\n\n* On the LAN vs WAN setting:\nWhile one issue in a WAN is bandwidth, the much bigger problem is *latency*. After every layer, the server hosting the SGX enclave has to send data to the ...
[ -1, -1, -1, -1, -1, -1, -1, 1, 6, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 1, 1 ]
[ "iclr_2020_S1xRnxSYwS", "SJgFs242jH", "rygS-TNhsS", "HkekgaE3oB", "BklsAhN3sB", "H1x7p2Nnjr", "iclr_2020_S1xRnxSYwS", "iclr_2020_S1xRnxSYwS", "iclr_2020_S1xRnxSYwS", "iclr_2020_S1xRnxSYwS" ]
iclr_2020_Bke02gHYwB
Learn Interpretable Word Embeddings Efficiently with von Mises-Fisher Distribution
Word embedding plays a key role in various natural language processing tasks. However, the dominant word embedding models don't explain what information is carried in the resulting embeddings. To generate interpretable word embeddings, we propose to replace each word vector with a probability density distribution. The insight here is that if we regularize the mixture distribution of all words to be uniform, then we can prove that the inner product between word embeddings represents the point-wise mutual information between words. Moreover, our model can also handle polysemy: each word's probability density distribution will generate different vectors for its various meanings. We have evaluated our model on several word similarity tasks. Results show that our model consistently outperforms the dominant models on these tasks.
reject
The paper presents an approach to learning interpretable word embeddings. The reviewers put this in the lower half of the submissions. One reason seems to be the size of the training corpora used in the experiments, as well as the limited number of experiments; another that the claim of interpretability seems over-stated. There's also a lack of comparison to related work. I also think it would be interesting to move beyond the standard benchmarks - and either use word embeddings downstream or learn word embeddings for multiple languages [you should do this, regardless] and use Procrustes analysis or the like to learn a mapping: A good embedding algorithm should induce more linearly alignable embedding spaces. NB: While the authors cite other work by these authors, [0] seems relevant, too. Other related work: [1-4]. [0] https://www.aclweb.org/anthology/Q15-1016.pdf [1] https://www.aclweb.org/anthology/Q16-1020.pdf [2] https://www.aclweb.org/anthology/W19-4329.pdf [3] https://www.aclweb.org/anthology/D17-1198/ [4] https://www.aclweb.org/anthology/D15-1183.pdf
train
[ "rkxk7rD6tS", "ryg1XGKOcB", "Byg0bMQ5YS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper addresses the problem the problem in word embeddings where \"word-word\" or \"context-context\" inner products are relied upon in practice when the embeddings are usually only optimized for good properties of \"word-context\" inner products. The new approach to training word embeddings addresses interpre...
[ 8, 1, 1 ]
[ 3, 3, 4 ]
[ "iclr_2020_Bke02gHYwB", "iclr_2020_Bke02gHYwB", "iclr_2020_Bke02gHYwB" ]
iclr_2020_HygkpxStvr
Weakly-Supervised Trajectory Segmentation for Learning Reusable Skills
Learning useful and reusable skills, or sub-task primitives, is a long-standing problem in sensorimotor control. This is challenging because it is hard to define what constitutes a useful skill. Instead of direct manual supervision, which is tedious and prone to bias, our goal in this work is to extract reusable skills from a collection of human demonstrations collected directly for several end-tasks. We propose a weakly-supervised approach for trajectory segmentation following the classic work on multiple instance learning. Our approach is end-to-end trainable, works directly from high-dimensional input (e.g., images), and only requires knowledge of which skill primitives are present during training, without any need for segmentation or ordering of primitives. We evaluate our approach via rigorous experimentation across four environments ranging from simulation to real-world robots, procedurally generated to human-collected demonstrations, and discrete to continuous action spaces. Finally, we leverage the generated skill segmentation to demonstrate preliminary evidence of zero-shot transfer to new combinations of skills. Result videos at https://sites.google.com/view/trajectory-segmentation/
reject
The authors present a multiple instance learning-based approach that uses weak supervison (of which skills appear in any given trajectory) to automatically segment a set of skills from demonstrations. The reviewers had significant concerns about the significance and performance of the method, as well as the metrics used for analysis. Most notably, neither the original paper nor the rebuttal provided a sufficient justification or fix for the lack of analysis beyond accuracy scores (as opposed to confusion matrices, precision/recall, etc), which leaves the contribution and claims of the paper unclear. Thus, I recommend rejection at this time.
train
[ "B1eCKJo2or", "BJe6m0c2oH", "Bygjj6choH", "r1lG65CZKB", "ryx5EZ5htB", "BJxlyIDAYB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the constructive feedback and are glad that the reviewer finds our problem statement proposal interesting and “of clear value”. We address the concerns in detail below.\n\nR3: “Training such sub-skills from weakly supervised skill annotations has been successfully done by Shiarlis et al. ...
[ -1, -1, -1, 3, 3, 1 ]
[ -1, -1, -1, 4, 3, 4 ]
[ "BJxlyIDAYB", "ryx5EZ5htB", "r1lG65CZKB", "iclr_2020_HygkpxStvr", "iclr_2020_HygkpxStvr", "iclr_2020_HygkpxStvr" ]
iclr_2020_BkexaxBKPB
Generative Adversarial Nets for Multiple Text Corpora
Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. For text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there are two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpus. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.
reject
The general consensus amongst the reviewers is that this paper is not quite ready for publication. The reviewers raised several issues with your paper, which I hope will help you as you work towards finding a home for this work.
val
[ "S1eUbIW3YH", "H1gUv3cg5r", "H1eZpcn45H" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work proposes two models: (1) A model to learn word embeddings from different corpus. (2) A model to generate robust bag-of-words document embeddings. This topic may not be novel enough, considering the current development in GAN and domain adaptation. Some questions:\n-\tFor the word embedding model, why it ...
[ 3, 3, 1 ]
[ 3, 4, 3 ]
[ "iclr_2020_BkexaxBKPB", "iclr_2020_BkexaxBKPB", "iclr_2020_BkexaxBKPB" ]
iclr_2020_Bkel6ertwS
Learning DNA folding patterns with Recurrent Neural Networks
The recent expansion of machine learning applications to molecular biology has made a significant contribution to our understanding of biological systems, and genome functioning in particular. Technological advances have enabled the collection of large epigenetic datasets, including information about various DNA binding factors (ChIP-Seq) and DNA spatial structure (Hi-C). Several studies have confirmed the correlation between DNA binding factors and Topologically Associating Domains (TADs) in DNA structure. However, the information about physical proximity represented by genomic coordinates has not yet been used to improve prediction models. In this research, we focus on machine learning methods for predicting the folding patterns of DNA in the classical model organism Drosophila melanogaster. The paper considers linear models with four types of regularization, Gradient Boosting, and Recurrent Neural Networks for predicting chromatin folding patterns from epigenetic marks. The bidirectional LSTM RNN model outperformed all other models and achieved the best prediction scores. This demonstrates the value of complex models and the importance of the memory of sequential DNA states for chromatin folding. We identify informative epigenetic features, leading to further conclusions about their biological significance.
reject
The authors consider the problem of predicting DNA folding patterns. They use a range of simple, linear models and find that a bi-LSTM architecture yielded the best performance. This paper is below the acceptance threshold. Reviewers pointed out strong similarity to previously published work. Furthermore, the manuscript lacked clarity, leaving details about the experimental setup uncertain.
train
[ "HygflpY3sH", "r1e_byc2sH", "H1lpF642sS", "rkeXVCKhsH", "HylmAQEc_B", "ryg8ROfaYH", "rJe0SjlScS", "HylET_-0OB", "B1gJ-kcNdH" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Thank you very much for your detailed review! Comments below:\n\n1. The main focused of this work is to predict the information that characterizes the 3D chromatin structure instead of the HI-C full reconstruction. We do not use the Hi-C map as input to our models, on the other hand, we are interested in exploring...
[ -1, -1, -1, -1, 1, 3, 3, -1, -1 ]
[ -1, -1, -1, -1, 5, 3, 4, -1, -1 ]
[ "HylmAQEc_B", "ryg8ROfaYH", "iclr_2020_Bkel6ertwS", "rJe0SjlScS", "iclr_2020_Bkel6ertwS", "iclr_2020_Bkel6ertwS", "iclr_2020_Bkel6ertwS", "B1gJ-kcNdH", "iclr_2020_Bkel6ertwS" ]
iclr_2020_B1xbTlBKwB
Measuring Numerical Common Sense: Is A Word Embedding Approach Effective?
Numerical common sense (e.g., ``a person with a height of 2m is very tall'') is essential when deploying artificial intelligence (AI) systems in society. To predict ranges of small and large values for a given target noun and unit, previous studies have implemented a rule-based method that processed numeric values appearing in natural language by using template matching. To obtain numerical knowledge, crawled textual data from web pages are frequently used as the input to the above method. Although this is an important task, few studies have addressed the availability of numerical common sense extracted from the corresponding textual information. To this end, we first used a crowdsourcing service to obtain sufficient data for a subjective agreement on numerical common sense. Second, to examine whether such common sense is captured by current word embeddings, we examined the performance of a regressor trained on the obtained data. In comparison with humans, the performance of an automatic relevance determination regression model was good, particularly when the unit was yen (a maximum correlation coefficient of 0.57). Although the regression approaches with word embeddings do not predict values with high correlation coefficients overall, these word-embedding methods could potentially contribute to constructing numerical common sense for AI deployment.
reject
The authors tackle an interesting and important problem, developing numerical common-sense. They use a crowdsourcing service to collect a dataset and use regression from word embeddings to numerical common sense. Reviewers were concerned with the size and quality of the dataset, the quality of the prediction methods used, and the analysis of the experimental results. Given the many concerns, I recommend rejecting the paper, but I encourage the authors to revise the paper to address the concerns and resubmit to another venue.
train
[ "S1gt14y3YS", "Skl7ei7gcH", "SyxG5xWcqB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I have read the author response. Thank you for responding to my questions.\n\nThis paper aims to predict typical “common sense” values of quantities using word embeddings. It includes the construction of a data set and some experiments with regression models. The general direction of this work is worthy of stud...
[ 1, 3, 1 ]
[ 4, 3, 1 ]
[ "iclr_2020_B1xbTlBKwB", "iclr_2020_B1xbTlBKwB", "iclr_2020_B1xbTlBKwB" ]
iclr_2020_HJgzpgrYDr
Learning to Reason: Distilling Hierarchy via Self-Supervision and Reinforcement Learning
We present a hierarchical planning and control framework that enables an agent to perform various tasks and adapt to a new task flexibly. Rather than learning an individual policy for each particular task, the proposed framework, DISH, distills a hierarchical policy from a set of tasks by self-supervision and reinforcement learning. The framework is based on the idea of latent variable models that represent high-dimensional observations using low-dimensional latent variables. The resulting policy consists of two levels of hierarchy: (i) a planning module that reasons a sequence of latent intentions that would lead to an optimistic future and (ii) a feedback control policy, shared across the tasks, that executes the inferred intention. Because the reasoning is performed in a low-dimensional latent space, the learned policy can immediately be used to solve or adapt to new tasks without additional training. We demonstrate that the proposed framework can learn compact representations (3-dimensional latent states for a 90-dimensional humanoid system) while solving a small number of imitation tasks, and the resulting policy is directly applicable to other types of tasks, i.e., navigation in cluttered environments.
reject
The authors present a self-supervised framework for learning a hierarchical policy in reinforcement learning tasks that combines a high-level planner over learned latent goals with a shared low-level goal-completing control policy. The reviewers had significant concerns about both problem positioning (w.r.t. existing work) and writing clarity, as well as the fact that all comparative experiments were ablations, rather than comparisons to prior work. While the reviewers agreed that the authors reasonably resolved issues of clarity, there was not agreement that concerns about positioning w.r.t. prior work and experimental comparisons were sufficiently resolved. Thus, I recommend to reject this paper at this time.
train
[ "H1gZ-iP3sB", "rkl_IbunsB", "BJxY32vjjH", "HJlhYAV8iH", "BygMuAE8jS", "B1exE3ELiH", "BkxYlaN8jB", "Skee2aV8sH", "r1lStIxijB", "Byl6ccPwFH", "Hyx-WQZvsr", "r1erI4tctH", "SJgWiAJCFr" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks a lot for your constructive comments. We respect your opinion, it really helped us to figure out our contribution more clearly.\n\nPrimitive labels $\\mathbf{h}\\in\\{-1, 0, 1\\}$ are only used for the initial policy learning ($L=1$) and such labels act just as prior for the internal model learning. For la...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, 1, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, 4, 4 ]
[ "BJxY32vjjH", "iclr_2020_HJgzpgrYDr", "r1erI4tctH", "Byl6ccPwFH", "Byl6ccPwFH", "SJgWiAJCFr", "r1erI4tctH", "r1erI4tctH", "Hyx-WQZvsr", "iclr_2020_HJgzpgrYDr", "Byl6ccPwFH", "iclr_2020_HJgzpgrYDr", "iclr_2020_HJgzpgrYDr" ]
iclr_2020_ryx4TlHKDS
EXACT ANALYSIS OF CURVATURE CORRECTED LEARNING DYNAMICS IN DEEP LINEAR NETWORKS
Deep neural networks exhibit complex learning dynamics due to the highly non-convex loss landscape, which causes slow convergence and vanishing gradient problems. Second order approaches, such as natural gradient descent, mitigate such problems by neutralizing the effect of potentially ill-conditioned curvature on the gradient-based updates, yet precise theoretical understanding of how such curvature correction affects the learning dynamics of deep networks has been lacking. Here, we analyze the dynamics of training deep neural networks under a generalized family of natural gradient methods that apply curvature corrections, and derive precise analytical solutions. Our analysis reveals that curvature-corrected update rules preserve many features of gradient descent, such that the learning trajectory of each singular mode in natural gradient descent follows precisely the same path as in gradient descent, while only accelerating the temporal dynamics along the path. We also show that layer-restricted approximations of the natural gradient, which are widely used in most second order methods (e.g. K-FAC), can significantly distort the learning trajectory into highly diverging dynamics that differ substantially from the true natural gradient, which may lead to undesirable network properties. We also introduce the fractional natural gradient, which applies partial curvature correction, and show that it provides most of the benefit of full curvature correction in terms of convergence speed, with the additional benefits of superior numerical stability and neutralization of vanishing/exploding gradient problems, which hold true also in layer-restricted approximations.
reject
This paper aims to study the effect of curvature correction techniques on training dynamics. The focus is on understanding how natural gradient based methods affect the training dynamics of deep linear networks. The main conclusion of the analysis is that curvature correction does not fundamentally affect the path of convergence but rather accelerates convergence. They also show that layer correction techniques alone do not suffice. In the discussion the reviewers raised concerns about extrapolating too much based on linear networks and also the lack of a cohesive literature review. One reviewer also mentioned that there is not enough technical detail. These issues were partially addressed in the response. I think the topic of the paper is interesting and timely. However, I concur with Reviewer #2 that there are still lots of missing details and the connection with the nonlinear case is not clear (however the latter is not strictly necessary in my opinion if the rest of the paper is better written). As a result I think the paper in its current form is not ready for publication.
val
[ "S1ehohihsB", "HJgZu9j2oH", "HJgx6PjhoB", "r1ltmPsniB", "Syx3zXsnsr", "Bkgq6wO6KS", "BJxrxSCgqS", "BJx3CVotqH" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comment on scaling up the analysis.\n\nTo numerically confirm the predictions of our theoretical results, our simulation required estimating and inverting the full Hessian of the system (without approximations). Also, the continuous-time dynamics analysis required using very small learning rate ...
[ -1, -1, -1, -1, -1, 6, 6, 1 ]
[ -1, -1, -1, -1, -1, 1, 3, 3 ]
[ "BJxrxSCgqS", "Bkgq6wO6KS", "r1ltmPsniB", "BJx3CVotqH", "iclr_2020_ryx4TlHKDS", "iclr_2020_ryx4TlHKDS", "iclr_2020_ryx4TlHKDS", "iclr_2020_ryx4TlHKDS" ]
iclr_2020_S1lBTerYwH
Generalized Zero-shot ICD Coding
The International Classification of Diseases (ICD) is a list of classification codes for diagnoses. Automatic ICD coding is in high demand as manual coding can be labor-intensive and error-prone. It is a multi-label text classification task with an extremely long-tailed label distribution, making it difficult to perform fine-grained classification on both frequent and zero-shot codes at the same time. In this paper, we propose a latent feature generation framework for generalized zero-shot ICD coding, where we aim to improve the prediction on codes that have no labeled data without compromising the performance on seen codes. Our framework generates pseudo features conditioned on the ICD code descriptions and exploits the ICD code hierarchical structure. To guarantee the semantic consistency between the generated features and real features, we reconstruct the keywords in the input documents that are related to the conditioned ICD codes. To the best of our knowledge, this work is the first to propose an adversarial generative model for generalized zero-shot learning on multi-label text classification. Extensive experiments demonstrate the effectiveness of our approach. On the public MIMIC-III dataset, our methods improve the F1 score from nearly 0 to 20.91% for the zero-shot codes, and increase the AUC score by 3% (absolute improvement) over the previous state of the art. We also show that the framework improves the performance on few-shot codes.
reject
This paper proposes a method to do zero-shot ICD coding, which involves assigning natural language labels (ICD codes) to input text. This is an important practical problem in healthcare, and it is not straightforward to solve, because many ICD codes have no or very few training examples due to the long distribution tail. The authors adapt a GAN-based technique previously used in vision to solve this problem. All of the reviewers agree that the paper is well written and well executed, and that the results are good. However, the reviewers have expressed concerns about the novelty of the GAN adaptation step, and left this paper very much borderline based on the scores it received. Due to the capacity restrictions I therefore have to recommend rejection, however I hope that the authors resubmit elsewhere.
train
[ "S1gmBDhjsr", "Bkeaav2isr", "HkxGbwhsiH", "B1gDaU2isr", "HkxLy93pFB", "rJeTy-FAYS", "Sygm8F2p5S" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank Reviewer 1 for the insightful comments and would like to address the specific questions below.\n\nComment #1. Description of the complete system:\nThanks. We have added more detailed description of the complete system in the revised version. Please kindly refer to the second paragraph of Section 1 and Sec...
[ -1, -1, -1, -1, 3, 6, 6 ]
[ -1, -1, -1, -1, 3, 4, 1 ]
[ "HkxLy93pFB", "iclr_2020_S1lBTerYwH", "rJeTy-FAYS", "Sygm8F2p5S", "iclr_2020_S1lBTerYwH", "iclr_2020_S1lBTerYwH", "iclr_2020_S1lBTerYwH" ]
iclr_2020_rJe8pxSFwr
End-to-end learning of energy-based representations for irregularly-sampled signals and images
For numerous domains, including for instance earth observation, medical imaging, and astrophysics, available image and signal datasets often involve irregular space-time sampling patterns and large missing-data rates. These sampling properties are a critical issue for applying state-of-the-art learning-based models (e.g., auto-encoders, CNNs) so as to fully benefit from the available large-scale observations and reach breakthroughs in the reconstruction and identification of processes of interest. In this paper, we address the end-to-end learning of representations of signals, images and image sequences from irregularly-sampled data, {\em i.e.} when the training data involve missing data. From an analogy to the Bayesian formulation, we consider energy-based representations. Two energy forms are investigated: one derived from auto-encoders and one relating to Gibbs energies. The learning stage of these energy-based representations (or priors) involves a joint interpolation issue, which resorts to solving an energy minimization problem under observation constraints. Using a neural-network-based implementation of the considered energy forms, we can state an end-to-end learning scheme from irregularly-sampled data. We demonstrate the relevance of the proposed representations for different case-studies: namely, multivariate time series, 2D images and image sequences.
reject
This work looks at ways to fill in incomplete data through two different energy terms. Reviewers find the work interesting; however, it is very poorly written and nowhere near ready for publication. This comes on top of poorly stated motivation and insufficient comparison to prior work. The authors have chosen not to answer the reviewers' comments. We recommend rejection.
train
[ "rJx-I2HJ9S", "Byetnaz8cr", "B1lNNWYdcr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an end-to-end learning framework for interpolation problems, motivated by problems such as irregularly-sampled images or time-series.\n\nIt was not clear after reading the paper where the key novelty of the proposal lies. The energy formulations for Uθ, namely an autoencoder and Gibbs model, are...
[ 3, 1, 1 ]
[ 1, 1, 4 ]
[ "iclr_2020_rJe8pxSFwr", "iclr_2020_rJe8pxSFwr", "iclr_2020_rJe8pxSFwr" ]
iclr_2020_rkewaxrtvr
Privacy-preserving Representation Learning by Disentanglement
Deep learning and the latest machine learning technology have heralded an era of success in data analysis. Accompanying the ever-increasing performance, which reaches super-human levels in many areas, is the requirement of amassing more and more data to train these models. Often ignored or underestimated, big data curation is associated with the risk of privacy leakages. The proposed approach seeks to mitigate these privacy issues. In order to sanitize data from sensitive content, we propose to learn a privacy-preserving data representation by disentangling it into public and private parts, with the public part being shareable without privacy infringement. The proposed approach deals with the setting where the private features are not explicit and are estimated through the course of learning. This is particularly appealing when the notion of the sensitive attribute is ``fuzzy''. We showcase feasibility in terms of classification of facial attributes and identity on the CelebA dataset. The results suggest that the private component can be removed both in the case where the downstream task is known a priori (i.e., ``supervised'') and in the case where it is not known a priori (i.e., ``weakly-supervised'').
reject
The paper leverages variational auto-encoders (VAEs) and disentanglement to generate data representations that hide sensitive attributes. The reviewers have identified several issues with the paper, including its false claims or statements about differential privacy, unclear privacy guarantee, and lack of related work discussion. The authors have not directly addressed these issues.
train
[ "HJl6GIKeFB", "HJxNUdZ6Fr", "Hkg5Zu9Q5S", "SyegKnHROH", "Hke4nKuFur" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "PRIVACY-PRESERVING REPRESENTATION LEARNING BY DISENTANGLEMENT\n\nSummary\nThis paper introduces a method to disentanglement the private and public attribute information in representation learning.\n\nStrength:\n1. The idea of introducing the confusion term to disentanglement private and public information seems no...
[ 1, 3, 1, -1, -1 ]
[ 4, 1, 5, -1, -1 ]
[ "iclr_2020_rkewaxrtvr", "iclr_2020_rkewaxrtvr", "iclr_2020_rkewaxrtvr", "Hke4nKuFur", "iclr_2020_rkewaxrtvr" ]
iclr_2020_SJgdpxHFvH
Meta-Learning Initializations for Image Segmentation
While meta-learning approaches that utilize neural network representations have made progress in few-shot image classification, reinforcement learning, and, more recently, image semantic segmentation, the training algorithms and model architectures have become increasingly specialized to the few-shot domain. A natural question that arises is how to develop learning systems that scale from few-shot to many-shot settings while yielding human-level performance in both. One scalable potential approach that does not require ensembling many models or the computational costs of relation networks is to meta-learn an initialization. In this work, we study first-order meta-learning of initializations for deep neural networks that must produce dense, structured predictions given an arbitrary amount of training data for a new task. Our primary contributions include (1) an extension and experimental analysis of first-order model-agnostic meta-learning algorithms (including FOMAML and Reptile) for image segmentation, (2) a formalization of the generalization error of episodic meta-learning algorithms, which we leverage to decrease error on unseen tasks, (3) a novel neural network architecture built for parameter efficiency, which we call EfficientLab, and (4) an empirical study of how meta-learned initializations compare to ImageNet initializations as the training set size increases. We show that meta-learned initializations for image segmentation smoothly transition from canonical few-shot learning problems to larger datasets, outperforming random and ImageNet-trained initializations. Finally, we show both theoretically and empirically that a key limitation of MAML-type algorithms is that, when adapting to new tasks, a single update procedure is used that is not conditioned on the data.
We find that our network, with an empirically estimated optimal update procedure yields state of the art results on the FSS-1000 dataset, while only requiring one forward pass through a single model at evaluation time.
reject
The reviewers reached a consensus that the paper was not ready to be accepted in its current form. The main concerns were in regard to clarity, relatively limited novelty, and a relatively unsatisfying experimental evaluation. Although some of the clarity concerns were addressed during the response period, the other issues still remained, and the reviewers generally agreed that the paper should be rejected.
train
[ "HJgcpxe1qH", "H1gmg3Knir", "SJxiZOAiiH", "S1xgLi0osS", "SkeVF5RoiS", "H1e3cYRsiH", "rkxZZYCiiB", "HklJGPAsoB", "rklbnw5qsr", "BJgryYKpFS", "H1x8O66AFB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to apply MAML-style meta-learning to few-shot semantic segmentation in images. It argues that this type of algorithm may be more computationally-efficient than existing methods and may offer better performance with a higher number of examples. They further propose to perform hyper-parameter sea...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_SJgdpxHFvH", "BJgryYKpFS", "HklJGPAsoB", "H1e3cYRsiH", "H1e3cYRsiH", "H1x8O66AFB", "BJgryYKpFS", "HJgcpxe1qH", "iclr_2020_SJgdpxHFvH", "iclr_2020_SJgdpxHFvH", "iclr_2020_SJgdpxHFvH" ]
iclr_2020_HJeYalBKvr
Attention over Phrases
How should we represent the sentence ``That's the last straw for her''? The answer of self-attention is a weighted sum of the individual words, i.e., semantics=α1Emb(That)+α2Emb('s)+⋯+αnEmb(her). But the weighted sum of ``That's'', ``the'', ``last'', ``straw'' can hardly represent the semantics of the phrase. We argue that phrases play an important role in attention. If we combine some words into phrases, a more reasonable representation with compositions is semantics=α1Emb(That's)+α2Emb(the last straw)+α3Emb(for)+α4Emb(her). While recent studies prefer to use the attention mechanism to represent natural language, few have noticed word compositions. In this paper, we study the problem of representing such compositional attentions over phrases and propose a new attention architecture called HyperTransformer. Besides representing the words of the sentence, we introduce hypernodes to represent the candidate phrases in attention. HyperTransformer has two phases. The first phase attends over all word/phrase pairs, similarly to the standard Transformer. The second phase represents the inductive bias within each phrase. Specially, we incorporate non-linear attention in the second phase; the non-linearity represents the semantic mutations in phrases. The experimental performance is greatly improved: in the WMT16 English-German translation task, the BLEU score increases from 20.90 (by Transformer) to 34.61 (by HyperTransformer).
reject
This paper incorporates phrases within the transformer architecture. The underlying idea is interesting, but the reviewers have raised serious concerns with both clarity and the trustworthiness of the experimental evaluation, and thus I cannot recommend acceptance at this time.
train
[ "rJxkd5v0Fr", "BkxidjcRtB", "ryg1sde8qB", "SyxgHKR_qS", "rylp9-vO9S", "SJxYE6JI9H", "HJg-aRtxqr", "H1lsJqssFB", "BylzE-XI_r", "B1ec1wCzOS", "B1lGqREgur" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public", "author", "public", "author", "public", "public" ]
[ "This paper addresses an issue of compositionality in self-attention models such Transformer. A simple idea of composing multiple words into a phrase as a hypernode and representing it using a non-linear function to capture the semantic mutation is proposed. In the machine translation and PoS tagging tasks, the pro...
[ 3, 1, 1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_HJeYalBKvr", "iclr_2020_HJeYalBKvr", "iclr_2020_HJeYalBKvr", "rylp9-vO9S", "SJxYE6JI9H", "HJg-aRtxqr", "H1lsJqssFB", "iclr_2020_HJeYalBKvr", "B1ec1wCzOS", "iclr_2020_HJeYalBKvr", "iclr_2020_HJeYalBKvr" ]
iclr_2020_rJgqalBKvH
Deceptive Opponent Modeling with Proactive Network Interdiction for Stochastic Goal Recognition Control
Goal recognition based on observations of behaviors collected online has been used to model some potential applications. The newly formulated problem of goal recognition design aims at facilitating the online goal recognition process by performing offline redesign of the underlying environment with hard action removal. In this paper, we propose the stochastic goal recognition control (S-GRC) problem with two main stages: (1) deceptive opponent modeling based on maximum entropy regularized Markov decision processes (MDPs) and (2) goal recognition control under proactively static interdiction. For the purpose of evaluation, we propose to use the worst case distinctiveness (wcd) as a measure of the non-distinctive path that does not reveal the true goals; the task of S-GRC is to interdict a set of actions that improve or reduce the wcd. We empirically demonstrate that our proposed approach controls the goal recognition process based on the opponent's deceptive behavior.
reject
This paper has been withdrawn by the authors.
train
[ "B1eh40NfjH", "BJxVeg3kKS", "BklzD_02Yr", "HJeoRzyD9r" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.", "This paper studies goal recognition control given a deceptive opponent, who selects actions to intentionally mislead or confusing the learner to learn the true goal. The problem has been studied in the security gam...
[ -1, 1, 1, 1 ]
[ -1, 3, 4, 1 ]
[ "iclr_2020_rJgqalBKvH", "iclr_2020_rJgqalBKvH", "iclr_2020_rJgqalBKvH", "iclr_2020_rJgqalBKvH" ]
iclr_2020_BklsagBYPS
A GOODNESS OF FIT MEASURE FOR GENERATIVE NETWORKS
We define a goodness of fit measure for generative networks which captures how well the network can generate the training data, which is necessary to learn the true data distribution. We demonstrate how our measure can be leveraged to understand mode collapse in generative adversarial networks and provide practitioners with a novel way to perform model comparison and early stopping without having to access another trained model as with Frechet Inception Distance or Inception Score. This measure shows that several successful, popular generative models, such as DCGAN and WGAN, fall very short of learning the data distribution. We identify this issue in generative models and empirically show that overparameterization via subsampling data and using a mixture of models improves performance in terms of goodness of fit.
reject
This paper proposes to measure the distance of the generator manifold to the training data. The proposed approach bears significant similarity to past studies that also sought to analyze the behavior of generative models that define a low-dimensional manifold (e.g. Webster 2019, and in particular, Xiang 2017). I recommend that the authors perform a broader literature search to better contextualize the claims and experiments put forth in the paper. The proposed method also suffers from some limitations that are not made clear in the paper. First, the measure depends only on the support of the generator, but not the density. For models that have support everywhere (exact likelihood models tend to have this property by construction), the measure is no longer meaningful. Even for VAEs, the measure is only easily applicable if the decoder is non-autoregressive so that the procedure can be applied only to the mean decoding. In this current state, I do not recommend the paper for submission. Xiang (2017). On the Effects of Batch and Weight Normalization in Generative Adversarial Networks Webster (2019). Detecting Overfitting of Deep Generative Networks via Latent Recovery
train
[ "SygOTfanYH", "Sklvl9H5sB", "SyeYCUzKsB", "H1x3_8ztjH", "r1lF7UfFsS", "Bkx7i9i1KH", "HJlxTZnitB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper defines a goodness of fit measure F for generative networks, that reflects how well a model can generate the training data. F allows to detect mode collapse: as long as it is strictly positive, mode collapse is observed as parts of the training data have not been memorized. It aims at providing an alter...
[ 3, -1, -1, -1, -1, 1, 3 ]
[ 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2020_BklsagBYPS", "H1x3_8ztjH", "Bkx7i9i1KH", "HJlxTZnitB", "SygOTfanYH", "iclr_2020_BklsagBYPS", "iclr_2020_BklsagBYPS" ]
iclr_2020_Syxi6grFwH
HIPPOCAMPAL NEURONAL REPRESENTATIONS IN CONTINUAL LEARNING
The hippocampus has long been associated with spatial memory and goal-directed spatial navigation. However, the region's independent role in continual learning of navigational strategies has seldom been investigated. Here we analyse population-level activity of hippocampal CA1 neurons in the context of continual learning of two different spatial navigation strategies. Demixed Principal Component Analysis (dPCA) is applied to neuronal recordings from 612 hippocampal CA1 neurons of rodents learning to perform allocentric and egocentric spatial tasks. The components uncovered using dPCA from the firing activity reveal that hippocampal neurons encode relevant task variables such as decisions, navigational strategies and reward location. We compare these hippocampal features with standard reinforcement learning algorithms, highlighting similarities and differences. Finally, we demonstrate that a standard deep reinforcement learning model achieves similar average performance when compared to animal learning, but fails to mimic animals during task switching. Overall, our results give insights into how the hippocampus solves reinforced spatial continual learning, and put forward a framework to explicitly compare biological and machine learning during spatial continual learning.
reject
This paper analyzes neural recording data taken from rodents performing a continual learning task using demixed principal component analysis, and aims to find representations for behaviorally relevant variables. They compare these features with those of a deep RL agent. I am a big fan of papers like this that try to bridge between neuroscience and machine learning. It seems to have a great motivation and there are some interesting results presented. However the reviewers pointed out many issues that lead me to believe this work is not quite ready for publication. In particular, not considering space when analyzing hippocampal rodent data, as R2 points out, seems to be a major oversight. In addition, the sample size is incredibly small (5 rats, only 1 of which was used for the continual learning simulation). This seems to me like more of an exploratory, pilot study than a full experiment that is ready for publication, and therefore I am unfortunately recommending reject. Reviewer comments were very thorough and on point. Sounds like the authors are already working on the next version of the paper with these points in mind, so I look forward to it.
train
[ "ryxgFEOjsB", "rkl8ENusor", "S1gJ_Muijr", "ByxQVQroiB", "S1lC7WG5iS", "SyedURFjFr", "r1gnCnjpYH", "r1xep1EJ9B" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank you for your time and comments.", " We would like to thank the reviewer for the thoughtful comments. Our replies to the major issues pointed out are as below:\n\n(1) We agree that space is crucial when considering the hippocampus and this is very much an element that we are currently investigating. In o...
[ -1, -1, -1, -1, 1, 1, 3, 6 ]
[ -1, -1, -1, -1, 4, 5, 3, 1 ]
[ "r1xep1EJ9B", "SyedURFjFr", "S1lC7WG5iS", "r1gnCnjpYH", "iclr_2020_Syxi6grFwH", "iclr_2020_Syxi6grFwH", "iclr_2020_Syxi6grFwH", "iclr_2020_Syxi6grFwH" ]
iclr_2020_SyxhaxBKPS
Smart Ternary Quantization
Neural network models are resource hungry. Low-bit quantization, such as binary and ternary quantization, is a common approach to alleviate these resource requirements. Ternary quantization provides a more flexible model and often beats binary quantization in terms of accuracy, but doubles memory and increases computation cost. Mixed quantization depth models, on the other hand, allow a trade-off between accuracy and memory footprint. In such models, quantization depth is often chosen manually (which is a tedious task), or is tuned using a separate optimization routine (which requires training a quantized network multiple times). Here, we propose Smart Ternary Quantization (STQ), in which we modify the quantization depth directly through an adaptive regularization function, so that we train a model only once. This method jumps between binary and ternary quantization while training. We show its application on image classification.
reject
This paper studies mixed-precision quantization in deep networks where each layer can be either binarized or ternarized. The proposed regularization method is simple and straightforward. However, many details and equations are not stated clearly. Experiments are performed on small-scale image classification data sets; it would be more convincing to try larger networks or data sets. More importantly, many recent methods that can train mixed-precision networks are neither cited nor compared. Figures 3 and 4 are difficult to interpret, and sensitivity to the new hyper-parameters should be studied. The use of "best validation accuracy" as the performance metric may not be fair. Finally, the writing can be improved. Overall, the proposed idea might have merit, but does not seem to have been developed enough.
val
[ "S6PlF27iLaS", "ryxHK6Hhir", "SygAmaS3sH", "B1e4AFrhoH", "r1l2k3kziH", "r1eEnyfrYS", "rJgg6GphYH", "BJeJHadbcS" ]
[ "official_reviewer", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies mixed-precision quantization in deep networks where each layer can be either binarized or ternarized. The authors propose an adaptive regularization function that can be pushed to either 2-bit or 3-bit through different parameterization, in order to automatically determine the precision of each...
[ 3, -1, -1, -1, -1, 6, 1, 3 ]
[ 4, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2020_SyxhaxBKPS", "r1eEnyfrYS", "rJgg6GphYH", "BJeJHadbcS", "iclr_2020_SyxhaxBKPS", "iclr_2020_SyxhaxBKPS", "iclr_2020_SyxhaxBKPS", "iclr_2020_SyxhaxBKPS" ]
iclr_2020_Skl6peHFwS
Best feature performance in codeswitched hate speech texts
How well can the concept of hate speech be abstracted in order to inform automatic classification in codeswitched texts by machine learning classifiers? We explore different representations and empirically evaluate their predictiveness, using both conventional and deep learning algorithms, in identifying hate speech in a ~48k human-annotated dataset that contains mixed languages, a phenomenon common among multilingual speakers. This paper espouses a novel approach to handle this challenge by introducing a hierarchical method that employs Latent Dirichlet Allocation to generate topic models that feed into another high-level feature set that we term PDC. PDC groups similar-meaning words into word families during the preprocessing stage for supervised learning models. The high-level PDC features generated are based on the Ombui et al. (2019) hate speech annotation framework, which is informed by the triangular theory of hate (Sternberg, 2003). Results obtained from frequency-based models using the PDC feature on the annotated dataset of ~48k short messages, comprising tweets generated during the 2012 and 2017 Kenyan presidential elections, indicate an improvement in classification accuracy in identifying hate speech compared to the baseline
reject
This paper focuses on hate speech detection and compares several classification methods including Naive Bayes, SVM, KNN, CNN, and many others. The most valuable contribution of this work is a dataset of ~400,000 tweets from 2017 Kenyan general election, although it is unclear whether the authors plan to release the dataset in the future. The paper is difficult to follow, uses an incorrect ICLR format, and is full of typos. All three reviewers agree that while this paper deals with an important topic in social media analysis, it is not ready for publication in its current state. The authors did not provide a rebuttal to reviewers' concerns. I recommend rejecting this paper for ICLR.
train
[ "BJgWdenDOr", "r1gV_dm2KB", "SyxCb3k6FH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Comments: \n\n-This paper considers codeswitched hate speech texts from an NLP perspective. The dataset considers mixed languages. \n\n-Focuses on kenyan presidential election. \n\n-Paper has severe formatting issues as well as simple issues like capitalization. Additionally many plots are rather unattractive ...
[ 3, 1, 1 ]
[ 4, 1, 4 ]
[ "iclr_2020_Skl6peHFwS", "iclr_2020_Skl6peHFwS", "iclr_2020_Skl6peHFwS" ]
iclr_2020_S1x6TlBtwB
Mixture Distributions for Scalable Bayesian Inference
Bayesian Neural Networks (BNNs) provide a mathematically grounded framework to quantify uncertainty. However, BNNs are computationally inefficient and thus generally not employed on complicated machine learning tasks. Deep Ensembles were introduced to the community as a bootstrap-inspired frequentist alternative to BNNs. Ensembles of deterministic and stochastic networks are good uncertainty estimators in various applications (although they are criticized for not being Bayesian). We show that ensembles of deterministic and stochastic neural networks can indeed be cast as approximate Bayesian inference. Deep Ensembles have another weakness, namely high space complexity; we provide an alternative by modifying the original Bayes by Backprop (BBB) algorithm to learn more general concrete mixture distributions over weights. We show that our method and its variants can give better uncertainty estimates at a significantly lower parametric overhead than Deep Ensembles. We validate our hypothesis through experiments on non-linear regression, predictive uncertainty estimation, detecting adversarial images, and the exploration-exploitation trade-off in reinforcement learning.
reject
This paper proposes to use mixture distributions to improve uncertainty estimates in BNNs. Ensemble methods are interpreted as a Bayesian mixture posterior approximation. To reduce the computation, a modification to BBB is provided based on a concrete mixture distribution. Both R1 and R3 have given useful feedback. It is clear that the interpretation of ensembles as a Bayesian posterior is well known, and some of these interpretations also have theoretical issues. An experiment clearly comparing the proposed mixture posterior to more commonly used mixture distributions is also necessary. For these reasons, I recommend rejecting this paper. I encourage the authors to use the reviewers' feedback to improve the paper.
train
[ "rkeat7C6FS", "HkempcshjH", "ryg67R53jH", "Hygc1jVwor", "ByewHtY3sH", "B1xUVtd3iH", "B1e78-vhoB", "BJgwyrd3sr", "B1eRIM82oH", "H1eQqhE2oB", "Bye1noEnsB", "S1xucsSojS", "rJgAvsSosB", "SJlgDm4iiB", "HJee9LcZqS", "Syg9jcy6cH" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper proposes to use either relaxed mixture distributions or relaxed mixtures of matrix Gaussians as the approximate posterior for Bayesian neural networks. Naturally, taking the mixture variance to zero allows a stretched interpretation of ensembling to become Bayesian as well. Experiments are perf...
[ 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2020_S1x6TlBtwB", "B1eRIM82oH", "iclr_2020_S1x6TlBtwB", "HJee9LcZqS", "B1xUVtd3iH", "BJgwyrd3sr", "SJlgDm4iiB", "B1e78-vhoB", "Bye1noEnsB", "rJgAvsSosB", "S1xucsSojS", "rJgAvsSosB", "rkeat7C6FS", "Syg9jcy6cH", "iclr_2020_S1x6TlBtwB", "iclr_2020_S1x6TlBtwB" ]
iclr_2020_SkeATxrKwH
Generative Hierarchical Models for Parts, Objects, and Scenes
Hierarchical structures, such as the part-whole relationships in objects and scenes, are among the most inherent structures in natural scenes. Learning such representations via unsupervised learning can provide various benefits such as interpretability, compositionality, and transferability, which are important in many downstream tasks. In this paper, we propose the first hierarchical generative model for learning multiple latent part-whole relationships in a scene. During inference, taking a top-down approach, our model infers the representation of a more abstract concept (e.g., objects) and then infers that of more specific concepts (e.g., parts) by conditioning on the corresponding abstract concept. This lets the model avoid the difficult problem of routing between parts and wholes. In experiments on images containing multiple objects with different shapes and part compositions, we demonstrate that our model can learn the latent hierarchical structure between parts and wholes and generate imaginary scenes.
reject
The authors propose a generative model with a hierarchy of latent variables corresponding to a scene, objects, and object parts. The submission initially received low scores with 2 rejects and 1 weak reject. After the rebuttal, the paper was revised and improved, with significant portions of the paper completely rewritten (the description of the model was rewritten and a new experiment comparing the proposed model to SPAIR was added). While the reviewers acknowledged the improvement in the paper and accordingly adjusted their scores upward, the paper is still not sufficiently strong to be accepted (it currently has 3 weak rejects). The reviewers expressed the following concerns: 1. The experiments use only a toy dataset that does not convincingly demonstrate the generalizability of the method to more realistic/varied scenarios. In particular, the reviewers voiced concern that the dataset is tailored to the proposed method. 2. Lack of comparisons with baseline methods such as AIR/SPAIR and other work on hierarchical generative models such as SPIRAL. In the revision, the authors added an experiment comparing to SPAIR, so this is partially addressed. As a whole, the paper is still weak in experimental rigor. The authors argue that as their main contribution is the design and successful learning of a probabilistic scene graph representation, there is no need for ablation studies or comparisons against baselines because their method "can bring better compositionality, interpretability, transferability, and generalization". This argument is unconvincing, as in a scientific endeavor the validity of such claims needs to be shown via empirical comparisons with prior work and ablation studies. 3. Limited novelty. The method is a fairly straightforward extension of SPAIR with another hierarchy layer. This would not be a concern if the experimental aspects of the work were stronger. The AC agrees with the issues pointed out by the reviewers.
In addition, the initial presentation of the paper was very poor. While the paper has been improved, the changes are substantial (with the description of the method and intro almost entirely rewritten). Regardless, despite the improvements in writing, the paper is still not strong enough to be accepted. I would recommend the authors improve the evaluation and resubmit.
train
[ "HJxcv96sFH", "rkxOuHUXcS", "rkgZtVmCtS", "SylZZNK2oB", "BkeYkNY3or", "HkgD6XKnoB", "SJgNVQY3iH", "SyxbW7tnjB", "SkgbAzY3jr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Contributions: this submission proposes a generative framework which employs a hierarchical decomposition of scene, objects and parts. It modifies the SPAIR framework (Crawford & Pineau) and replaces the recurrent generation with parallel generation, by assuming the number of objects is known and fixed. Results ar...
[ 3, 3, 3, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2020_SkeATxrKwH", "iclr_2020_SkeATxrKwH", "iclr_2020_SkeATxrKwH", "HJxcv96sFH", "rkgZtVmCtS", "rkxOuHUXcS", "iclr_2020_SkeATxrKwH", "iclr_2020_SkeATxrKwH", "iclr_2020_SkeATxrKwH" ]
iclr_2020_BJlA6eBtvH
Differentiable Hebbian Consolidation for Continual Learning
Continual learning is the problem of sequentially learning new tasks or knowledge while protecting previously acquired knowledge. However, catastrophic forgetting poses a grand challenge for neural networks performing such a learning process. Thus, neural networks that are deployed in the real world often struggle in scenarios where the data distribution is non-stationary (concept drift), imbalanced, or not always fully available, i.e., rare edge cases. We propose a Differentiable Hebbian Consolidation model which is composed of a Differentiable Hebbian Plasticity (DHP) Softmax layer that adds a rapid-learning plastic component (compressed episodic memory) to the fixed (slow-changing) parameters of the softmax output layer, enabling learned representations to be retained for a longer timescale. We demonstrate the flexibility of our method by integrating well-known task-specific synaptic consolidation methods to penalize changes in the slow weights that are important for each target task. We evaluate our approach on the Permuted MNIST, Split MNIST and Vision Datasets Mixture benchmarks, and introduce an imbalanced variant of Permuted MNIST --- a dataset that combines the challenges of class imbalance and concept drift. Our proposed model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.
reject
The reviewers agreed that this paper tackles an important problem, continual learning, with a method that is well motivated and interesting. The rebuttal was very helpful in terms of relating to other work. However, the empirical evaluation, while good, could be improved. In particular, it is not clear based on the evaluation to what extent more interesting continual learning problems can be tackled. We encourage the authors to continue pursuing this work.
train
[ "Skxx5TqhoH", "r1xvrhmUdr", "r1gc-QI3oB", "BJelLx3osB", "SkxVhenjiB", "Hkl9s0sosB", "HyeEBaissH", "HJgxkaiijr", "H1gf5z-pKS", "ByeRgzjkcB" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you Reviewer 1 for the quick response. We also thank you for validating our observations on this set of class-incremental methods like iCaRL. We would like to acknowledge that in the related work section of the paper, we did include CLS theory inspired strategies based on pseudo-rehearsal, episodic/exact rep...
[ -1, 3, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, 1, 1 ]
[ "r1gc-QI3oB", "iclr_2020_BJlA6eBtvH", "SkxVhenjiB", "Hkl9s0sosB", "r1xvrhmUdr", "H1gf5z-pKS", "HJgxkaiijr", "ByeRgzjkcB", "iclr_2020_BJlA6eBtvH", "iclr_2020_BJlA6eBtvH" ]
iclr_2020_BklDO1HYPS
Accelerated Variance Reduced Stochastic Extragradient Method for Sparse Machine Learning Problems
Recently, many stochastic gradient descent algorithms with variance reduction have been proposed. Moreover, their proximal variants, such as Prox-SVRG, can effectively solve non-smooth problems, which makes them widely applicable to many machine learning problems. However, the introduction of the proximal operator results in error in the optimal value. In order to address this issue, we introduce the idea of the extragradient and propose a novel accelerated variance reduced stochastic extragradient descent (AVR-SExtraGD) algorithm, which inherits the advantages of Prox-SVRG and momentum acceleration techniques. Moreover, our theoretical analysis shows that AVR-SExtraGD enjoys the best-known convergence rates and oracle complexities of stochastic first-order algorithms such as Katyusha for both strongly convex and non-strongly convex problems. Finally, our experimental results show that for ERM problems and robust face recognition via sparse representation, AVR-SExtraGD yields improved performance compared with Prox-SVRG and Katyusha. The asynchronous variant of AVR-SExtraGD outperforms KroMagnon and ASAGA, the asynchronous variants of SVRG and SAGA, respectively.
reject
This paper proposes a stochastic variance reduced extragradient algorithm. The reviewers had a number of concerns which I feel have been adequately addressed by the authors. That being said, the field of optimizers is crowded and I could not be convinced that the proposed method would be used. In particular, (almost) hyperparameter-free methods are usually preferred (see Adam), which is not the case here. To be honest, this work is borderline and could have gone either way but was rated lower than other borderline submissions.
train
[ "ryxr33Yijr", "HklJuIqjsB", "B1xtsw5isr", "r1eA4OKsoB", "Sye1yhFojS", "S1l6oaWstr", "B1x_ANL1cS", "B1eqF1Ykqr" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Question 1: It would be good and well understood to explain intuitively the steps of the proposed algorithm.\n\nResponse: Thank you for your valuable suggestion. To address your concern, we have added some explanations for the proposed algorithm in Section 3.1 in the revised manuscript and made it easier to unders...
[ -1, -1, -1, -1, -1, 8, 1, 6 ]
[ -1, -1, -1, -1, -1, 1, 3, 4 ]
[ "S1l6oaWstr", "B1x_ANL1cS", "B1x_ANL1cS", "B1eqF1Ykqr", "iclr_2020_BklDO1HYPS", "iclr_2020_BklDO1HYPS", "iclr_2020_BklDO1HYPS", "iclr_2020_BklDO1HYPS" ]
iclr_2021_nIAxjsniDzg
What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study
In recent years, reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to discrepancy between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress [Engstrom'20]. As a step towards filling that gap, we implement >50 such "choices" in a unified on-policy deep actor-critic framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250,000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for the training of on-policy deep actor-critic RL agents.
oral-presentations
There is a clear consensus over all reviewers that this is a very strong empirical analysis, with actionable insights that should prove quite useful both to researchers and practitioners. I have no doubt that many will use it as a reference when implementing and using RL algorithms (especially since the authors said they would release their code). This is thus a clear accept, that in my opinion would deserve an oral presentation, so as to better disseminate its key findings.
test
[ "L7IvBYx0PqH", "NT03fLlV7DT", "9GFpWQd4Fsh", "kCjqVkK-aln", "Sve8r61QjmN", "87eZ4ihfSZ", "uRDqayVVDBr", "HBC7V6IvvI", "R9kNv-9434Z", "5nkNQUTGEd0", "-alWI4hbVo8" ]
[ "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their review. Please find our answers below:\n\n1. Performance averaged across training vs final performance:\nWe choose the performance averaged across training as the relevant metric as different practitioners may have different computational budgets and therefore train for a different ...
[ -1, -1, -1, -1, -1, -1, -1, 9, 7, 9, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "-alWI4hbVo8", "R9kNv-9434Z", "5nkNQUTGEd0", "HBC7V6IvvI", "uRDqayVVDBr", "iclr_2021_nIAxjsniDzg", "iclr_2021_nIAxjsniDzg", "iclr_2021_nIAxjsniDzg", "iclr_2021_nIAxjsniDzg", "iclr_2021_nIAxjsniDzg", "iclr_2021_nIAxjsniDzg" ]
iclr_2021_rC8sJ4i6kaH
Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
Self-training algorithms, which train a model to fit pseudolabels predicted by another previously-learned model, have been very successful for learning with unlabeled data using neural networks. However, the current theoretical understanding of self-training only applies to linear models. This work provides a unified theoretical analysis of self-training with deep networks for semi-supervised learning, unsupervised domain adaptation, and unsupervised learning. At the core of our analysis is a simple but realistic “expansion” assumption, which states that a low-probability subset of the data must expand to a neighborhood with large probability relative to the subset. We also assume that neighborhoods of examples in different classes have minimal overlap. We prove that under these assumptions, the minimizers of population objectives based on self-training and input-consistency regularization will achieve high accuracy with respect to ground-truth labels. By using off-the-shelf generalization bounds, we immediately convert this result to sample complexity guarantees for neural nets that are polynomial in the margin and Lipschitzness. Our results help explain the empirical successes of recently proposed self-training algorithms which use input consistency regularization.
oral-presentations
The paper looks into a theoretical analysis of self-training beyond the existing linear case and considers deep networks under additional assumptions on the data, namely expansion and minimal overlap in the neighborhoods of examples in different classes. The results shed some light on self-training algorithms that use input consistency regularizers. Although the assumptions are very hard to check for all input distributions, the authors make an attempt by considering the output of a BigGAN generator. In summary, the paper is a great first step in understanding self-training for deep networks. The paper is overall clearly written. Please add the explanation of Assumption 4.1 as requested by Reviewer 4. Pros: - Given the extensive use of self-training, the paper is of great importance to the community. - It extends the analysis of self-training to deep networks. - The paper is clearly written and easy to follow. Cons: - The assumptions are very hard to validate on all datasets.
test
[ "RjMC7r4P7RH", "1HEEmuW6TFq", "nw7Z6nuJqM0", "5QI3XCORcPs", "j1sV1ocfwxU", "9FtiNaOegLn", "ob2NmpNzjH5", "jcWce-ovV1", "pImZKtEe4UV", "GYCQ3ygXVfI", "Vpp_yDmBCVY", "lXk8Uhgyu9d" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the great suggestion and for reading our response. We will incorporate this suggestion in the next revision of our paper.", "Thanks for the response. I think a brief discussion you provided for my question on Assumption 4.1 would make a nice addition to the paper if there's room!", "Thank you for...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 9, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "1HEEmuW6TFq", "5QI3XCORcPs", "jcWce-ovV1", "lXk8Uhgyu9d", "GYCQ3ygXVfI", "GYCQ3ygXVfI", "Vpp_yDmBCVY", "pImZKtEe4UV", "iclr_2021_rC8sJ4i6kaH", "iclr_2021_rC8sJ4i6kaH", "iclr_2021_rC8sJ4i6kaH", "iclr_2021_rC8sJ4i6kaH" ]
iclr_2021_rALA0Xo6yNJ
Learning to Reach Goals via Iterated Supervised Learning
Current reinforcement learning (RL) algorithms can be brittle and difficult to use, especially when learning goal-reaching behaviors from sparse rewards. Although supervised imitation learning provides a simple and stable alternative, it requires access to demonstrations from a human supervisor. In this paper, we study RL algorithms that use imitation learning to acquire goal reaching policies from scratch, without the need for expert demonstrations or a value function. In lieu of demonstrations, we leverage the property that any trajectory is a successful demonstration for reaching the final state in that same trajectory. We propose a simple algorithm in which an agent continually relabels and imitates the trajectories it generates to progressively learn goal-reaching behaviors from scratch. Each iteration, the agent collects new trajectories using the latest policy, and maximizes the likelihood of the actions along these trajectories under the goal that was actually reached, so as to improve the policy. We formally show that this iterated supervised learning procedure optimizes a bound on the RL objective, derive performance bounds of the learned policy, and empirically demonstrate improved goal-reaching performance and robustness over current RL algorithms in several benchmark tasks.
oral-presentations
The paper leverages concepts coming from hindsight relabelling methods to define a novel "iterated" supervised learning procedure to learn policies to reach different goals. The algorithmic solution is well supported in terms of intuition, preliminary theoretical guarantees, as well as strong empirical validation. There is a general consensus among the reviewers that this is a strong submission and the rebuttal helped in clarifying some aspects of the paper (e.g., the comparison with Go-Explore) and reinforced the empirical analysis. This is a clear accept.
train
[ "OOB6dOiEH3", "vzBkRumGne5", "mWlByX4JVtl", "xUHgxU_bOIg", "kJIHPkf2pGP", "PGfsguNepQf", "_b4v5tYZy6n", "OJehLWbDHLi", "phW78C6hPN7", "2jLzhocAx4m" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a new RL algorithms dedicated to learning goal-oriented policies. In the described setting, a policy has to reach a particular goal in T steps such that s_T = goal i.e the objective is not to reach the goal as soon as possible but in a particular number of steps. The proposed algorithm is very s...
[ 7, -1, 8, -1, -1, -1, -1, -1, 8, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_rALA0Xo6yNJ", "_b4v5tYZy6n", "iclr_2021_rALA0Xo6yNJ", "iclr_2021_rALA0Xo6yNJ", "phW78C6hPN7", "OOB6dOiEH3", "mWlByX4JVtl", "2jLzhocAx4m", "iclr_2021_rALA0Xo6yNJ", "iclr_2021_rALA0Xo6yNJ" ]
iclr_2021_m5Qsh0kBQG
Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients
Discovering the underlying mathematical expressions describing a dataset is a core challenge for artificial intelligence. This is the problem of symbolic regression. Despite recent advances in training neural networks to solve complex tasks, deep learning approaches to symbolic regression are underexplored. We propose a framework that leverages deep learning for symbolic regression via a simple idea: use a large model to search the space of small models. Specifically, we use a recurrent neural network to emit a distribution over tractable mathematical expressions and employ a novel risk-seeking policy gradient to train the network to generate better-fitting expressions. Our algorithm outperforms several baseline methods (including Eureqa, the gold standard for symbolic regression) in its ability to exactly recover symbolic expressions on a series of benchmark problems, both with and without added noise. More broadly, our contributions include a framework that can be applied to optimize hierarchical, variable-length objects under a black-box performance metric, with the ability to incorporate constraints in situ, and a risk-seeking policy gradient formulation that optimizes for best-case performance instead of expected performance.
oral-presentations
This paper proposes an approach of generating mathematical expressions with a recurrent neural network, which is trained with risk-seeking policy gradient to maximize the quality of best examples rather than average examples. The proposed approach also enables easily incorporating domain knowledge or constraints to avoid illegal or redundant expressions. In extensive experiments, the proposed method is shown to significantly outperform strong baselines, including commercial software. All of the reviewers find the work interesting and relevant, and there are no major concerns or issues after discussion. The topic is also of interest to a wide range of audience in the ICLR community.
train
[ "OxAF_3-bDOQ", "6Gazor0EiJ", "Gr8ClQMMCpW", "PEeDN1MeSya", "q15LbOgRO4Y", "3VwFL3Fm788", "ORh4FliCHZN", "fWlpyZRTLDK", "OUfQIBM4p8", "Y9iHkAVt3i", "QSjK34Yj1u4", "-6JSOb3_4rr", "W9pGYaYB1Q", "fkXSX0BWQn" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your answers. It is nice to see Table 11 with compute times. I would add results from VPG-convergence.pdf to the appendix as well. ", "#### Summary\n\nThis paper presents a symbolic regression algorithm which uses policy gradient to learn a distribution over the space of mathematical expression str...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 8, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "fWlpyZRTLDK", "iclr_2021_m5Qsh0kBQG", "q15LbOgRO4Y", "ORh4FliCHZN", "6Gazor0EiJ", "-6JSOb3_4rr", "fkXSX0BWQn", "OUfQIBM4p8", "W9pGYaYB1Q", "6Gazor0EiJ", "6Gazor0EiJ", "iclr_2021_m5Qsh0kBQG", "iclr_2021_m5Qsh0kBQG", "iclr_2021_m5Qsh0kBQG" ]
iclr_2021_PULSD5qI2N1
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime
We analyze the convergence of the averaged stochastic gradient descent for overparameterized two-layer neural networks for regression problems. It was recently found that a neural tangent kernel (NTK) plays an important role in showing the global convergence of gradient-based methods under the NTK regime, where the learning dynamics for overparameterized neural networks can be almost characterized by that for the associated reproducing kernel Hilbert space (RKHS). However, there is still room for a convergence rate analysis in the NTK regime. In this study, we show that the averaged stochastic gradient descent can achieve the minimax optimal convergence rate, with the global convergence guarantee, by exploiting the complexities of the target function and the RKHS associated with the NTK. Moreover, we show that the target function specified by the NTK of a ReLU network can be learned at the optimal convergence rate through a smooth approximation of a ReLU network under certain conditions.
oral-presentations
The paper presents some exciting results on the convergence of averaged SGD for overparameterized two-layer neural networks. The AC and reviewers all agree that the contributions are significant and well presented, and appreciate the author feedback to the reviews. The corresponding revisions on assumptions and references, and the added simplified proposition in the introduction have nicely improved the manuscript.
train
[ "YUzOq4nycmF", "MSz971IUftf", "Ir6S-kTqueP", "YHYJHXq0Sd4", "Ob2PLEto_-", "xw8j-37KAxt", "fm_zeKoCLvJ", "4SNkn6In5Vv", "SbgSUVBNoJ_", "NLe1F2VU-Gs", "TNq4397Wg3U", "2XUuEqYTtnb" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n \nThis paper considers the convergence property of averaged stochastic gradient descent on a overparameterized two-layer neural networks for a regression problem. This paper is the first to achieve the optimal convergence rate under the NTK regime. They show that smooth target functions efficiently spec...
[ 7, -1, -1, -1, -1, -1, -1, -1, 7, 8, 8, 8 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 5 ]
[ "iclr_2021_PULSD5qI2N1", "YHYJHXq0Sd4", "SbgSUVBNoJ_", "NLe1F2VU-Gs", "TNq4397Wg3U", "2XUuEqYTtnb", "YUzOq4nycmF", "iclr_2021_PULSD5qI2N1", "iclr_2021_PULSD5qI2N1", "iclr_2021_PULSD5qI2N1", "iclr_2021_PULSD5qI2N1", "iclr_2021_PULSD5qI2N1" ]
iclr_2021_JWOiYxMG92s
Free Lunch for Few-shot Learning: Distribution Calibration
Learning from a limited number of samples is challenging since the learned model can easily become overfitted to the biased distribution formed by only a few training examples. In this paper, we calibrate the distribution of these few-sample classes by transferring statistics from the classes with sufficient examples. Then an adequate number of examples can be sampled from the calibrated distribution to expand the inputs to the classifier. We assume every dimension in the feature representation follows a Gaussian distribution, so that the mean and the variance of the distribution can be borrowed from those of similar classes whose statistics are better estimated with an adequate number of samples. Our method can be built on top of off-the-shelf pretrained feature extractors and classification models without extra parameters. We show that a simple logistic regression classifier trained using the features sampled from our calibrated distribution can outperform the state-of-the-art accuracy on three datasets (~5% improvement on miniImageNet compared to the next best). The visualization of these generated features demonstrates that our calibrated distribution is an accurate estimation.
oral-presentations
This paper proposes a novel and powerful data augmentation strategy for few-shot learning, producing convincing improvements over current approaches. The reviewers' requests to include additional ablations, more backbones, and an additional dataset have been satisfactorily addressed, with the results remaining strong. The reviewers are unanimous in recommending that the paper be accepted for publication.
train
[ "zuXW6WDf8ia", "4adsJMz8hA2", "5JluQatbhby", "pW5UgHEp4JU", "2_UN_ouU5CG", "1dxkkh6xBWA", "6oQZy4VR38", "DD7-TM9kZb" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThe paper proposes a method to calibrate the underlying distribution of a few samples in the few-shot classification scenario. The idea is to estimate a feature distribution of a few samples of a novel class from base class distributions. The authors assume that every dimension in the feature vector fol...
[ 7, -1, -1, -1, -1, -1, 7, 7 ]
[ 5, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_JWOiYxMG92s", "iclr_2021_JWOiYxMG92s", "pW5UgHEp4JU", "zuXW6WDf8ia", "6oQZy4VR38", "DD7-TM9kZb", "iclr_2021_JWOiYxMG92s", "iclr_2021_JWOiYxMG92s" ]
iclr_2021_HajQFbx_yB
Scalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes
Determinantal point processes (DPPs) have attracted significant attention in machine learning for their ability to model subsets drawn from a large item collection. Recent work shows that nonsymmetric DPP (NDPP) kernels have significant advantages over symmetric kernels in terms of modeling power and predictive performance. However, for an item collection of size M, existing NDPP learning and inference algorithms require memory quadratic in M and runtime cubic (for learning) or quadratic (for inference) in M, making them impractical for many typical subset selection tasks. In this work, we develop a learning algorithm with space and time requirements linear in M by introducing a new NDPP kernel decomposition. We also derive a linear-complexity NDPP maximum a posteriori (MAP) inference algorithm that applies not only to our new kernel but also to that of prior work. Through evaluation on real-world datasets, we show that our algorithms scale significantly better, and can match the predictive performance of prior work.
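To make the kernel structure concrete, here is a toy numpy sketch of a low-rank nonsymmetric kernel (a symmetric PSD part plus a skew-symmetric part) and a subset log-probability. The specific form L = V Vᵀ + B D Bᵀ is an illustrative decomposition in the spirit of prior NDPP work, not the paper's new decomposition, and the linear-complexity learning and MAP algorithms are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
M, r = 100, 4                    # ground-set size, kernel rank

V = rng.normal(size=(M, r))      # factor of the symmetric PSD component
B = rng.normal(size=(M, r))
C = rng.normal(size=(r, r))
D = C - C.T                      # skew-symmetric inner matrix
# Only V, B, D (O(M r) memory) need to be stored; L is formed explicitly
# here just to evaluate small subsets, not in a scalable implementation.
L = V @ V.T + B @ D @ B.T

def subset_logprob_unnorm(items):
    """Unnormalized log-probability of a subset: log det of L restricted to it."""
    sign, logdet = np.linalg.slogdet(L[np.ix_(items, items)])
    return logdet if sign > 0 else -np.inf

lp = subset_logprob_unnorm([3, 17, 42])
```

Because the symmetric part of L is PSD, every principal minor is nonnegative, so these determinants define valid (unnormalized) subset probabilities.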
oral-presentations
This paper proposes a technique of decomposing the nonsymmetric kernel of determinantal point processes, which enables inference and learning in time and space linear with respect to the size of the ground set. This substantially improves upon existing work. The proposed method is well supported both with theory and experiments. All of the reviewers find that the contributions are significant, and no major flaws are identified through reviews and discussion. The determinantal point process might not be one of the most popular topics in the ICLR community today but certainly is relevant.
train
[ "-jf6rBYHM-u", "KxoyhQagVp7", "KIWKldKHUA", "ws5OnGNcZBe", "54b_th9wRo", "QkgqQlBRSgb", "Dr6gMaGKl7c", "M1EIJELgVi", "mLQ656HfVPt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Thanks for the feedback and the revisions to the paper. I am satisfied to the response. Congratulations on the great work!", "Dear reviewers,\n\nThank you very much for your reviews. The authors have given concrete responses to the concerns raised by reviewers. Please acknowledge if you are satisfied with the ...
[ -1, -1, 7, -1, 8, -1, -1, -1, 9 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, 5 ]
[ "QkgqQlBRSgb", "iclr_2021_HajQFbx_yB", "iclr_2021_HajQFbx_yB", "M1EIJELgVi", "iclr_2021_HajQFbx_yB", "KIWKldKHUA", "54b_th9wRo", "mLQ656HfVPt", "iclr_2021_HajQFbx_yB" ]
iclr_2021_xpx9zj7CUlY
Randomized Automatic Differentiation
The successes of deep learning, variational inference, and many other fields have been aided by specialized implementations of reverse-mode automatic differentiation (AD) to compute gradients of mega-dimensional objectives. The AD techniques underlying these tools were designed to compute exact gradients to numerical precision, but modern machine learning models are almost always trained with stochastic gradient descent. Why spend computation and memory on exact (minibatch) gradients only to use them for stochastic optimization? We develop a general framework and approach for randomized automatic differentiation (RAD), which can allow unbiased gradient estimates to be computed with reduced memory in return for variance. We examine limitations of the general approach, and argue that we must leverage problem specific structure to realize benefits. We develop RAD techniques for a variety of simple neural network architectures, and show that for a fixed memory budget, RAD converges in fewer iterations than using a small batch size for feedforward networks, and in a similar number for recurrent networks. We also show that RAD can be applied to scientific computing, and use it to develop a low-memory stochastic gradient method for optimizing the control parameters of a linear reaction-diffusion PDE representing a fission reactor.
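The core trick — keeping the forward pass exact while randomly sampling the intermediate quantities stored for the backward pass — can be illustrated on a single linear layer. This is a minimal sketch under the assumption of a squared-error loss; the keep probability `p` is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out, p = 50, 10, 0.2     # p: fraction of the activation kept for backprop

W = rng.normal(size=(d_out, d_in))
x = rng.normal(size=d_in)
t = rng.normal(size=d_out)

def grad_exact():
    r = W @ x - t                     # forward pass (exact)
    return 2.0 * np.outer(r, x)       # d/dW of ||W x - t||^2

def grad_rad():
    r = W @ x - t                     # forward pass is still exact
    mask = rng.random(d_in) < p       # store only ~p of the activation entries
    x_hat = np.where(mask, x / p, 0)  # rescale so E[x_hat] = x
    return 2.0 * np.outer(r, x_hat)   # unbiased gradient with ~p of the memory

est = np.mean([grad_rad() for _ in range(20000)], axis=0)
rel_err = np.linalg.norm(est - grad_exact()) / np.linalg.norm(grad_exact())
```

Averaging many such estimates drives `rel_err` toward zero, which is exactly the unbiasedness the paper trades memory for variance with; in SGD one would use a single sample per step.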
oral-presentations
The reviewers agree that this is an interesting and original paper that will be of interest to the ICLR community, and is likely to lead to follow up work.
train
[ "9F1OFkeE1IC", "ag4Yp4_sjfV", "vl4e1-zxkE7", "KlX8xJOsRUm", "TFGYPwz-YU2", "n-3X8UCas0", "WyRNOOR5eb0", "GTR28SvhpUT", "0JXhCpAKD2O", "wMCL09Fa7FE", "2EI6ztnbJkq" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: \n\nThe paper proposes to subsample the computational graph to obtain an unbiased gradient estimator with less memory requirement, in the same spirit as minibatching reducing the memory of the full-batch gradient descent. The authors propose to do so by modifying the forward pass of a linear layer to only...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_xpx9zj7CUlY", "iclr_2021_xpx9zj7CUlY", "iclr_2021_xpx9zj7CUlY", "TFGYPwz-YU2", "0JXhCpAKD2O", "9F1OFkeE1IC", "2EI6ztnbJkq", "wMCL09Fa7FE", "ag4Yp4_sjfV", "iclr_2021_xpx9zj7CUlY", "iclr_2021_xpx9zj7CUlY" ]
iclr_2021_UuchYL8wSZo
Learning Generalizable Visual Representations via Interactive Gameplay
A growing body of research suggests that embodied gameplay, prevalent not just in human cultures but across a variety of animal species including turtles and ravens, is critical in developing the neural flexibility for creative problem solving, decision making, and socialization. Comparatively little is known regarding the impact of embodied gameplay upon artificial agents. While recent work has produced agents proficient in abstract games, these environments are far removed from the real world and thus these agents can provide little insight into the advantages of embodied play. Hiding games, such as hide-and-seek, played universally, provide a rich ground for studying the impact of embodied gameplay on representation learning in the context of perspective taking, secret keeping, and false belief understanding. Here we are the first to show that embodied adversarial reinforcement learning agents playing Cache, a variant of hide-and-seek, in a high-fidelity, interactive environment, learn generalizable representations of their observations encoding information such as object permanence, free space, and containment. Moving closer to biologically motivated learning strategies, our agents' representations, enhanced by intentionality and memory, are developed through interaction and play. These results serve as a model for studying how facets of vision develop through interaction, provide an experimental framework for assessing what is learned by artificial agents, and demonstrate the value of moving from large, static, datasets towards experiential, interactive, representation learning.
oral-presentations
Motivated by the importance of gameplay in the development of critical skills for humans and other biological species, this work aims to explore representation learning via gameplay in a realistic, high fidelity environment. Inspired by childhood psychology, they propose a variant of the hide-and-seek game called "Cache" built on top of AI2-THOR, where one agent must place an object in a room such that another agent cannot find it, and demonstrate that the adversarial nature of the game helps the agents learn useful representations of the environment. They examine the difference in representations learned via such a dynamic, interactive adversarial gameplay approach, versus other, more passive approaches involving static images. The paper is well written and motivated, and easy to follow. All reviewers agree that the paper will be a great contribution to the ICLR community. I believe this is an important work, because not only does it challenge the traditional way of training many components of our systems passively (via static image recognition models), it also synthesizes ideas from various disciplines (psychology, embodiment, ML) and provides an excellent framework for future research. For these reasons I'm recommending we accept this work as an Oral presentation.
val
[ "5xhQnfa6-NN", "LWRpScftEqa", "HfNnClKReyg", "FU5Ufgif0n", "bqTlbMO4KyJ", "E1jKDSim5-", "U-LzNNFC_mH", "jRSk4ylDYyk", "Wrqc9ZeUN3J" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the rebuttal and delineating these modifications. ", "Summary\n-------\n\nThis paper examines the representations learned during adversarial gameplay, specifically a hide-and-seek game called Cache. The hiding agent must place an object in a room such that the seeker agent cannot find it. The aut...
[ -1, 8, -1, -1, -1, -1, 8, 8, 9 ]
[ -1, 4, -1, -1, -1, -1, 3, 4, 3 ]
[ "bqTlbMO4KyJ", "iclr_2021_UuchYL8wSZo", "U-LzNNFC_mH", "LWRpScftEqa", "jRSk4ylDYyk", "Wrqc9ZeUN3J", "iclr_2021_UuchYL8wSZo", "iclr_2021_UuchYL8wSZo", "iclr_2021_UuchYL8wSZo" ]
iclr_2021_KvyxFqZS_D
Global Convergence of Three-layer Neural Networks in the Mean Field Regime
In the mean field regime, neural networks are appropriately scaled so that as the width tends to infinity, the learning dynamics tends to a nonlinear and nontrivial dynamical limit, known as the mean field limit. This lends a way to study large-width neural networks via analyzing the mean field limit. Recent works have successfully applied such analysis to two-layer networks and provided global convergence guarantees. The extension to multilayer ones however has been a highly challenging puzzle, and little is known about the optimization efficiency in the mean field regime when there are more than two layers. In this work, we prove a global convergence result for unregularized feedforward three-layer networks in the mean field regime. We first develop a rigorous framework to establish the mean field limit of three-layer networks under stochastic gradient descent training. To that end, we propose the idea of a neuronal embedding, which comprises a fixed probability space that encapsulates neural networks of arbitrary sizes. The identified mean field limit is then used to prove a global convergence guarantee under suitable regularity and convergence mode assumptions, which – unlike previous works on two-layer networks – does not rely critically on convexity. Underlying the result is a universal approximation property, natural of neural networks, which importantly is shown to hold at any finite training time (not necessarily at convergence) via an algebraic topology argument.
oral-presentations
This paper provides a global convergence guarantee for feedforward three-layer networks trained with SGD in the MF regime. By introducing the novel concept of neuronal embedding of a random initialization procedure, SGD trajectories of large-width networks are shown to be well approximated by the MF limit, a continuous-time infinite-width limit (Theorem 3). Furthermore, under some additional assumptions the MF limit is shown to converge to the global optimum when the loss is convex (Theorem 8, case 1) and for a generic loss when $y=y(x)$ is a deterministic function of input $x$ (Theorem 8, case 2). The global convergence guarantee presented in this paper is based on less restrictive assumptions compared with existing studies. All the reviewers rated this paper quite positively, albeit with lower confidence, seemingly because of the mathematical density of the proofs. Although the reviewers did not manage to check every detail of the proofs, they agreed that the reasoning seems mathematically sound as far as they can tell. The authors' response adequately addressed minor concerns raised by the reviewers. I am thus glad to recommend acceptance of this paper. Pros: - Introduces the idea of a neuronal embedding, which allows establishing relation between SGD on large-width three-layer networks and its MF limit in a quantitative way with a less restrictive setting. - Provides a global convergence guarantee under the iid initialization, in the sense that if the MF limit converges it attains the global optimum. - Shows that the global convergence guarantee does not require convexity of the loss when a deterministic function is to be learned. In particular, the uniform approximation property, rather than the convexity of the loss, plays a crucial role in proving the global convergence guarantee (it allows translation of the vanishing gradient in expectation at convergence into the almost-sure vanishing gradient), which is quite an original contribution of this paper.
train
[ "TSqtGDLrVyI", "lfFdtwtTD-", "13_iQHuTFkU", "-nkVJ6kksM", "FcX6yNDNQCl", "T6WezFEzdFy", "8O73d2rOD6N", "hvymXpDkiKU" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the thoughtful review.\n\nRegarding the idea of neuronal ensemble / neuronal embedding, we are not aware of similar ideas in the context of neural networks. Perhaps there are some distantly related ideas from other fields, such as limits of combinatorial objects (e.g. graphon, hypergrapho...
[ -1, -1, -1, -1, 7, 7, 7, 9 ]
[ -1, -1, -1, -1, 3, 2, 3, 2 ]
[ "FcX6yNDNQCl", "T6WezFEzdFy", "8O73d2rOD6N", "hvymXpDkiKU", "iclr_2021_KvyxFqZS_D", "iclr_2021_KvyxFqZS_D", "iclr_2021_KvyxFqZS_D", "iclr_2021_KvyxFqZS_D" ]
iclr_2021_Mk6PZtgAgfq
Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator
Gradient estimation in models with discrete latent variables is a challenging problem, because the simplest unbiased estimators tend to have high variance. To counteract this, modern estimators either introduce bias, rely on multiple function evaluations, or use learned, input-dependent baselines. Thus, there is a need for estimators that require minimal tuning, are computationally cheap, and have low mean squared error. In this paper, we show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization without increasing the number of function evaluations. This provably reduces the mean squared error. We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
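The variance reduction can be seen in a small Monte-Carlo experiment: Rao-Blackwellization replaces each relaxed sample by its conditional expectation given the discrete outcome, which by the law of total variance cannot increase variance. The sketch below approximates that conditional mean by grouping draws by their argmax; the paper instead computes it efficiently with conditional (truncated) Gumbel draws, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(3)
logits = np.array([1.0, 0.5, -0.5])
tau = 0.5

def gumbel_softmax(logits, tau):
    g = -np.log(-np.log(rng.random(len(logits))))   # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = z - z.max()
    soft = np.exp(z) / np.exp(z).sum()              # relaxed (Concrete) sample
    return int(np.argmax(soft)), soft               # straight-through category

draws = [gumbel_softmax(logits, tau) for _ in range(50000)]
hards = np.array([h for h, _ in draws])
softs = np.array([s for _, s in draws])

var_plain = softs.var(axis=0).sum()                 # variance of one GS sample
cond_means = np.array([softs[hards == k].mean(axis=0) for k in range(3)])
rb = cond_means[hards]                              # E[soft | hard], per draw
var_rb = rb.var(axis=0).sum()                       # strictly smaller
```

Both estimators have the same mean (so gradients stay unbiased in the same sense as ST-GS), but the Rao-Blackwellized one has lower variance.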
oral-presentations
The paper presents a variance reduction technique for the Straight-Through version of the Gumbel-Softmax estimator. The technique relies on the truncated Gumbel trick of Maddison et al. I share the excitement of the reviewers about this work and I expect this technique to further influence the field.
test
[ "DVdOEf4LS7G", "c621Ks-ECY", "udGDPfEkKXb", "gA-brMwKEl3", "LrpEg-vmpw", "BTxE28VJgT", "b2Ept5Vt6qx", "Pn7sAiLk-9N", "sFYHnImdnFD", "m9Y06a9MerE", "iZrhRtLK7xP", "df7KKqW0LRD" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\n\nThis paper introduces the Rao-blackwellization technique to reduce the variance of the straight-through gumbel-softmax gradient (STGS) estimator wrt the parameters of discrete distributions. The proposed method introduces almost trivial computational costs (relative to function evaluations) and is empirically ...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_Mk6PZtgAgfq", "iclr_2021_Mk6PZtgAgfq", "m9Y06a9MerE", "DVdOEf4LS7G", "df7KKqW0LRD", "c621Ks-ECY", "c621Ks-ECY", "c621Ks-ECY", "iclr_2021_Mk6PZtgAgfq", "c621Ks-ECY", "c621Ks-ECY", "iclr_2021_Mk6PZtgAgfq" ]
iclr_2021_Ua6zuk0WRH
Rethinking Attention with Performers
We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can also be used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers.
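The positive-random-feature construction at the heart of FAVOR+ can be sketched in numpy. For simplicity this uses i.i.d. Gaussian features rather than the orthogonal features the paper advocates, and small illustrative dimensions; the key point is that attention is computed without ever materializing the n x n matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m = 64, 16, 2000           # sequence length, head dim, random features

Q = rng.normal(size=(n, d)) / np.sqrt(d)
K = rng.normal(size=(n, d)) / np.sqrt(d)
V = rng.normal(size=(n, d))

def phi(X, W):
    """Positive random features: E[phi(q) . phi(k)] = exp(q . k)."""
    return np.exp(X @ W.T - (X ** 2).sum(-1, keepdims=True) / 2) / np.sqrt(len(W))

W = rng.normal(size=(m, d))      # i.i.d. features (the paper uses orthogonal ones)
Qp, Kp = phi(Q, W), phi(K, W)

# Linear-complexity attention: O(n m d) instead of O(n^2 d).
num = Qp @ (Kp.T @ V)
den = Qp @ Kp.sum(axis=0)        # row normalizers
approx = num / den[:, None]

# Exact softmax attention, for comparison only.
A = np.exp(Q @ K.T)
exact = (A / A.sum(axis=1, keepdims=True)) @ V
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

Positivity of the features is what keeps the estimator's variance under control for the normalizers, which is the practical advantage over trigonometric random features.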
oral-presentations
This is a solid paper that proposes a new method for approximating softmax attention in transformer architectures that scales linearly with the size of the sequence. Even though linear architectures have been proposed before using a similar idea (Katharopoulos et al., 2020), this paper provides a better solution along with theoretical analysis and makes a rigorous empirical comparison against other methods. All reviewers agree that this is a strong paper that should be accepted. I suggest citing the recent paper https://arxiv.org/abs/2011.04006 (Long Range Arena, mentioned in the discussion) which provides further comparisons on long-range benchmarks, including the method presented in this paper and Katharopoulos et al. (2020), along with a detailed discussion of the differences between the two methods.
train
[ "hHGju8kG9_0", "Xf1RsiQztjt", "9KYwY7gvMga", "UfTkeNE_oeN", "xdZsdn2Ncy", "uCBXWTkPFVX", "3FqTKLZrdFy", "46aOVjIyrlZ", "Z4mBGPLwtu1", "5uCgwwLmJ9", "XezMg9DYOkZ", "PaQjUjJCTJ0", "MAyJ4XGUF6K", "TRWTizOqxrD" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This is a solid paper that presents a computationally less expensive, unbiased, low-variance estimator of the Transformer architecture. \n\n**Strengths**:\n1. The authors provide mathematical guarantees for the suggested estimator.\n2. The estimator (called FAVOR+) seems a more scalable replacement for regular att...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "iclr_2021_Ua6zuk0WRH", "9KYwY7gvMga", "xdZsdn2Ncy", "TRWTizOqxrD", "46aOVjIyrlZ", "3FqTKLZrdFy", "PaQjUjJCTJ0", "XezMg9DYOkZ", "5uCgwwLmJ9", "MAyJ4XGUF6K", "hHGju8kG9_0", "iclr_2021_Ua6zuk0WRH", "iclr_2021_Ua6zuk0WRH", "iclr_2021_Ua6zuk0WRH" ]
iclr_2021_XSLF1XFq5h
Getting a CLUE: A Method for Explaining Uncertainty Estimates
Both uncertainty estimation and interpretability are important factors for trustworthy machine learning systems. However, there is little work at the intersection of these two areas. We address this gap by proposing a novel method for interpreting uncertainty estimates from differentiable probabilistic models, like Bayesian Neural Networks (BNNs). Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold, such that a BNN becomes more confident about the input's prediction. We validate CLUE through 1) a novel framework for evaluating counterfactual explanations of uncertainty, 2) a series of ablation experiments, and 3) a user study. Our experiments show that CLUE outperforms baselines and enables practitioners to better understand which input patterns are responsible for predictive uncertainty.
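A one-dimensional toy version of the counterfactual search conveys the idea: starting from an input the classifier is uncertain about, run gradient descent on an objective trading predictive entropy against distance to the original input. In CLUE this optimization runs in the latent space of a deep generative model against a BNN's uncertainty; here both are replaced by trivial stand-ins, and `dist_weight`, `lr`, and `steps` are illustrative.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w, b = 2.0, 0.0                        # toy probabilistic classifier p(y=1|x)

def entropy(x):
    """Predictive entropy: the uncertainty the counterfactual should reduce."""
    p = sigmoid(w * x + b)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def clue_1d(x0, dist_weight=0.03, lr=0.1, steps=300):
    """Gradient descent on entropy(x) + dist_weight * |x - x0| (a toy 1-D
    stand-in for CLUE's optimization in a VAE latent space)."""
    x = x0
    for _ in range(steps):
        eps = 1e-4                     # numeric gradient of the entropy term
        g = (entropy(x + eps) - entropy(x - eps)) / (2 * eps)
        g += dist_weight * np.sign(x - x0)
        x -= lr * g
    return x

x_cf = clue_1d(0.05)                   # start near the uncertain decision boundary
```

The result is an input the model classifies confidently; with a larger distance weight the counterfactual stays pinned to the original input, mirroring the trade-off in CLUE's objective.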
oral-presentations
This paper presents a method for explaining uncertainty estimates that is conceptually interesting and practical. All reviewers are in consensus regarding the quality and significance of this manuscript.
train
[ "ij9NqCc8Mt3", "ok-XqUnTsE", "WOq0JFuz8Lt", "K_h12Besq-2", "K878HyK_sAU", "FQsHhAhzo9a", "OimcrcKGOu", "VxbNEL0ipvY", "j0koA1qPmq", "Gc2-av0LIcB", "OKsPmofcrb3", "PTmucu26qM", "YV44NUcNGD_", "3CDLfSZ_Kh", "WvR4GSB5clR", "-jdT0DsmTvL", "3BYvTXRhO0t", "wUcTgPRR7JP", "xBknESk3wi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Summary\n\nThe authors consider the problem of post-hoc explainability for decisions rendered by machine learning models. They focus on addressing uncertain model predictions, producing counterfactual data that is both likely under a generative model of the data, as well as more certain in the classification t...
[ 7, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_XSLF1XFq5h", "Gc2-av0LIcB", "iclr_2021_XSLF1XFq5h", "PTmucu26qM", "YV44NUcNGD_", "iclr_2021_XSLF1XFq5h", "VxbNEL0ipvY", "WOq0JFuz8Lt", "3CDLfSZ_Kh", "OKsPmofcrb3", "wUcTgPRR7JP", "K878HyK_sAU", "WOq0JFuz8Lt", "WvR4GSB5clR", "xBknESk3wi", "3BYvTXRhO0t", "ij9NqCc8Mt3", "ic...
iclr_2021_tW4QEInpni
When Do Curricula Work?
Inspired by human learning, researchers have proposed ordering examples during training based on their difficulty. Both curriculum learning, exposing a network to easier examples early in training, and anti-curriculum learning, showing the most difficult examples first, have been suggested as improvements to the standard i.i.d. training. In this work, we set out to investigate the relative benefits of ordered learning. We first investigate the implicit curricula resulting from architectural and optimization bias and find that samples are learned in a highly consistent order. Next, to quantify the benefit of explicit curricula, we conduct extensive experiments over thousands of orderings spanning three kinds of learning: curriculum, anti-curriculum, and random-curriculum -- in which the size of the training dataset is dynamically increased over time, but the examples are randomly ordered. We find that for standard benchmark datasets, curricula have only marginal benefits, and that randomly ordered samples perform as well or better than curricula and anti-curricula, suggesting that any benefit is entirely due to the dynamic training set size. Inspired by common use cases of curriculum learning in practice, we investigate the role of a limited training time budget and noisy data in the success of curriculum learning. Our experiments demonstrate that curriculum, but not anti-curriculum or random ordering, can indeed improve the performance either with a limited training time budget or in the presence of noisy data.
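The three kinds of orderings compared above differ only in which pool of examples a pacing function exposes at each step. A minimal sketch (the difficulty scores, the linear pacing schedule, and `start_frac` are all illustrative choices, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
difficulty = rng.random(n)        # e.g. per-example loss of a reference model
order = np.argsort(difficulty)    # ascending: easiest examples first

def pacing(t, T, n, start_frac=0.1):
    """Linear pacing: how many examples are available at step t of T."""
    frac = start_frac + (1 - start_frac) * t / T
    return max(1, int(frac * n))

def sample_batch(t, T, batch=32, mode="curriculum"):
    avail = pacing(t, T, n)
    if mode == "curriculum":
        pool = order[:avail]                   # easiest `avail` examples
    elif mode == "anti":
        pool = order[::-1][:avail]             # hardest first
    else:                                      # random-curriculum: same pacing,
        pool = rng.permutation(n)[:avail]      # but an arbitrary ordering
    return rng.choice(pool, size=min(batch, avail), replace=False)

b = sample_batch(t=10, T=100, mode="curriculum")
```

The paper's finding is that most of the benefit comes from the `avail` schedule (the dynamic training-set size), not from which `pool` is used.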
oral-presentations
This nice paper gives a better understanding of how Curriculum Learning (CL) affects image classification. In particular, it gives insight into cases such as noisy training data and limited training time. It shows that examples can be rated by difficulty to some extent, in that the order in which examples are learned seems to be consistent across runs. The paper is thorough and well-written.
train
[ "tTVUDbmJwtR", "thGXJ_h8wSr", "0oL_OcX1lhj", "pEljTAxMDOy", "bdOEuP_lVvu", "223Yf19Th9K" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Summary: The paper conducts a large-scale evaluation of the impact of curriculum learning (CL) in image classification. The paper progresses nicely through a sequence of well-thought research questions and experiments, with the key findings stated up front. In particular, the notion of \"implicit curriculum\" is s...
[ 8, 8, -1, -1, -1, 7 ]
[ 3, 4, -1, -1, -1, 3 ]
[ "iclr_2021_tW4QEInpni", "iclr_2021_tW4QEInpni", "223Yf19Th9K", "thGXJ_h8wSr", "tTVUDbmJwtR", "iclr_2021_tW4QEInpni" ]
iclr_2021_B7v4QMR6Z9w
Federated Learning Based on Dynamic Regularization
We propose a novel federated learning method for distributively training neural network models, where the server orchestrates cooperation between a subset of randomly chosen devices in each round. We view the Federated Learning problem primarily from a communication perspective and allow more device level computations to save transmission costs. We point out a fundamental dilemma, in that the minima of the local-device level empirical loss are inconsistent with those of the global empirical loss. Different from recent prior works, that either attempt inexact minimization or utilize devices for parallelizing gradient computation, we propose a dynamic regularizer for each device at each round, so that in the limit the global and device solutions are aligned. We demonstrate both through empirical results on real and synthetic data as well as analytical results that our scheme leads to efficient training, in both convex and non-convex settings, while being fully agnostic to device heterogeneity and robust to a large number of devices, partial participation and unbalanced data.
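The alignment claim can be checked on a toy problem. Below is a full-participation caricature of the dynamic-regularization idea with closed-form local solves on quadratic device losses; the state updates mirror the published algorithm only loosely, and `alpha`, the losses, and the round count are illustrative. With heterogeneous quadratics f_k(w) = 0.5||w - c_k||², the global optimum is the mean of the c_k, and the dynamic regularizer drives every device's regularized minimizer to that point.

```python
import numpy as np

rng = np.random.default_rng(6)
K, dim, alpha = 5, 3, 0.1

# Toy heterogeneous devices: f_k(w) = 0.5 * ||w - c_k||^2.
C = rng.normal(size=(K, dim))

theta = np.zeros(dim)            # server model
lam = np.zeros((K, dim))         # per-device dynamic-regularizer state
h = np.zeros(dim)                # server correction state

for _ in range(200):
    local = np.empty((K, dim))
    for k in range(K):
        # Device k minimizes f_k(w) - <lam_k, w> + (alpha/2)||w - theta||^2;
        # for a quadratic f_k this has the closed-form minimizer below.
        w = (C[k] + lam[k] + alpha * theta) / (1 + alpha)
        lam[k] -= alpha * (w - theta)
        local[k] = w
    h -= alpha * (local.mean(axis=0) - theta)
    theta = local.mean(axis=0) - h / alpha
```

At the fixed point, lam_k equals the local gradient at the global optimum, so each device's modified objective is minimized exactly where the global loss is, which is the paper's consistency argument in miniature.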
oral-presentations
The paper introduces a new federated learning algorithm that ensures that the objective function optimized on each device is asymptotically consistent with the global loss function. Both theoretical analysis and empirical results, evaluating communication efficiency, demonstrate the advantages of the proposed FedDyn method over the baselines. All the reviewers recommend accepting the paper. To summarize the discussion: - R1 mentioned a very recent (NeurIPS 20) related paper and asks several questions. I believe that the authors nicely answered the questions and discussed the relation to the previous paper in detail. - R2 mentioned that the paper focuses solely on minimizing communication costs, ignoring costs of local computations. The authors argued that the local computation costs are comparable to those of the baselines, and, in general, communication costs are the main source of computation energy costs (pointing to previous work), and, thus, are a natural objective to optimize. I believe that this adequately addressed this (and other) reviewer's concerns and the reviewer kept their score unchanged. - R3 had several concerns, which according to the reviewer were addressed in the rebuttal (they increased the score). - R4 points out several limitations of the method and theoretical analysis and believes that the rebuttal did not quite address the concerns. Nevertheless, remains positive about the paper, and believes that the shortcomings can be addressed in follow-up work. We share the reviewers' sentiment: it is a very nice and interesting paper, and should be accepted.
train
[ "bEnICX0_67", "7rz-0_IIlK_", "rjFhSoSae8n", "7R58tBseMG-", "n9nT8-csv9h", "XNMf-sTQq5", "1q_GFctD1P", "B4K-WCiox1", "0LRCJNB3Td", "60eo2LhtLd8", "7mg_KXIP5tO", "fPLEIL8huWh", "4J9CEtjWwj", "4SKnM8yBATM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper proposes FedDyn, a dynamic regularization method for federated learning. In FedDyn, the objective function of each active device in each round is dynamically updated, so that the device optimum is asymptotically consistent with the global optimum. The authors consider both the convex and non...
[ 7, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_B7v4QMR6Z9w", "7R58tBseMG-", "iclr_2021_B7v4QMR6Z9w", "n9nT8-csv9h", "XNMf-sTQq5", "4J9CEtjWwj", "B4K-WCiox1", "0LRCJNB3Td", "rjFhSoSae8n", "bEnICX0_67", "fPLEIL8huWh", "4SKnM8yBATM", "iclr_2021_B7v4QMR6Z9w", "iclr_2021_B7v4QMR6Z9w" ]
iclr_2021_iAX0l6Cz8ub
Geometry-aware Instance-reweighted Adversarial Training
In adversarial machine learning, there has been a common belief that robustness and accuracy hurt each other. The belief was challenged by recent studies where we can maintain the robustness and improve the accuracy. However, the other direction, whether we can keep the accuracy and improve the robustness, is conceptually and practically more interesting, since robust accuracy should be lower than standard accuracy for any model. In this paper, we show this direction is also promising. Firstly, we find that even over-parameterized deep networks may still have insufficient model capacity, because adversarial training has an overwhelming smoothing effect. Secondly, given limited model capacity, we argue that adversarial data should have unequal importance: geometrically speaking, a natural data point closer to/farther from the class boundary is less/more robust, and the corresponding adversarial data point should be assigned with larger/smaller weight. Finally, to implement the idea, we propose geometry-aware instance-reweighted adversarial training, where the weights are based on how difficult it is to attack a natural data point. Experiments show that our proposal boosts the robustness of standard adversarial training; combining two directions, we improve both robustness and accuracy of standard adversarial training.
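The reweighting itself reduces to a small function of how many PGD steps are needed to flip each natural example's prediction. The tanh-based form and the bias `lam` below are assumed for illustration in the spirit of the paper's geometry-aware weighting, not necessarily its exact function.

```python
import numpy as np

def gairat_weight(kappa, K, lam=-1.0):
    """Geometry-aware weight from the least number of PGD steps kappa in [0, K]
    needed to flip a natural example's prediction. Smaller kappa means the point
    is closer to the decision boundary (less robust) and gets a larger weight.
    The tanh form and bias `lam` are illustrative assumptions."""
    w = (1 + np.tanh(lam + 5 * (1 - 2 * kappa / K))) / 2
    return w / w.sum()               # normalize within the minibatch

kappa = np.array([0, 2, 5, 8, 10])   # per-example attack difficulty, K = 10 steps
w = gairat_weight(kappa, K=10)       # monotonically decreasing in kappa
```

These weights then multiply the per-example adversarial losses, so easy-to-attack (boundary-adjacent) points dominate the training signal.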
oral-presentations
The paper proposes an insightful study on the robustness and accuracy of the model. It has been hard to maintain robustness and accuracy simultaneously. A few works tried to improve accuracy while maintaining the robustness by investigating more data, early stopping, or dropout. From a different perspective, this paper aims to improve robustness while maintaining accuracy. There are some interesting findings in this paper, which could deepen our understanding of adversarial training. For example, the authors conducted experiments with different sizes of the network in standard training and adversarial training. The capacity of an overparameterized network can be sufficient for standard training, but it may be far from enough to fit adversarial data, because of the smoothing effect. Hence, given the limited model capacity, adversarial data points have unequal importance. Though this technique is simple and widely studied in traditional ML, it is an interesting attempt in adversarial ML and the authors provide extensive experimental results to justify its effectiveness. In the authors' responses, the concerns raised by the reviewers have been well addressed. The new version becomes more complete by including more results on different PGD steps and the insights on designing the weight-assignment function. Also, the authors gave an interesting discussion on how large a model is sufficient for adversarial training, though this is still somewhat of an open question. I would thus like to recommend the acceptance of this paper.
train
[ "d4JvKmvAuO6", "ezpQKKCDTnu", "Pu4Ywikmhmi", "I1wOPq5cb9", "4dOEDE3sFyG", "SQCuoJoRmyZ", "nb6k5Ow6Ug", "eFk4zWtOKes", "Pji8Q4aybzs" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe paper focused on the sample importance in the adversarial training. The authors firstly revealed that over-parameterized deep models on natural data may have insufficient model capacity for adversarial data, because the training loss is hard to zero for adversarial training. Then, the authors argued ...
[ 7, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2021_iAX0l6Cz8ub", "4dOEDE3sFyG", "nb6k5Ow6Ug", "Pji8Q4aybzs", "eFk4zWtOKes", "d4JvKmvAuO6", "iclr_2021_iAX0l6Cz8ub", "iclr_2021_iAX0l6Cz8ub", "iclr_2021_iAX0l6Cz8ub" ]
iclr_2021_gvxJzw8kW4b
Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity
While deep neural networks show great performance in fitting the training distribution, improving the networks' generalization to the test distribution and robustness to input perturbations still remains a challenge. Although a number of mixup-based augmentation strategies have been proposed to partially address them, it remains unclear how to best utilize the supervisory signal within each input data for mixup from the optimization perspective. We propose a new perspective on batch mixup and formulate the optimal construction of a batch of mixup data as maximizing the data saliency measure of each individual mixup example while encouraging supermodular diversity among the constructed mixup data. This leads to a novel discrete optimization problem minimizing the difference between submodular functions. We also propose an efficient modular-approximation-based iterative submodular minimization algorithm for efficient mixup computation per minibatch, suitable for minibatch-based neural network training. Our experiments show the proposed method achieves state-of-the-art generalization, calibration, and weakly supervised localization results compared to other mixup methods. The source code is available at https://github.com/snu-mllab/Co-Mixup.
oral-presentations
This paper proposes a type of Mixup-style data augmentation that works at the batch level rather than simply between pairs of examples. Each generated example accumulates salient regions from potentially many other examples while ensuring diversity across the generated examples. This is achieved through a 4-part objective with submodular and supermodular components. The paper demonstrates the method using extensive experiments, including generalization performance on CIFAR-100, Tiny ImageNet, ImageNet and GoogleCommands. It also explores weakly supervised object localization, expected calibration error, and robustness to random replacement and Gaussian noise. Reviewer 1 thought the approach was interesting but raised some concerns with clarity, thoroughness of experiments, and whether the approach was too computationally expensive to be used in practice. I was surprised myself that a discussion of the trade-off between computational expense and accuracy gain was absent from the submission. The authors responded to the review, adding a comparison to the BP algorithm (Narasimhan and Bilmes, 2005). The empirical result seems to back up the claim that the proposed algorithm finds a better solution with less variance. It also appears to run much faster. The authors also responded to minor issues raised with respect to clarity and organization. In their response, the authors provided considerable detail with respect to running time and time complexity, and showed that models trained with Co-Mixup are practical, though they do come with a significant added cost. The authors added the requested comparisons to non-mixup baselines and enhanced the ablation study. In my opinion, this is a comprehensive and satisfying response, and the paper has improved in many respects since submission. The review from R2 was largely positive, though limited in its scope. They also expressed concerns with training time (addressed in the response to R1). 
Clearly the approach extends to an arbitrary number (m) of images; this was explicit in the paper/formulation and clarified by the authors. I have some concern that R2 may have skimmed the paper if they missed this point. Reviewer 4 thought the paper was interesting and asked several clarifying questions. They expressed concern with the significance of the reported gains. Similar to R1, they asked about non-mixup baselines (VAT specifically). This was addressed in the response to R1. The authors responded to the clarifying questions and addressed the issue of significance. Like the reviewers, I think that this is an intriguing, fresh, and elegant way to perform data augmentation. I appreciate that it has been evaluated not just from the pure generalization perspective, but also from other angles like robustness and calibration. There are still some outstanding concerns regarding the computational effort required to use Co-Mixup, so it would be nice to see this addressed in follow-up work.
train
[ "ULaoVjmDSY", "WLSpecz00uh", "gHda-Qzh_Kr", "tCLeJGdNXi", "85QECisqxpd", "ZeZ48nagiAd", "7VJHN3HDGk6", "wta5tdLlwLN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper proposes a new batch mixup method, co-mixup, to improve the networks’ generalization performance and robustness. It formulates the construction of a batch of mixup data by maximizing the data saliency measure of each individual mixup data and the supermodular diversity among the constructed mixup data. ...
[ 7, 7, -1, -1, -1, -1, -1, 7 ]
[ 3, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_gvxJzw8kW4b", "iclr_2021_gvxJzw8kW4b", "WLSpecz00uh", "ULaoVjmDSY", "iclr_2021_gvxJzw8kW4b", "wta5tdLlwLN", "WLSpecz00uh", "iclr_2021_gvxJzw8kW4b" ]
iclr_2021_DktZb97_Fx
SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
In this paper, we cast fair machine learning as invariant machine learning. We first formulate a version of individual fairness that enforces invariance on certain sensitive sets. We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently. Our theoretical results guarantee the proposed approach trains certifiably fair ML models. Finally, in the experimental studies we demonstrate improved fairness metrics in comparison to several recent fair training procedures on three ML tasks that are susceptible to algorithmic bias.
oral-presentations
All of the reviewers agree that this paper is well-written, and provides sound theoretical analyses and comprehensive empirical evaluations. Overall, this paper makes a useful contribution in the direction of individual fairness. The authors have also addressed the concerns raised by the reviewers in their response.
train
[ "E4IYIspxME", "IDfhOltfzDz", "Vg93pUEUzr", "CnnUgP4y8Cd", "9GhW0TK-e2B", "Qf5x2FfdJOG", "MgPJj7aCLsf", "InJSxPoP9ev", "6BnYRVb4cC6" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for their time and thoughtful feedback. We address all questions individually and we have revised the paper accordingly. The main changes are: Subsection 2.2 comparing DIF and the original definition of IF; Appendix B.1 providing details regarding the fair metric learning in the experiments....
[ -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "iclr_2021_DktZb97_Fx", "Qf5x2FfdJOG", "MgPJj7aCLsf", "InJSxPoP9ev", "6BnYRVb4cC6", "iclr_2021_DktZb97_Fx", "iclr_2021_DktZb97_Fx", "iclr_2021_DktZb97_Fx", "iclr_2021_DktZb97_Fx" ]
iclr_2021_rsf1z-JSj87
End-to-end Adversarial Text-to-Speech
Modern text-to-speech synthesis pipelines typically involve multiple processing stages, each of which is designed or learnt independently from the rest. In this work, we take on the challenging task of learning to synthesise speech from normalised text or phonemes in an end-to-end manner, resulting in models which operate directly on character or phoneme input sequences and produce raw speech audio outputs. Our proposed generator is feed-forward and thus efficient for both training and inference, using a differentiable alignment scheme based on token length prediction. It learns to produce high fidelity audio through a combination of adversarial feedback and prediction losses constraining the generated audio to roughly match the ground truth in terms of its total duration and mel-spectrogram. To allow the model to capture temporal variation in the generated audio, we employ soft dynamic time warping in the spectrogram-based prediction loss. The resulting model achieves a mean opinion score exceeding 4 on a 5 point scale, which is comparable to the state-of-the-art models relying on multi-stage training and additional supervision.
oral-presentations
This paper investigates a speech synthesis approach that directly generates raw audio from text or phoneme inputs in an end-to-end fashion. The approach first maps the input texts/phonemes into a representation sequence that is aligned with the output at a lower sampling frequency by a differentiable aligner, and then upsamples the representation sequence to the full audio frequency with a decoder. A number of techniques, including adversarial training and soft DTW, are applied to improve training. The experimental results are good. The reviewers raised several concerns, most of which were cleared by the authors' rebuttal. After the rebuttal and discussion, all reviewers are supportive of accepting the paper.
train
[ "DZIoaLtr8C8", "tBRNyZXRi6h", "LzCDSOH3QCe", "NW5i-SHulfT", "IMEl-MxSn3L", "7h227VS-rB6", "6qOADSWl3aJ", "NG05YBWPGgV", "KAiTv-9pk4", "X8I_4qbVlcI", "u2k3RT7lCkw", "8i0Rsbolbje" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the interest! We just ran a CPU benchmark; we get 8.5x realtime inference on a 6 core Intel Xeon E5-1650 v4. We've updated Appendix A again. (It's potentially an imperfect benchmark as it's running on a desktop machine sharing the CPU with other background processes etc., but we're not easily able to be...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "tBRNyZXRi6h", "LzCDSOH3QCe", "IMEl-MxSn3L", "6qOADSWl3aJ", "KAiTv-9pk4", "X8I_4qbVlcI", "u2k3RT7lCkw", "8i0Rsbolbje", "iclr_2021_rsf1z-JSj87", "iclr_2021_rsf1z-JSj87", "iclr_2021_rsf1z-JSj87", "iclr_2021_rsf1z-JSj87" ]
iclr_2021_mSAKhLYLSsl
Dataset Condensation with Gradient Matching
As the state-of-the-art machine learning methods in many fields rely on larger datasets, storing datasets and training models on them becomes significantly more expensive. This paper proposes a training-set synthesis technique for data-efficient learning, called Dataset Condensation, that learns to condense a large dataset into a small set of informative synthetic samples for training deep neural networks from scratch. We formulate this goal as a gradient matching problem between the gradients of deep neural network weights that are trained on the original and our synthetic data. We rigorously evaluate its performance on several computer vision benchmarks and demonstrate that it significantly outperforms the state-of-the-art methods. Finally, we explore the use of our method in continual learning and neural architecture search and report promising gains when limited memory and computation are available.
oral-presentations
The paper introduces a novel dataset condensation technique that generates synthetic samples (images) by matching model gradients with those obtained on the original input samples (images). The authors also show that these synthetic images are not architecture dependent and can be used to train different deep neural networks. The approach is validated on several smaller datasets like MNIST, SVHN and CIFAR10. This work is well-motivated and the methodological contributions convincing. All reviewers were enthusiastic and indicated that there were no flaws in this work. The rebuttal clarified outstanding questions and made the paper stronger.
train
[ "uh5Ay9WH5Pm", "Qpiz_1WvBbe", "DYO5IP6Lhkt", "3xjW198akhJ", "-L07ABoO8Dn", "zPngfLS4lwn", "iMUcnCQVNhK", "vc97k1L7vqO", "FZLA3xuMTAG", "qzeM2RB6QaW" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "##########################################################################\n\nSummary:\n \nThe paper proposes a novel dataset condensation technique that generates synthetic samples by matching model gradients with those obtained on the original input dataset. This technique is investigated empirically on several ...
[ 8, 8, -1, -1, -1, 9, -1, -1, -1, -1 ]
[ 3, 3, -1, -1, -1, 4, -1, -1, -1, -1 ]
[ "iclr_2021_mSAKhLYLSsl", "iclr_2021_mSAKhLYLSsl", "3xjW198akhJ", "-L07ABoO8Dn", "FZLA3xuMTAG", "iclr_2021_mSAKhLYLSsl", "Qpiz_1WvBbe", "iclr_2021_mSAKhLYLSsl", "zPngfLS4lwn", "uh5Ay9WH5Pm" ]
iclr_2021_PKubaeJkw3
Rethinking Architecture Selection in Differentiable NAS
Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods for its search efficiency and simplicity, accomplished by jointly optimizing the model weight and architecture parameters in a weight-sharing supernet via gradient-based algorithms. At the end of the search phase, the operations with the largest architecture parameters will be selected to form the final architecture, with the implicit assumption that the values of architecture parameters reflect the operation strength. While much has been discussed about the supernet's optimization, the architecture selection process has received little attention. We provide empirical and theoretical analysis to show that the magnitude of architecture parameters does not necessarily indicate how much the operation contributes to the supernet's performance. We propose an alternative perturbation-based architecture selection that directly measures each operation's influence on the supernet. We re-evaluate several differentiable NAS methods with the proposed architecture selection and find that it is able to extract significantly improved architectures from the underlying supernets consistently. Furthermore, we find that several failure modes of DARTS can be greatly alleviated with the proposed selection method, indicating that much of the poor generalization observed in DARTS can be attributed to the failure of magnitude-based architecture selection rather than entirely the optimization of its supernet.
oral-presentations
This paper proposes a new selection paradigm for choosing the optimal architecture in neural architecture search (NAS), in particular for methods that involve a one-shot model and deploy gradient-based search. Basically, the paper examines the magnitude-based (argmax) selection closely and finds that the magnitudes of the architecture weights are misleading. Instead, the paper proposes a much more intuitive finalization step: on each edge, pick the operation whose removal causes the largest drop in the supernet's validation performance. All reviewers agreed that the idea is interesting, the paper is well written, and the results are interesting. In addition, the author response satisfactorily addressed most of the points raised by the reviewers, and most of them increased their original scores. Therefore, I recommend acceptance.
train
[ "AI2NA_QQ9wA", "NWa9BUnzvbE", "6ZHOnpVyIU_", "rKa26ZHA1g2", "an9SPs1yLa7", "Cg4EmuVqSj", "3xgLeRLiu97", "ivz1XrF4Lo_", "yI6eyyKE88X", "1dzb6CrzpQT", "TmGdhFK7C5i", "AiN-zqwqo3X", "-oKQr-yp6Xw", "UwfUB2i6kic", "PIkEMho0dRl" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "# post rebuttal\n\nI have no further concerns and increase the rate to accept.\n\n# Summary\n\nThis paper identifies an interesting phenomenon that on DARTS based method, the operation can not be simply chosen based on the maximum value of trained weights. The authors propose a new selection paradigm. For each ope...
[ 7, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 10, 7 ]
[ 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_PKubaeJkw3", "an9SPs1yLa7", "AiN-zqwqo3X", "iclr_2021_PKubaeJkw3", "Cg4EmuVqSj", "rKa26ZHA1g2", "PIkEMho0dRl", "rKa26ZHA1g2", "AI2NA_QQ9wA", "AI2NA_QQ9wA", "AI2NA_QQ9wA", "UwfUB2i6kic", "iclr_2021_PKubaeJkw3", "iclr_2021_PKubaeJkw3", "iclr_2021_PKubaeJkw3" ]
iclr_2021_jWkw45-9AbL
A Distributional Approach to Controlled Text Generation
We propose a Distributional Approach for addressing Controlled Text Generation from pre-trained Language Models (LMs). This approach permits specifying, in a single formal framework, both “pointwise” and “distributional” constraints over the target LM — to our knowledge, the first model with such generality — while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation, we then train a target controlled autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints showing the advantages of our approach over a set of baselines, in terms of obtaining a controlled LM balancing constraint satisfaction with divergence from the pre-trained LM. We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of bias in language models. Through an ablation study, we show the effectiveness of our adaptive technique for obtaining faster convergence. Code available at https://github.com/naver/gdc
oral-presentations
The paper studies the problem of controlling the text generated by pre-trained language models. The problem is timely and important. The paper frames it as constraint satisfaction over a probability distribution, where both pointwise and distributional constraints can be imposed. The proposed algorithm, Generation with Distributional Control (GDC), is elegant and is an interesting new addition to this line of work. Overall, the paper brings forth new ideas and could have impact.
train
[ "BnstUZEm_ix", "LTPuqAeR6WN", "aIGqpnOoNR", "VJeZz7s8Clw", "SV4CQDHKuCk", "S1W1JC1HVMv", "u-5rrhMrpgv", "XkRa8Wwy-7e" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors addressed my concern so I increased my score to 8. \n\n-----------------------\n\nThis is a very interesting idea for controlling a pretrained model for some sort desired criteria. The authors argue that existing approaches for this have taken a pointwise view for instance using REINFORCE to optimize f...
[ 8, -1, -1, -1, -1, -1, 7, 7 ]
[ 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_jWkw45-9AbL", "SV4CQDHKuCk", "u-5rrhMrpgv", "iclr_2021_jWkw45-9AbL", "XkRa8Wwy-7e", "BnstUZEm_ix", "iclr_2021_jWkw45-9AbL", "iclr_2021_jWkw45-9AbL" ]
iclr_2021_QIRlze3I6hX
Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency
At the heart of many robotics problems is the challenge of learning correspondences across domains. For instance, imitation learning requires obtaining correspondence between humans and robots; sim-to-real requires correspondence between physics simulators and real hardware; transfer learning requires correspondences between different robot environments. In this paper, we propose to learn correspondence across such domains, emphasizing differing modalities (vision and internal state), physics parameters (mass and friction), and morphologies (number of limbs). Importantly, correspondences are learned using unpaired and randomly collected data from the two domains. We propose dynamics cycles that align dynamic robotic behavior across two domains using a cycle-consistency constraint. Once this correspondence is found, we can directly transfer the policy trained on one domain to the other, without needing any additional fine-tuning on the second domain. We perform experiments across a variety of problem domains, both in simulation and on real robots. Our framework is able to align uncalibrated monocular video of a real robot arm to dynamic state-action trajectories of a simulated arm without paired data. Video demonstrations of our results are available at: https://sites.google.com/view/cycledynamics .
oral-presentations
The paper proposes a new solution for cross-domain correspondence in control, which combines GANs and cycle-consistency and separates shifts in observation space from shifts in action space. The paper targets unpaired data/simulations and discovers a state alignment by enforcing that the domains are mappable. The paper was received well by the reviewers, who pointed out several strengths: a strong contribution on a fundamental problem, an interesting formulation, and a well written and well positioned paper. These strengths compensate for minor weaknesses, in particular the fact that transfer has been tested between two different simulated environments. The reviewers unanimously suggested acceptance; the AC concurs.
train
[ "iVyW-XigpSw", "mX5OJKaw44h", "9F1xvUVH4nf", "gdwioXcmUVL", "T3JNujmaKIR", "DWkYYswD1B", "Kn4TbMgKF86", "eW_l5PSdrrB", "iz4Am8vcayz" ]
[ "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the positive assessment and helpful feedback. We will address each of your comments in the following. \n\n***Q1: “In Table 5, Ours (E) and Ours (J) have shown the performance gaps. It would be great if the reasons are properly mentioned.”***\n\n**A1:** Estimating the end-effector (Ours (E...
[ -1, -1, -1, -1, -1, 10, 7, 8, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, 2 ]
[ "iz4Am8vcayz", "eW_l5PSdrrB", "DWkYYswD1B", "Kn4TbMgKF86", "iclr_2021_QIRlze3I6hX", "iclr_2021_QIRlze3I6hX", "iclr_2021_QIRlze3I6hX", "iclr_2021_QIRlze3I6hX", "iclr_2021_QIRlze3I6hX" ]
iclr_2021_0-uUGPbIjD
Human-Level Performance in No-Press Diplomacy via Equilibrium Search
Prior AI breakthroughs in complex games have focused on either the purely adversarial or purely cooperative settings. In contrast, Diplomacy is a game of shifting alliances that involves both cooperation and competition. For this reason, Diplomacy has proven to be a formidable research challenge. In this paper we describe an agent for the no-press variant of Diplomacy that combines supervised learning on human data with one-step lookahead search via regret minimization. Regret minimization techniques have been behind previous AI successes in adversarial games, most notably poker, but have not previously been shown to be successful in large-scale games involving cooperation. We show that our agent greatly exceeds the performance of past no-press Diplomacy bots, is unexploitable by expert humans, and ranks in the top 2% of human players when playing anonymous games on a popular Diplomacy website.
oral-presentations
All reviewers agree that this paper is very solid work that presents great progress on no-press Diplomacy. The method and the presented experiments are of very good quality, and the work merits presentation at ICLR.
train
[ "bYgXlsJt5Si", "2S-vjHkeZK9", "iuzBRmU9fjO", "81MflVZg9nH", "Kv_zceu8OU-", "pouBLQZ1Dy-", "TUbsMp2xBYQ", "KrtKBqN3sEs", "50VoyyYf7eF", "C3MzikM6LY_", "8tuFtum3gBa" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a combination of imitation learning and search applied to the multiplayer, simultaneous-move game of no-press Diplomacy. While both techniques have been used before, even in concert, there are some domain-specific challenges: more than two players, simultaneous moves, and a very large branching...
[ 8, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2021_0-uUGPbIjD", "iuzBRmU9fjO", "Kv_zceu8OU-", "KrtKBqN3sEs", "bYgXlsJt5Si", "50VoyyYf7eF", "C3MzikM6LY_", "8tuFtum3gBa", "iclr_2021_0-uUGPbIjD", "iclr_2021_0-uUGPbIjD", "iclr_2021_0-uUGPbIjD" ]
iclr_2021_Ysuv-WOFeKR
Parrot: Data-Driven Behavioral Priors for Reinforcement Learning
Reinforcement learning provides a general framework for flexible decision making and control, but requires extensive data collection for each new task that an agent needs to learn. In other machine learning fields, such as natural language processing or computer vision, pre-training on large, previously collected datasets to bootstrap learning for new tasks has emerged as a powerful paradigm to reduce data requirements when learning a new task. In this paper, we ask the following question: how can we enable similarly useful pre-training for RL agents? We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials from a wide range of previously seen tasks, and we show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors. We demonstrate the effectiveness of our approach in challenging robotic manipulation domains involving image observations and sparse reward functions, where our method outperforms prior works by a substantial margin. Additional materials can be found on our project website: https://sites.google.com/view/parrot-rl
oral-presentations
This paper presents an elegant and effective approach to knowledge transfer in RL by learning a policy prior from expert data. The paper is well structured and well written. All the reviewers were favourable about this paper, with its simple idea and convincing results. They thought the paper would benefit from more discussion of related work and more experimental results, but it remains a strong paper.
train
[ "JH6k-S5Y1S7", "4mzZbAIUvDr", "KcHHYi040WW", "apQCNSxtQHI", "JRTOgi8tGtN", "OvzraQ2pKq2", "hfsmHmo1_lt", "zOJKR1pCmSj", "-kH7ItW5fy", "7gNNmIwfbx", "_C-tNq6Iatp", "5j2TU7N-_R", "XjRrcnvD2W6" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you again for your review. We would like to know if our updated paper (with all major changes detailed in our response below) addresses your concerns, and if you have any additional feedback that you would like to provide. ", "Thank you for your reply!\n\nWe have updated the paper addressing the remaining ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "5j2TU7N-_R", "KcHHYi040WW", "apQCNSxtQHI", "7gNNmIwfbx", "iclr_2021_Ysuv-WOFeKR", "5j2TU7N-_R", "XjRrcnvD2W6", "_C-tNq6Iatp", "5j2TU7N-_R", "iclr_2021_Ysuv-WOFeKR", "iclr_2021_Ysuv-WOFeKR", "iclr_2021_Ysuv-WOFeKR", "iclr_2021_Ysuv-WOFeKR" ]
iclr_2021_-2FCwDKRREu
Learning Invariant Representations for Reinforcement Learning without Reconstruction
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel reconstruction. Our goal is to learn representations that provide for effective downstream control and invariance to task-irrelevant details. Bisimulation metrics quantify behavioral similarity between states in continuous MDPs, and we propose using them to learn robust latent representations that encode only the task-relevant information from observations. Our method trains encoders such that distances in latent space equal bisimulation distances in state space. We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks, where the background is replaced with moving distractors and natural videos, while achieving SOTA performance. We also test a first-person highway driving task where our method learns invariance to clouds, weather, and time of day. Finally, we provide generalization results drawn from properties of bisimulation metrics, and links to causal inference.
oral-presentations
This paper proposed using the state bisimulation metric to learn invariant representations for reinforcement learning. The method is generic, effective, and is supported by both theoretical and experimental results. All reviewers and I think this is a strong contribution to the area.
train
[ "INN1n437foY", "3JuP65TjmUp", "EbpHlkPvleu", "6PJ0jwCtAZ2", "5slSBViBBY4", "RbVptpSUFSy" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their assessment and constructive suggestions. We address the two suggestions below -- we are not sure what the reviewer had in mind as an experiment for the second suggestion, and are happy to discuss possible tasks.\n\n\n1. \"the paper would definitely benefit if this claim was backed u...
[ -1, -1, -1, 9, 7, 7 ]
[ -1, -1, -1, 5, 3, 4 ]
[ "5slSBViBBY4", "6PJ0jwCtAZ2", "RbVptpSUFSy", "iclr_2021_-2FCwDKRREu", "iclr_2021_-2FCwDKRREu", "iclr_2021_-2FCwDKRREu" ]
iclr_2021_FGqiDsBUKL0
Do 2D GANs Know 3D Shape? Unsupervised 3D Shape Reconstruction from 2D Image GANs
Natural images are projections of 3D objects on a 2D image plane. While state-of-the-art 2D generative models like GANs show unprecedented quality in modeling the natural image manifold, it is unclear whether they implicitly capture the underlying 3D object structures. And if so, how could we exploit such knowledge to recover the 3D shapes of objects in the images? To answer these questions, in this work, we present the first attempt to directly mine 3D geometric cues from an off-the-shelf 2D GAN that is trained on RGB images only. Through our investigation, we found that such a pre-trained GAN indeed contains rich 3D knowledge and thus can be used to recover 3D shape from a single 2D image in an unsupervised manner. The core of our framework is an iterative strategy that explores and exploits diverse viewpoint and lighting variations in the GAN image manifold. The framework does not require 2D keypoint or 3D annotations, or strong assumptions on object shapes (e.g. shapes are symmetric), yet it successfully recovers 3D shapes with high precision for human faces, cats, cars, and buildings. The recovered 3D shapes immediately allow high-quality image editing like relighting and object rotation. We quantitatively demonstrate the effectiveness of our approach compared to previous methods in both 3D shape reconstruction and face rotation. Our code is available at https://github.com/XingangPan/GAN2Shape.
oral-presentations
The paper proposes to use pre-trained 2D (i.e., image) GANs as a mechanism for recovering 3D shape from a single 2D image. The work demonstrates impressive results on not only human and cat faces, but also cars and buildings. The method is demonstrated with qualitative and quantitative results on multiple datasets and tasks. The reviewers were persuaded by the novelty and "neatness" of the idea (and the AC is in agreement) as well as the results. At submission time, there were some concerns with experimental details. For instance, there was a question of how carefully the settings have to be tuned (always a concern with unsupervised methods), as well as an overarching concern about the initialization and whether the method will work on less clean data. The reviewers (and the AC) seem to think that these have been sorted out in discussion. All three reviewers were in favor of acceptance, and the area chair is inclined to agree with the reviewers. In particular, the AC finds the work interesting and compelling. While an updated version was already uploaded during the discussion, the AC encourages the authors to double-check all the questions from the reviewers and include the answers from the discussion in the camera-ready (even if these results are in the appendix).
train
[ "LpyyHa7LkLP", "W5g80NDOPK2", "sD-BEJfe2mX", "mIkFLu6fndW", "a1IcJdGxHD2", "oS_dMnHoC0I" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Pros:\n1. This is the first work that attempts to reconstruct 3D shape from 2D image in an unsupervised way using GANs. The idea is neat: Use networks to predict four 3D parameters and use GAN to generate / synthesize the images corresponding to a set of parameters. Then these synthesized images can be used as pse...
[ 7, 8, -1, -1, -1, 8 ]
[ 3, 5, -1, -1, -1, 4 ]
[ "iclr_2021_FGqiDsBUKL0", "iclr_2021_FGqiDsBUKL0", "oS_dMnHoC0I", "LpyyHa7LkLP", "W5g80NDOPK2", "iclr_2021_FGqiDsBUKL0" ]
iclr_2021_RmB-88r9dL
VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments
Motivated by the rising abundance of observational data with continuous treatments, we investigate the problem of estimating the average dose-response curve (ADRF). Available parametric methods are limited in their model space, and previous attempts at leveraging neural networks to enhance model expressiveness relied on partitioning the continuous treatment into blocks and using separate heads for each block; however, in practice this produces discontinuous ADRFs. Therefore, the question of how to adapt the structure and training of neural networks to estimate ADRFs remains open. This paper makes two important contributions. First, we propose a novel varying coefficient neural network (VCNet) that improves model expressiveness while preserving continuity of the estimated ADRF. Second, to improve finite sample performance, we generalize targeted regularization to obtain a doubly robust estimator of the whole ADRF curve.
oral-presentations
The paper designs a new way (in some sense a new perspective) on how neural networks can be used to model intervention variables when the goal is to estimate ADRF. Basically, the idea is to emphasize the importance of the intervention variable by ensuring that it appears not just in every layer but also in every neuron of a neural network. Reviewers mostly agree, to varying degrees, that this is a good paper, although there are some criticisms, e.g., on assuming away the confounders. However, I believe the authors address the criticisms of R4 satisfactorily. Overall I find the idea new and interesting and the experimental results strong, hence I happily recommend accepting the paper. I do have a few quibbles myself and some comments that may help the authors to further improve the paper. 1. Re: the design that models each parameter as a spline. This is equivalent to introducing additional parameters (coefficients for the spline basis) and adding a fixed linear layer (the spline basis itself) to every layer of the neural network. t is taken as an input in all layers, which makes sure that the model prioritizes learning the impact of t. 2. If you use a B-spline basis (that comes with kernels of bounded support), then the proposed method is very similar to stratifying the data according to different bins of t, and then fitting a separate model for each t. The only difference is that the different bins are now smooth kernels and they overlap somewhat. As a side note, the authors should clearly write out how they are choosing the knots to specify the basis functions. Otherwise the paper will not be reproducible. 3. I am not sure how this method would compare to naive (non-deep) baselines. Maybe this was considered in a prior work? If not, then I tend to side with Reviewer 4 that the evaluations are mostly ablation studies and they are not really comparing to representative work in this domain. 
Given that there is a large body of work on this from before deep learning took over, it is important to somehow compare with the right baselines.
train
[ "i--0nedKm8", "m_vlrbwhNud", "pS9LA8rkuQq", "pma_LM2AcV", "PATRXd8IJR", "YG8UY8wP5D8", "ajLRo_Qtacg", "uu55CNxwRwk" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We uploaded revisions of the submission in which\n\n* We reorganize the introduction to make the logic flow more smoothly.\n* We add a sentence to explain the motivation of using loss (1) in section 3.3 of the revision.\n* We change the index of assumptions in Theorem 2.\n* We also checked the proofs thoroughly an...
[ -1, -1, -1, -1, -1, 6, 9, 5 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2021_RmB-88r9dL", "ajLRo_Qtacg", "YG8UY8wP5D8", "uu55CNxwRwk", "uu55CNxwRwk", "iclr_2021_RmB-88r9dL", "iclr_2021_RmB-88r9dL", "iclr_2021_RmB-88r9dL" ]
iclr_2021_dYeAHXnpWJ4
Rethinking the Role of Gradient-based Attribution Methods for Model Interpretability
Current methods for the interpretability of discriminative deep neural networks commonly rely on the model's input-gradients, i.e., the gradients of the output logits w.r.t. the inputs. The common assumption is that these input-gradients contain information regarding pθ(y∣x), the model's discriminative capabilities, thus justifying their use for interpretability. However, in this work, we show that these input-gradients can be arbitrarily manipulated as a consequence of the shift-invariance of softmax without changing the discriminative function. This leaves an open question: given that input-gradients can be arbitrary, why are they highly structured and explanatory in standard models? In this work, we re-interpret the logits of standard softmax-based classifiers as unnormalized log-densities of the data distribution and show that input-gradients can be viewed as gradients of a class-conditional generative model pθ(x∣y) implicit in the discriminative model. This leads us to hypothesize that the highly structured and explanatory nature of input-gradients may be due to the alignment of this class-conditional model pθ(x∣y) with that of the ground truth data distribution pdata(x∣y). We test this hypothesis by studying the effect of density alignment on gradient explanations. To achieve this density alignment, we use an algorithm called score-matching, and propose novel approximations to this algorithm to enable training large-scale models. Our experiments show that improving the alignment of the implicit density model with the data distribution enhances gradient structure and explanatory power while reducing this alignment has the opposite effect. This also leads us to conjecture that unintended density alignment in standard neural network training may explain the highly structured nature of input-gradients observed in practice. 
Overall, our finding that input-gradients capture information regarding an implicit generative model implies that we need to re-think their use for interpreting discriminative models.
oral-presentations
This paper studies why input gradients can give meaningful feature attributions even though they can be changed arbitrarily without affecting the prediction. The claim in this paper is that "the learned logits in fact represent class conditional probabilities and hence input gradients give meaningful feature attributions". The main concern is that this claim is verified very indirectly, by adding a regularization term that promotes logits learning class conditional probabilities and observing that input gradient quality also improves. Nevertheless, there are interesting insights in the paper and the questions it asks are very timely and important, and overall, it could have a significant impact on further research in this area.
test
[ "GI9b5A7VpHn", "5fUBK8HxVB-", "FdspQnv3YV-", "Wu7slEWXxPS", "A1xbABkDKxV", "QH7_DLBsQu5", "ulugIe9JGR-", "hdg7NK6NAtf", "G38mtIAdZW", "fNoXBKSdn65", "nCVa9uRo6C", "YzK1EGqm6t", "kuQNscVqnM", "xawDWIvKxJ7", "J7UZKqL-w1Z" ]
[ "author", "official_reviewer", "author", "author", "author", "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Great questions! \n\n1) A small clarification: our justification for the density modelling view arises from these two papers [1,2]. From [1], we also have a similar interpretation for binary logistic classifiers, and by extension, to logistic multi-way classifiers. For a binary classifier, the singular \"logit\" e...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 9, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "5fUBK8HxVB-", "fNoXBKSdn65", "iclr_2021_dYeAHXnpWJ4", "ulugIe9JGR-", "QH7_DLBsQu5", "iclr_2021_dYeAHXnpWJ4", "YzK1EGqm6t", "ulugIe9JGR-", "kuQNscVqnM", "xawDWIvKxJ7", "J7UZKqL-w1Z", "iclr_2021_dYeAHXnpWJ4", "iclr_2021_dYeAHXnpWJ4", "iclr_2021_dYeAHXnpWJ4", "iclr_2021_dYeAHXnpWJ4" ]
iclr_2021_uAX8q61EVRu
Neural Synthesis of Binaural Speech From Mono Audio
We present a neural rendering approach for binaural sound synthesis that can produce realistic and spatially accurate binaural sound in realtime. The network takes, as input, a single-channel audio source and synthesizes, as output, two-channel binaural sound, conditioned on the relative position and orientation of the listener with respect to the source. We investigate deficiencies of the l2-loss on raw waveforms in a theoretical analysis and introduce an improved loss that overcomes these limitations. In an empirical evaluation, we establish that our approach is the first to generate spatially accurate waveform outputs (as measured by real recordings) and outperforms existing approaches by a considerable margin, both quantitatively and in a perceptual study. Dataset and code are available online.
oral-presentations
+ Interesting method for binaural synthesis from moving mono-audio + Nice insight into why l2 isn't the best loss for binaural reconstructions. + Interesting architectural choice with nice results. + Nicely motivated and clearly presented idea -- especially after addressing the reviewers' comments. I agree with the idea of a title change. While I think it's implied that the source is probably a single source, making it explicit would make it clearer for those not working on a closely related topic. Hence, "Neural Synthesis of Binaural Speech from Mono Audio" as suggested in the review process sounds quite reasonable.
train
[ "jFPGOnYqkNZ", "c82CtsGWWA", "u0eU0D8fIpU", "uJ7O63lS9cd", "iDWnHjNAjXL", "k9m3FBiwASM", "fej9KWIFddv", "ryXAREdPT5A", "2AQw9y_hfHA", "S6D8sIwAI98" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper is about a method for synthesizing binaural audio from a mono recording of a single speaker's speech.\n\nFirst, I think the title is too general. The paper does not attempt to convert all possible sounds, but it tries to convert a single speaker's monaural speech signal to binaural audio where the speake...
[ 7, 7, -1, -1, -1, -1, -1, -1, -1, 9 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_uAX8q61EVRu", "iclr_2021_uAX8q61EVRu", "c82CtsGWWA", "c82CtsGWWA", "jFPGOnYqkNZ", "S6D8sIwAI98", "c82CtsGWWA", "jFPGOnYqkNZ", "S6D8sIwAI98", "iclr_2021_uAX8q61EVRu" ]
iclr_2021_a-xFK8Ymz5J
DiffWave: A Versatile Diffusion Model for Audio Synthesis
In this work, we propose DiffWave, a versatile diffusion probabilistic model for conditional and unconditional waveform generation. The model is non-autoregressive, and converts the white noise signal into structured waveform through a Markov chain with a constant number of steps at synthesis. It is efficiently trained by optimizing a variant of variational bound on the data likelihood. DiffWave produces high-fidelity audios in different waveform generation tasks, including neural vocoding conditioned on mel spectrogram, class-conditional generation, and unconditional generation. We demonstrate that DiffWave matches a strong WaveNet vocoder in terms of speech quality (MOS: 4.44 versus 4.43), while synthesizing orders of magnitude faster. In particular, it significantly outperforms autoregressive and GAN-based waveform models in the challenging unconditional generation task in terms of audio quality and sample diversity from various automatic and human evaluations.
oral-presentations
I join all five reviewers in recommending acceptance. There was some discussion about a comparison with WaveGrad (Chen et al., 2020), a contemporaneous work that explores a similar modelling approach for speech generation. While I agree that such a comparison is a useful addition to the manuscript, I do not think it is reasonable to request anything beyond an acknowledgement and citation of the work from the authors as a condition for acceptance. Further discussion and comparison experiments could be valuable, but I believe that should not factor into the final decision. My position is most similar to Reviewer 4's in this sense. The current version of the manuscript briefly discusses the differences between WaveGrad and DiffWave, which I think is more than sufficient. (As an aside, another difference potentially worth discussing is that the "noise schedule" for WaveGrad can be adapted at inference time, enabling a trade-off between inference speed and sample quality, which I believe is not possible for DiffWave in its current form.) There was some debate about the weakly conditioned generation results; I believe they are a nice addition to the paper, although it would have been suitable for publication without them. They certainly do not detract from it, and might inspire further work in weakly conditioned audio generation (e.g. music). There were also concerns about the clarity of writing, which I believe the authors have addressed in the current version of the manuscript. This work stands out because it applies a relatively fresh idea in generative modelling to a domain of great practical importance, which has long been dominated by traditional likelihood-based models, with compelling results. While this implies a limited degree of technical novelty, I do not think that is grounds for rejection, and in fact I would argue that making new ideas work well for practical problems is just as important.
train
[ "QE_MMv3RmP", "-5fWY8xRqAu", "WI1XAfMgif", "hfUl0fOg8lK", "vqQYGZ0Z7QX", "u_hE0-YuxC", "LTDmB3vxjc0", "Gta7978HCx", "UaanPP5rnWY", "8-3DEPdJmj" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you so much for your review. We will address your comments in the following.\n\n“Lack of novelty in terms of insights/approach”\n* This work focuses on real-world speech/audio synthesis application, and provides successful recipes and non-trivial insights for building state-of-the-art generative models for r...
[ -1, -1, -1, -1, -1, 7, 9, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, 5, 4, 3, 5, 3 ]
[ "u_hE0-YuxC", "Gta7978HCx", "LTDmB3vxjc0", "UaanPP5rnWY", "8-3DEPdJmj", "iclr_2021_a-xFK8Ymz5J", "iclr_2021_a-xFK8Ymz5J", "iclr_2021_a-xFK8Ymz5J", "iclr_2021_a-xFK8Ymz5J", "iclr_2021_a-xFK8Ymz5J" ]
iclr_2021_YicbFdNTTy
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
oral-presentations
This paper has generated a lot of great discussion and it presents a very different way of doing image recognition at scale compared to current state-of-the-art practices. All reviewers rated this paper as an accept. This work is interesting enough that in my view it really deserves further exposure and discussion, and an oral presentation at ICLR would be a good way to achieve that.
val
[ "63xb_a5FFYI", "cetpBnGruQV", "kcMMAfgPISU", "sMQvMqbdpAN", "uRmLjgdl59a", "NE_4f4MZaJ", "VdJrUQ8iO97", "PQDN7qziDoI", "P6z_zIeGz5", "nB9g0C3-wf", "0KXWGeotHPg", "q2sODce1ROY", "eXA6UlaiTPz", "MRG7Aw4c273", "-MByllXmCk6", "d3cPgAwsikF" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed review, below we address the key concerns.\n\n> it is not clear how the simple ViT can be extended for the vision tasks which require pixel-level predictions, e.g. image segmentation, depth prediction etc, or 3D vision tasks while being still computationally tractable.\n\nWe do not see ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "eXA6UlaiTPz", "MRG7Aw4c273", "-MByllXmCk6", "d3cPgAwsikF", "iclr_2021_YicbFdNTTy", "nB9g0C3-wf", "q2sODce1ROY", "0KXWGeotHPg", "nB9g0C3-wf", "iclr_2021_YicbFdNTTy", "iclr_2021_YicbFdNTTy", "iclr_2021_YicbFdNTTy", "iclr_2021_YicbFdNTTy", "iclr_2021_YicbFdNTTy", "iclr_2021_YicbFdNTTy", ...
iclr_2021_RGJbergVIoO
On the mapping between Hopfield networks and Restricted Boltzmann Machines
Hopfield networks (HNs) and Restricted Boltzmann Machines (RBMs) are two important models at the interface of statistical physics, machine learning, and neuroscience. Recently, there has been interest in the relationship between HNs and RBMs, due to their similarity under the statistical mechanics formalism. An exact mapping between HNs and RBMs has been previously noted for the special case of orthogonal (“uncorrelated”) encoded patterns. We present here an exact mapping in the case of correlated pattern HNs, which are more broadly applicable to existing datasets. Specifically, we show that any HN with N binary variables and p<N potentially correlated binary patterns can be transformed into an RBM with N binary visible variables and p gaussian hidden variables. We outline the conditions under which the reverse mapping exists, and conduct experiments on the MNIST dataset which suggest the mapping provides a useful initialization to the RBM weights. We discuss extensions, the potential importance of this correspondence for the training of RBMs, and for understanding the performance of feature extraction methods which utilize RBMs.
oral-presentations
Two knowledgeable reviewers were positive (7) and very positive (10) about this paper, considering it an important contribution that illuminates previously unknown aspects of two classic models, namely RBMs and Hopfield networks. They considered the work very well developed, theoretically interesting and also of potential practical relevance. A third reviewer initially expressed some reservations in regard to the inverse map from RBMs to HNs and the experiments. Following the authors' responses, which the reviewer found detailed and informative, he/she significantly raised his/her score to 7, also emphasizing that he/she hoped to see the paper accepted. With the unanimously positive feedback, I am recommending the paper to be accepted.
train
[ "mkAx8mWK_S", "b0xM0xCALxM", "lNDKmpzthAl", "PRsCB4-5r62", "qWqEjzOXhGX", "KrLbX7asabB", "qHPewiE3o1", "lSYPM0mjLSJ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "# Summary\n\nThis paper shows a relationship between the project rule weights of a Hopfield network (HN) and the interaction weights in a corresponding restricted Boltzmann machine (RBM). The mapping from HN to RBM is facilitated by realising that the partition function of BN can be seen as the partition function ...
[ 7, -1, -1, -1, -1, -1, 7, 10 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_RGJbergVIoO", "mkAx8mWK_S", "mkAx8mWK_S", "mkAx8mWK_S", "qHPewiE3o1", "lSYPM0mjLSJ", "iclr_2021_RGJbergVIoO", "iclr_2021_RGJbergVIoO" ]
iclr_2021_cPZOyoDloxl
SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments
Every living organism struggles against disruptive environmental forces to carve out and maintain an orderly niche. We propose that such a struggle to achieve and preserve order might offer a principle for the emergence of useful behaviors in artificial agents. We formalize this idea into an unsupervised reinforcement learning method called surprise minimizing reinforcement learning (SMiRL). SMiRL alternates between learning a density model to evaluate the surprise of a stimulus, and improving the policy to seek more predictable stimuli. The policy seeks out stable and repeatable situations that counteract the environment's prevailing sources of entropy. This might include avoiding other hostile agents, or finding a stable, balanced pose for a bipedal robot in the face of disturbance forces. We demonstrate that our surprise minimizing agents can successfully play Tetris, Doom, control a humanoid to avoid falls, and navigate to escape enemies in a maze without any task-specific reward supervision. We further show that SMiRL can be used together with standard task rewards to accelerate reward-driven learning.
oral-presentations
The paper is studying a new intrinsic motivation RL setup in a dynamic environment, where the authors minimize the state entropy instead of the common approach of maximizing it. The resulting idea is simple, yet it is surprising that it works so well. All reviewers appreciated the new problem formulation of using dynamic environments and found the idea very promising. In addition, they identified the following strengths of the paper: - The experiments are exhaustive, identifying many domains where the approach can be applied - The presented results are compelling - The paper is well written - The paper introduces a new problem setup that has not been studied before I agree with the reviewers that this paper contains many interesting contributions and therefore recommend acceptance.
test
[ "J0VfFbrJ7Vv", "K9NW0mzoYle", "4cOszD_oTbw", "McceVXFRRC6", "2NyNfK3bcb_", "7q8UvEHGZ49", "YvdlzuI9Lm6", "Xlq8cHVusYM", "ilMUnVnMmW7", "GiX41yiyTXH", "TqpDjZl5Atg", "p5A297hPqzi", "nOOu2IVTFHb", "20AbzRwvZpN", "ZbXu1S7QD8a", "RDN2aWp-GKz", "JIZVrl10Sve", "Pjv9HSXro7E", "XssQGoHC2...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_re...
[ "This work proposes an RL approach SMiRL that is able to learn effective policies in unstable environments without the need for external reward. The idea at a high-level is almost the opposite of intrinsic motivation RL approaches, which encourage novelty-seeking behaviors. The proposed method instead aims to minim...
[ 7, 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_cPZOyoDloxl", "iclr_2021_cPZOyoDloxl", "YvdlzuI9Lm6", "iclr_2021_cPZOyoDloxl", "ilMUnVnMmW7", "20AbzRwvZpN", "TqpDjZl5Atg", "TqpDjZl5Atg", "GiX41yiyTXH", "p5A297hPqzi", "ZbXu1S7QD8a", "Pjv9HSXro7E", "20AbzRwvZpN", "RDN2aWp-GKz", "McceVXFRRC6", "K9NW0mzoYle", "XssQGoHC2vD",...
iclr_2021_0XXpJ4OtjW
Evolving Reinforcement Learning Algorithms
We propose a method for meta-learning reinforcement learning algorithms by searching over the space of computational graphs which compute the loss function for a value-based model-free RL agent to optimize. The learned algorithms are domain-agnostic and can generalize to new environments not seen during training. Our method can both learn from scratch and bootstrap off known existing algorithms, like DQN, enabling interpretable modifications which improve performance. Learning from scratch on simple classical control and gridworld tasks, our method rediscovers the temporal-difference (TD) algorithm. Bootstrapped from DQN, we highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games. The analysis of the learned algorithm behavior shows resemblance to recently proposed RL algorithms that address overestimation in value-based methods.
oral-presentations
This paper proposes a meta-learning algorithm for reinforcement learning. The work is very interesting for the RL community, it is clear and well-organized. The work is impressive and it contributes to the state-of-the-art.
train
[ "pOsNS1q0nid", "OlD2K7LiKp", "HT1f5SywkPp", "zbODvtUhCi", "1XCJWKC70of", "EeNCiI0tlSC", "Ccu-qH-9ZJE" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their time and helpful feedback. We summarize the changes we made in response to the reviewers’ feedback, including newly added dataset with learning algorithms:\n\n-- We have released the data for the top 500 algorithms for both learning from scratch and learning from bootstrapping expe...
[ -1, -1, -1, -1, 9, 6, 7 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_0XXpJ4OtjW", "1XCJWKC70of", "EeNCiI0tlSC", "Ccu-qH-9ZJE", "iclr_2021_0XXpJ4OtjW", "iclr_2021_0XXpJ4OtjW", "iclr_2021_0XXpJ4OtjW" ]
iclr_2021_wb3wxCObbRT
Growing Efficient Deep Networks by Structured Continuous Sparsification
We develop an approach to growing deep network architectures over the course of training, driven by a principled combination of accuracy and sparsity objectives. Unlike existing pruning or architecture search techniques that operate on full-sized models or supernet architectures, our method can start from a small, simple seed architecture and dynamically grow and prune both layers and filters. By combining a continuous relaxation of discrete network structure optimization with a scheme for sampling sparse subnetworks, we produce compact, pruned networks, while also drastically reducing the computational expense of training. For example, we achieve 49.7% inference FLOPs and 47.4% training FLOPs savings compared to a baseline ResNet-50 on ImageNet, while maintaining 75.2% top-1 validation accuracy --- all without any dedicated fine-tuning stage. Experiments across CIFAR, ImageNet, PASCAL VOC, and Penn Treebank, with convolutional networks for image classification and semantic segmentation, and recurrent networks for language modeling, demonstrate that we both train faster and produce more efficient networks than competing architecture pruning or search methods.
oral-presentations
The paper proposes a method to grow deep network architectures over the course of training. The work has been extremely well received and has clear novelty and solid experimental validation.
val
[ "FGN0ZqCdH9L", "rLzfGjbIcJb", "VeMByQnUyJ0", "Z6d22S415ki", "cmrvnWMdM-6", "F6SD7K4HDLz", "_w30sv3yDH", "_HS7uArSE12", "mShRrPsufgN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\n This paper proposes a NAS-type work for growing a small network to a large network by adding channels and layers gradually. The authors apply the method to both CNN and LSTM networks. \n\nStrong points:\n\n 1. This paper is well-written and shows good results.\n\n 2. The proposed algorithm is sound...
[ 7, 7, -1, -1, -1, -1, -1, 7, 8 ]
[ 4, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_wb3wxCObbRT", "iclr_2021_wb3wxCObbRT", "iclr_2021_wb3wxCObbRT", "mShRrPsufgN", "_HS7uArSE12", "FGN0ZqCdH9L", "rLzfGjbIcJb", "iclr_2021_wb3wxCObbRT", "iclr_2021_wb3wxCObbRT" ]
iclr_2021_gZ9hCDWe6ke
Deformable DETR: Deformable Transformers for End-to-End Object Detection
DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10× less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.
oral-presentations
Accept. The paper proposes Deformable DETR that builds on DETR and solves the slow convergence and limited spatial resolution problem while getting impressive results. The authors should think about comparing with other linear attention mechanisms to show the applicability of the method.
train
[ "LZ9NJonoMnL", "mbHIFotxQy6", "OyGK4BOH5Eb", "Df3QO93G1QK", "8-6OJAZ3Ea9", "uc0foSNbd8u", "jrgsufuYPUz", "WoBeM7mA97O", "x1VT5henOtF", "8In3YKsvOX7", "C2eDl-piNdR", "fya7XUpEcg" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper proposes Deformable DETR with multi-scale deformable attention modules to solve the problems of DETR: slow convergence and limited feature spatial resolution. In particular, it has faster convergence and achieves better performance(especially on small objects) than DETR.\n\nReasons for scor...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 9 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2021_gZ9hCDWe6ke", "OyGK4BOH5Eb", "8In3YKsvOX7", "C2eDl-piNdR", "LZ9NJonoMnL", "fya7XUpEcg", "WoBeM7mA97O", "x1VT5henOtF", "iclr_2021_gZ9hCDWe6ke", "iclr_2021_gZ9hCDWe6ke", "iclr_2021_gZ9hCDWe6ke", "iclr_2021_gZ9hCDWe6ke" ]
iclr_2021_NzTU59SYbNq
EigenGame: PCA as a Nash Equilibrium
We present a novel view on principal components analysis as a competitive game in which each approximate eigenvector is controlled by a player whose goal is to maximize their own utility function. We analyze the properties of this PCA game and the behavior of its gradient based updates. The resulting algorithm---which combines elements from Oja's rule with a generalized Gram-Schmidt orthogonalization---is naturally decentralized and hence parallelizable through message passing. We demonstrate the scalability of the algorithm with experiments on large image datasets and neural network activations. We discuss how this new view of PCA as a differentiable game can lead to further algorithmic developments and insights.
oral-presentations
This paper introduces a novel game-theoretic view on PCA which yields an algorithm (EigenGame; Algorithm 2) that allows evaluation of singular vectors in a decentralized manner. The proposed algorithm is significant in its scalability, as demonstrated in the experiment on a large-scale dataset (ResNet-200 activations). This paper is generally clearly written, and in particular Section 2 provides easy-to-follow reasoning leading to the proposed game-theoretic reformulation of PCA. I felt that the later sections are a bit condensed, including the figures. In the authors' response, major concerns raised by the reviewers have been appropriately addressed. I would thus recommend acceptance of this paper. What I found particularly interesting in their game-theoretic reformulation is that in the utility functions shown in (6) the orthogonality constraints $\hat{u}_j^\top\hat{u}_i=0$ have been removed and replaced with soft constraints, represented as regularizer terms encouraging orthogonality. Although several alternative forms for the regularizers would be possible, it is this particular form that allows an efficient gradient-ascent algorithm which does not require explicit orthonormalization or matrix inversion and is straightforwardly parallelizable. Pros: - Provides a novel game-theoretic reformulation of PCA. - Proposes a sequential algorithm and a decentralized algorithm for PCA on the basis of the game-theoretic reformulation. - Provides a theoretical guarantee for the global convergence of the sequential algorithm. - Demonstrates that the proposed decentralized algorithm is scalable to large-scale problems. Cons: - The latter statement of Theorem 4.1 requires conditions on the initialization, which are hard to satisfy in high-dimensional settings. - Significance of the proposed game-theoretic formulation in the context of game theory does not seem to be well explored.
train
[ "2xNCZziLo-A", "_nesUfS7b4S", "51WFtZOkOl3", "zo44P6Z1GAW", "A1uAEx6l9Ik", "pJQjKaI7dMr", "vJDxhXqJ7Vs", "yk0hInYYKw1", "tb-KwwqKvD", "D-teiZsPsXI" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Principal component analysis (PCA) is a well-known dimensionality reduction and feature learning technique in the literature that leads to uncorrelated features. While there are a plethora of algorithms for PCA, along with accompanying analysis, a majority of these works have been developed from an optimization pe...
[ 8, -1, -1, -1, -1, -1, -1, -1, 7, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_NzTU59SYbNq", "51WFtZOkOl3", "vJDxhXqJ7Vs", "iclr_2021_NzTU59SYbNq", "tb-KwwqKvD", "D-teiZsPsXI", "yk0hInYYKw1", "2xNCZziLo-A", "iclr_2021_NzTU59SYbNq", "iclr_2021_NzTU59SYbNq" ]
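The EigenGame record above casts each approximate eigenvector as a player maximizing its own utility, with soft orthogonality penalties replacing hard constraints. A minimal numerical sketch of that idea follows; the utility form matches the penalized objective the meta-review quotes, but the sequential one-player-at-a-time schedule, learning rate, and step count are illustrative assumptions, not the paper's exact Algorithm 2.

```python
import numpy as np

def eigengame_sequential(M, k, steps=2000, lr=0.1):
    """Sequential EigenGame-style sketch: player i ascends
    u_i^T M u_i - sum_{j<i} (u_j^T M u_i)^2 / (u_j^T M u_j)
    by Riemannian gradient steps on the unit sphere."""
    d = M.shape[0]
    rng = np.random.default_rng(0)
    players = []
    for i in range(k):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        for _ in range(steps):
            grad = 2.0 * M @ u
            for v in players:  # soft-orthogonality penalty terms
                grad -= 2.0 * (v @ M @ u) / (v @ M @ v) * (M @ v)
            grad -= (grad @ u) * u          # project onto the tangent space
            u = u + lr * grad
            u /= np.linalg.norm(u)          # stay on the sphere
        players.append(u)
    return np.stack(players, axis=1)
```

On a matrix with a known spectrum, the recovered columns align with the top eigenvectors up to sign, with no explicit orthonormalization or matrix inversion.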
iclr_2021_kmG8vRXTFv
Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting
Forecasting complex dynamical phenomena in settings where only partial knowledge of their dynamics is available is a prevalent problem across various scientific fields. While purely data-driven approaches are arguably insufficient in this context, standard physical modeling based approaches tend to be over-simplistic, inducing non-negligible errors. In this work, we introduce the APHYNITY framework, a principled approach for augmenting incomplete physical dynamics described by differential equations with deep data-driven models. It consists in decomposing the dynamics into two components: a physical component accounting for the dynamics for which we have some prior knowledge, and a data-driven component accounting for errors of the physical model. The learning problem is carefully formulated such that the physical model explains as much of the data as possible, while the data-driven component only describes information that cannot be captured by the physical model, no more, no less. This not only provides the existence and uniqueness for this decomposition, but also ensures interpretability and benefits generalization. Experiments made on three important use cases, each representative of a different family of phenomena, i.e. reaction-diffusion equations, wave equations and the non-linear damped pendulum, show that APHYNITY can efficiently leverage approximate physical models to accurately forecast the evolution of the system and correctly identify relevant physical parameters.
oral-presentations
The authors propose a method for modeling dynamical systems that balances theoretically derived models, which may be grounded in domain knowledge but subject to overly strict assumptions, with neural networks that can pick up the slack. All reviewers were enthusiastic about this work, appreciating its balance of mathematical rigor and experimental assessment. One concern was that this paper follows on decades of related work, which was difficult to adequately summarize. However, changes made throughout the discussion phase did address these concerns.
train
[ "vux6qUtjvH", "K5lcCPHJ6yF", "wzTKRFbNf0C", "l6Shv-ZZVHO", "crS6izLUsk", "zXGKFMEfeX2", "foEvNOmCUso", "urtQibqEUUa", "kLGd-T6of1" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers, we would like to thank you for your careful reading of our work and your detailed comments. We have really appreciated your constructive feedback that has hopefully helped to improve the quality of our submission. We have thus updated it (modifications in blue in the text), including:\n\n- a more c...
[ -1, -1, -1, -1, -1, -1, 8, 7, 9 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_kmG8vRXTFv", "urtQibqEUUa", "urtQibqEUUa", "urtQibqEUUa", "foEvNOmCUso", "kLGd-T6of1", "iclr_2021_kmG8vRXTFv", "iclr_2021_kmG8vRXTFv", "iclr_2021_kmG8vRXTFv" ]
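The APHYNITY decomposition described in the abstract above (physical component explains as much as possible; data-driven residual is minimal in norm) can be illustrated on a toy system where the minimization reduces to least squares. The dynamics, the linear "physical" prior, and the closed-form solve below are illustrative assumptions; the paper itself uses neural networks and differential-equation solvers.

```python
import numpy as np

# True dynamics: dx/dt = -a*x + b*sin(x); the physical prior F_p(x) = -theta*x
# only knows the linear term. Following the APHYNITY principle, theta is chosen
# to minimise the norm of the data-driven residual F_a = dx/dt - F_p(x), which
# here is an ordinary least-squares problem with a closed-form solution.
a_true, b_true = 1.5, 0.4
x = np.linspace(-2.0, 2.0, 200)
dxdt = -a_true * x + b_true * np.sin(x)      # observed derivatives

theta = -(x @ dxdt) / (x @ x)                # argmin_theta ||dxdt + theta*x||^2
residual = dxdt + theta * x                  # data-driven component F_a
# The decomposition is exact, and its residual norm is minimal by construction;
# note theta also absorbs the projection of b*sin(x) onto the span of x.
```

The last comment is the interesting part: uniqueness of the decomposition does not mean the physical parameter equals the ground-truth one whenever the prior is misspecified; it equals the norm-minimizing fit.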
iclr_2021_Mos9F9kDwkz
Complex Query Answering with Neural Link Predictors
Neural link predictors are immensely useful for identifying missing edges in large scale Knowledge Graphs. However, it is still not clear how to use these models for answering more complex queries that arise in a number of domains, such as queries using logical conjunctions (∧), disjunctions (∨) and existential quantifiers (∃), while accounting for missing edges. In this work, we propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs. We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural link predictor. We then analyse two solutions to the optimisation problem, including gradient-based and combinatorial search. In our experiments, the proposed approach produces more accurate results than state-of-the-art methods --- black-box neural models trained on millions of generated queries --- without the need of training on a large and diverse set of complex queries. Using orders of magnitude less training data, we obtain relative improvements ranging from 8% up to 40% in Hits@3 across different knowledge graphs containing factual information. Finally, we demonstrate that it is possible to explain the outcome of our model in terms of the intermediate solutions identified for each of the complex query atoms. All our source code and datasets are available online, at https://github.com/uclnlp/cqd.
oral-presentations
The reviewers unanimously agree that this paper is a strong accept; it makes important progress in developing our ability to query relational embedding models.
val
[ "w_unFNZGW2i", "ByZB__4hwC", "F9dOacxrtgx", "CGldkk2zr0f", "Cr9qg30TVsq", "te9aaMuCX8T", "mSaYMOeIjKi", "BSnnFjCdUL-", "Fm1mCn1WNX4", "KimDoIp84Rp" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes Continuous Query Decomposition (CQD), an approach for answering Existential Positive First-Order (EPFO)\nqueries over incomplete knowledge graphs exploiting a neural link predictor for 1-hop-only queries.\nEntities are embedded in a low dimensional space and entity vectors are used to compute th...
[ 9, -1, -1, 8, -1, -1, -1, -1, 6, 9 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, 5, 2 ]
[ "iclr_2021_Mos9F9kDwkz", "BSnnFjCdUL-", "Cr9qg30TVsq", "iclr_2021_Mos9F9kDwkz", "w_unFNZGW2i", "KimDoIp84Rp", "CGldkk2zr0f", "Fm1mCn1WNX4", "iclr_2021_Mos9F9kDwkz", "iclr_2021_Mos9F9kDwkz" ]
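The CQD record above combines per-atom neural link-predictor scores with t-norms to answer complex queries. A sketch for one 2-hop conjunctive query follows; the random score matrices and the exhaustive search over the existential variable (in place of the paper's beam search or gradient-based optimisation) are illustrative assumptions.

```python
import numpy as np

# Query: ?Y such that there exists X with r1(a, X) AND r2(X, Y).
# s1 and s2 stand in for pre-trained neural link predictor scores in [0, 1].
rng = np.random.default_rng(0)
n = 6                                  # number of candidate entities
s1 = rng.random(n)                     # s1[x] ~ score of r1(a, x)
s2 = rng.random((n, n))                # s2[x, y] ~ score of r2(x, y)

# Product t-norm for the conjunction, maximised over the existential variable X
query_scores = (s1[:, None] * s2).max(axis=0)   # one score per candidate Y
best_answer = int(np.argmax(query_scores))
# The maximising X is an intermediate assignment that explains the answer,
# matching the abstract's point about interpretability.
```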
iclr_2021_EbIDjBynYJ8
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding
Disentangling the underlying generative factors from complex data has so far been limited to carefully constructed scenarios. We propose a path towards natural data by first showing that the statistics of natural data provide enough structure to enable disentanglement, both theoretically and empirically. Specifically, we provide evidence that objects in natural movies undergo transitions that are typically small in magnitude with occasional large jumps, which is characteristic of a temporally sparse distribution. To address this finding we provide a novel proof that relies on a sparse prior on temporally adjacent observations to recover the true latent variables up to permutations and sign flips, directly providing a stronger result than previous work. We show that equipping practical estimation methods with our prior often surpasses the current state-of-the-art on several established benchmark datasets without any impractical assumptions, such as knowledge of the number of changing generative factors. Furthermore, we contribute two new benchmarks, Natural Sprites and KITTI Masks, which integrate the measured natural dynamics to enable disentanglement evaluation with more realistic datasets. We leverage these benchmarks to test our theory, demonstrating improved performance. We also identify non-obvious challenges for current methods in scaling to more natural domains. Taken together our work addresses key issues in disentanglement research for moving towards more natural settings.
oral-presentations
This paper proposes a model for learning disentangled representations by assuming the slowness prior over transitions between two frames. The model is well justified theoretically, and evaluated extensively experimentally. The results are good, and all reviewers agree that this paper is among the top papers they have reviewed. For this reason, I am pleased to recommend this paper for an Oral.
train
[ "ikIH9kCP9iG", "Z-t71YHSz4i", "I_jwHx-z5Z", "XfJ2ndm3fp", "8Jms7EBhn5", "5GOPntXE_l", "Ta-hVBOq1KH", "5WgO-sDYerL", "bvuRnl7E3dM", "gvxK-GSbCFo", "g661sTeoTzS", "t2KT4FE1Ioe", "Bt8aO81ZW4F", "06yglnxPmgT", "hffNiYhXXpR" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces the SlowVAE to model transitions (of position, size, etc) in single-object videos. A Laplace prior conditioned on the latents at the previous step is used to learn the transitions, which the authors argue are naturally sparse.\n\nMy biggest concern regarding this work is the construction of d...
[ 7, 9, -1, -1, -1, 9, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_EbIDjBynYJ8", "iclr_2021_EbIDjBynYJ8", "iclr_2021_EbIDjBynYJ8", "06yglnxPmgT", "Bt8aO81ZW4F", "iclr_2021_EbIDjBynYJ8", "ikIH9kCP9iG", "ikIH9kCP9iG", "5GOPntXE_l", "XfJ2ndm3fp", "hffNiYhXXpR", "Z-t71YHSz4i", "5GOPntXE_l", "ikIH9kCP9iG", "iclr_2021_EbIDjBynYJ8" ]
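The SlowVAE record above rests on the observation that natural transitions are "typically small in magnitude with occasional large jumps", i.e. heavier-tailed than Gaussian, which motivates a Laplace transition prior. The toy check below (all constants illustrative) contrasts the excess kurtosis of Laplace-distributed transitions with Gaussian ones.

```python
import numpy as np

# Laplace transitions have excess kurtosis 3, versus 0 for a Gaussian of the
# same variance: a simple statistical signature of temporal sparsity.
rng = np.random.default_rng(0)
n = 200_000
z_t = rng.normal(size=n)                       # latents at time t
transitions = rng.laplace(scale=1.0, size=n)   # sparse temporal changes
z_next = z_t + transitions

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0
```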
iclr_2021_O3Y56aqpChA
Self-training For Few-shot Transfer Across Extreme Task Differences
Most few-shot learning techniques are pre-trained on a large, labeled “base dataset”. In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray, satellite images), one must resort to pre-training in a different “source” problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on the challenging BSCD-FSL benchmark consisting of datasets from multiple domains.
oral-presentations
The paper introduces an approach to self-train a source domain classifier on unlabeled data from the target domain, considering the few-shot learning setting when there is significant discrepancy between the source and target domains. While the reviewers pointed out a few weaknesses, such as somewhat limited methodological novelty and a lack of comparisons with other methods, they all recommend acceptance as the final decision. The paper is beautifully written. The proposed method is very simple, but yields excellent results in a very practical problem, which should be of wide interest to the ICLR community. The experimental evaluation is rigorous and the ablation studies are convincing. The AC agrees with the decision made by the reviewers and recommends acceptance.
train
[ "AVmnv9TF8q", "9KqsGQoMYk3", "qY5S-4tFSkq", "YO0lZSy-Joy" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Summary of the paper**\n\nThis paper studies transfer learning in an episode learning setting, where at evaluation time a few-shot few-example task is generated. In contrast to the standard setting, two modifications are made:\n1) large domain differences (base dataset is (mini)ImageNet, target datasets are plan...
[ 6, 8, 7, 8 ]
[ 4, 5, 4, 5 ]
[ "iclr_2021_O3Y56aqpChA", "iclr_2021_O3Y56aqpChA", "iclr_2021_O3Y56aqpChA", "iclr_2021_O3Y56aqpChA" ]
iclr_2021_PxTIG12RRHS
Score-Based Generative Modeling through Stochastic Differential Equations
Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a., score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024×1024 images for the first time from a score-based generative model.
oral-presentations
All reviewers agree that this is a well-written and interesting paper that will be of interest to the ICLR and broader ML community.
train
[ "7j3xdLuSZN", "TcpwfhjAZ3u", "BP5tFx3_p0c", "bHEI3Z3LHSJ", "43rVB1LO1z", "uiWg7hSErOY", "jpryUM7w7ml", "SGRrXQmYuQs", "7-AEi3izVVK", "zLwPct3Kc3k", "qL0RSCfEg8t" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "#### Summary and contributions\nThis paper proposes a generalized framework for score-based generative modeling (SBGM). The proposed method subsumes previous SBGM techniques of score matching with Langevin dynamics (SMLD aka NCSN) and denoising diffusion probabilistic modeling (DDPM) and shows how they correspond ...
[ 8, -1, -1, -1, -1, -1, -1, -1, 9, 7, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_PxTIG12RRHS", "uiWg7hSErOY", "7j3xdLuSZN", "43rVB1LO1z", "zLwPct3Kc3k", "7-AEi3izVVK", "qL0RSCfEg8t", "iclr_2021_PxTIG12RRHS", "iclr_2021_PxTIG12RRHS", "iclr_2021_PxTIG12RRHS", "iclr_2021_PxTIG12RRHS" ]
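For the score-based SDE record above, a fully worked 1-D example is possible when the data distribution is Gaussian, since the score of every perturbed marginal is then available in closed form. The VP-type SDE, the constants, and the plain Euler-Maruyama reverse sampler below (predictor only, no corrector step) are illustrative choices, not the paper's configuration.

```python
import numpy as np

# Forward SDE: dx = -0.5*beta*x dt + sqrt(beta) dW, run until p_T ~ N(0, 1).
# Reverse SDE: dx = [f(x,t) - g^2 * score(x,t)] dt + g dW-bar, run back to t=0.
beta, T, n_steps, n_samples = 1.0, 8.0, 1000, 20000
m0, s0 = 2.0, 0.5                      # data distribution N(m0, s0^2)
rng = np.random.default_rng(0)

def score(x, t):
    # Closed-form score of the perturbed marginal p_t for Gaussian data.
    mean_t = m0 * np.exp(-0.5 * beta * t)
    var_t = s0**2 * np.exp(-beta * t) + 1.0 - np.exp(-beta * t)
    return -(x - mean_t) / var_t

dt = T / n_steps
x = rng.normal(size=n_samples)         # prior samples at t = T
for i in range(n_steps):
    t = T - i * dt
    drift = -0.5 * beta * x - beta * score(x, t)   # f - g^2 * score
    x = x - drift * dt + np.sqrt(beta * dt) * rng.normal(size=n_samples)
# x now holds approximate samples from the data distribution N(m0, s0^2)
```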
iclr_2021_Wj4ODo0uyCF
Share or Not? Learning to Schedule Language-Specific Capacity for Multilingual Translation
Using a mix of shared and language-specific (LS) parameters has shown promise in multilingual neural machine translation (MNMT), but the question of when and where LS capacity matters most is still under-studied. We offer such a study by proposing conditional language-specific routing (CLSR). CLSR employs hard binary gates conditioned on token representations to dynamically select LS or shared paths. By manipulating these gates, it can schedule LS capacity across sub-layers in MNMT subject to the guidance of translation signals and budget constraints. Moreover, CLSR can easily scale up to massively multilingual settings. Experiments with Transformer on OPUS-100 and WMT datasets show that: 1) MNMT is sensitive to both the amount and the position of LS modeling: distributing 10%-30% LS computation to the top and/or bottom encoder/decoder layers delivers the best performance; and 2) one-to-many translation benefits more from CLSR compared to many-to-one translation, particularly with unbalanced training data. Our study further verifies the trade-off between the shared capacity and LS capacity for multilingual translation. We corroborate our analysis by confirming the soundness of our findings as foundation of our improved multilingual Transformers. Source code and models are available at https://github.com/googleinterns/cct-m4.
oral-presentations
This paper proposes a conditional language-specific routing (CLSR) mechanism for multilingual NMT, which also considers the trade-off between language specificity and generality. All of the reviewers find this paper interesting for both its idea and its empirical findings. Therefore, it is a clear acceptance.
train
[ "XLn-GRLCRat", "UtH739qczBl", "bG0vQ7C2Dh5", "evoGdfTLIUX", "FTBVywGCvVv", "LJClxqO4GXl", "Igws6zyqoNZ", "lYGH7w5rKp_", "l_Gg-p4Mt2r", "UUMF0tx7frV", "zvfRbxLUcGq" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your constructive comments!\n\n(1) We updated our paper with a discussion of your mentioned studies in the related work.\n\n(2) Thanks for your suggestion! We toned down the performance improvement claim in the updated version.\n\n(3) Your understanding of the gate parameters is correct: these parameter...
[ -1, -1, -1, -1, -1, -1, -1, 8, 9, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ "UtH739qczBl", "FTBVywGCvVv", "iclr_2021_Wj4ODo0uyCF", "lYGH7w5rKp_", "UUMF0tx7frV", "l_Gg-p4Mt2r", "zvfRbxLUcGq", "iclr_2021_Wj4ODo0uyCF", "iclr_2021_Wj4ODo0uyCF", "iclr_2021_Wj4ODo0uyCF", "iclr_2021_Wj4ODo0uyCF" ]
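The CLSR mechanism in the record above routes each token through either a shared path or a language-specific path via a hard binary gate conditioned on the token representation. A forward-pass-only sketch follows; the gate parameterization, layer shapes, and language set are illustrative assumptions, and the paper's straight-through training and budget constraint are omitted.

```python
import numpy as np

# One CLSR-style sub-layer: a per-token hard gate selects between a shared
# projection and a language-specific (LS) one.
rng = np.random.default_rng(0)
d = 4
W_shared = rng.normal(size=(d, d))
W_ls = {"de": rng.normal(size=(d, d)), "fr": rng.normal(size=(d, d))}
w_gate, b_gate = rng.normal(size=d), 0.0

def clsr_layer(h, lang):
    g = float(w_gate @ h + b_gate > 0)   # hard binary gate for this token
    return g * (W_ls[lang] @ h) + (1.0 - g) * (W_shared @ h)
```

Summing the gates over a batch is what lets the paper measure how much LS capacity each sub-layer actually schedules.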
iclr_2021_yWkP7JuHX1
Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering
Differentiable rendering has paved the way to training neural networks to perform “inverse graphics” tasks such as predicting 3D geometry from monocular photographs. To train high performing models, most of the current approaches rely on multi-view imagery which are not readily available in practice. Recent Generative Adversarial Networks (GANs) that synthesize images, in contrast, seem to acquire 3D knowledge implicitly during training: object viewpoints can be manipulated by simply manipulating the latent codes. However, these latent codes often lack further physical interpretation and thus GANs cannot easily be inverted to perform explicit 3D reasoning. In this paper, we aim to extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers. Key to our approach is to exploit GANs as a multi-view data generator to train an inverse graphics network using an off-the-shelf differentiable renderer, and the trained inverse graphics network as a teacher to disentangle the GAN's latent code into interpretable 3D properties. The entire architecture is trained iteratively using cycle consistency losses. We show that our approach significantly outperforms state-of-the-art inverse graphics networks trained on existing datasets, both quantitatively and via user studies. We further showcase the disentangled GAN as a controllable 3D “neural renderer", complementing traditional graphics renderers.
oral-presentations
The paper proposes to bring together a GAN, a differentiable renderer, and an inverse graphics model. This combined model learns 3D-aware image analysis and synthesis with very limited annotation effort (order of minutes). The results look impressive, even compared to training on a labeled dataset annotation of which took several orders of magnitude more time. The reviewers point out the novelty of the proposed system and the very high quality of the results. On the downside, R2 mentions that the model appears over-engineered and some important experimental results are missing. The authors’ response addresses these concerns quite well. Overall, this is a really strong work with compelling results, taking an important step towards employing generative models and neural renderers “in the wild”. I think it can make for a good oral.
train
[ "DjxMybcgm5b", "3S96rTALgFc", "oCA7BSQ7UP5", "Cs8KT0JX18q", "PRpbc3jGwOc", "0DYLDDo8as", "g0fsfF2Kcx0" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nThis paper proposes to couple a GAN, an inverse graphics network, and a differentiable renderer. The authors base their work on StyleGAN, and use the observation that a specific part of the latent code corresponds to camera view-point to rapidly annotate a large amount of synthetic images with approximate camera...
[ 8, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_yWkP7JuHX1", "0DYLDDo8as", "3S96rTALgFc", "DjxMybcgm5b", "g0fsfF2Kcx0", "iclr_2021_yWkP7JuHX1", "iclr_2021_yWkP7JuHX1" ]
iclr_2021_UH-cmocLJC
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
We study how neural networks trained by gradient descent extrapolate, i.e., what they learn outside the support of the training distribution. Previous works report mixed empirical results when extrapolating with neural networks: while feedforward neural networks, a.k.a. multilayer perceptrons (MLPs), do not extrapolate well in certain simple tasks, Graph Neural Networks (GNNs) -- structured networks with MLP modules -- have shown some success in more complex tasks. Working towards a theoretical explanation, we identify conditions under which MLPs and GNNs extrapolate well. First, we quantify the observation that ReLU MLPs quickly converge to linear functions along any direction from the origin, which implies that ReLU MLPs do not extrapolate most nonlinear functions. But, they can provably learn a linear target function when the training distribution is sufficiently diverse. Second, in connection to analyzing the successes and limitations of GNNs, these results suggest a hypothesis for which we provide theoretical and empirical evidence: the success of GNNs in extrapolating algorithmic tasks to new data (e.g., larger graphs or edge weights) relies on encoding task-specific non-linearities in the architecture or features. Our theoretical analysis builds on a connection of over-parameterized networks to the neural tangent kernel. Empirically, our theory holds across different training settings.
oral-presentations
This paper studies how (two-layer) neural nets extrapolate. The paper is beautifully written and the authors very successfully answered all the questions. They updated the paper, clarified the assumptions and added additional experiments.
val
[ "kGVmcGiGJqY", "wcAlmzMiuA6", "sP3shlpXqlP", "LQfVfXXzfT", "GVs8ta3-S2W", "FxBmvb_LhUK", "T2lcmtQuaN0", "_8lP2qbBpGL", "hQuIfgDRma4", "16iWXj6OMC_", "ErscdGWPVMS", "qmzcnHg2Pu" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you! We are glad you like our paper, and we appreciate your insightful comments.", "Thank you for your detailed response!\nI really like the paper and my concerns were addressed so I updated the score to 9.", "## Summary\n\nThe paper studies how neural networks extrapolate. The authors theoretically\nexa...
[ -1, -1, 9, -1, -1, -1, -1, -1, -1, 8, 9, 9 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "wcAlmzMiuA6", "LQfVfXXzfT", "iclr_2021_UH-cmocLJC", "sP3shlpXqlP", "iclr_2021_UH-cmocLJC", "16iWXj6OMC_", "ErscdGWPVMS", "qmzcnHg2Pu", "iclr_2021_UH-cmocLJC", "iclr_2021_UH-cmocLJC", "iclr_2021_UH-cmocLJC", "iclr_2021_UH-cmocLJC" ]
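The key observation in the record above, that ReLU MLPs quickly converge to linear functions along any direction from the origin, is easy to verify numerically: far enough along a ray, the ReLU activation pattern freezes and the network is exactly affine in the ray parameter. The random (untrained) two-layer network below is an illustrative stand-in; the paper's theory concerns trained, overparameterized networks.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(32, 4)), rng.normal(size=32)
W2, b2 = rng.normal(size=(1, 32)), rng.normal(size=1)

def mlp(x):
    """Two-layer ReLU MLP from R^4 to R."""
    return (W2 @ np.maximum(W1 @ x + b1, 0.0) + b2).item()

d = rng.normal(size=4)
d /= np.linalg.norm(d)

# Along the ray t -> t*d, for large t the active-unit set no longer changes,
# so mlp(t*d) is linear in t and its second difference vanishes.
t0 = 1e6
vals = [mlp(t * d) for t in (t0, t0 + 1.0, t0 + 2.0)]
second_diff = vals[0] - 2.0 * vals[1] + vals[2]
```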
iclr_2021_Ud3DSz72nYR
Contrastive Explanations for Reinforcement Learning via Embedded Self Predictions
We investigate a deep reinforcement learning (RL) architecture that supports explaining why a learned agent prefers one action over another. The key idea is to learn action-values that are directly represented via human-understandable properties of expected futures. This is realized via the embedded self-prediction (ESP) model, which learns said properties in terms of human provided features. Action preferences can then be explained by contrasting the future properties predicted for each action. To address cases where there are a large number of features, we develop a novel method for computing minimal sufficient explanations from an ESP. Our case studies in three domains, including a complex strategy game, show that ESP models can be effectively learned and support insightful explanations.
oral-presentations
This paper tackles the important problem of endowing deep RL agents with added interpretability. Action values are decomposed as the combination of GVFs learned on externally-specified features, offering action explanations in terms of discounted future returns in the space of interpretable quantities. Reviewers praised the approach, as well as the level of detail for reproducibility purposes. R3 had concerns about the generality of the method but follow-up experiments have allayed these concerns. Given the reviewer response and the central importance of the problem considered to the field, I can wholeheartedly recommend acceptance.
train
[ "NNbbsJSpEf-", "iM50lwTKme7", "IcsQExHxyCk", "ZFA0cR0foAq", "Z6JfID4AVcj", "mB7GEoU8EK", "DEar7tddQqn" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Edit: \n\nI have read the authors' response as well as the other reviews. Based on the additional results and added feature selection details, I now agree that ESP is generally applicable. \n\nSummary: \n\nThe authors present ESP, an RL system that can then explain action choices in terms of future feature values....
[ 7, -1, -1, -1, -1, 8, 7 ]
[ 5, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2021_Ud3DSz72nYR", "NNbbsJSpEf-", "mB7GEoU8EK", "DEar7tddQqn", "iclr_2021_Ud3DSz72nYR", "iclr_2021_Ud3DSz72nYR", "iclr_2021_Ud3DSz72nYR" ]
iclr_2021_rJA5Pz7lHKb
Improved Autoregressive Modeling with Distribution Smoothing
While autoregressive models excel at image compression, their sample quality is often lacking. Although not realistic, generated images often have high likelihood according to the model, resembling the case of adversarial examples. Inspired by a successful adversarial defense method, we incorporate randomized smoothing into autoregressive generative modeling. We first model a smoothed version of the data distribution, and then reverse the smoothing process to recover the original data distribution. This procedure drastically improves the sample quality of existing autoregressive models on several synthetic and real-world image datasets while obtaining competitive likelihoods on synthetic datasets.
oral-presentations
All reviewers recommend acceptance. Some concerns were raised about the precision of theorem 2 (now renamed to proposition 1), as well as the analysis of hyperparameter choices and quantitative evaluation, which I believe the authors have adequately addressed. Based on a suggestion of reviewer 1, experiments with flow-based models were also added, which demonstrates that the method is not strictly tied to autoregressive models. Personally, I was also curious about the connection between noise injection and quantisation, which the authors responded to by adding a paragraph discussing this connection in the manuscript. I would recommend that the authors also add the kernel inception distance (KID) results reported in the comments to the manuscript. This work stands out to me in that it combines a relatively simple, easy to understand idea with nice results, which is a trait of many impactful papers. I will therefore join the reviewers in recommending acceptance.
train
[ "BgtnXaLSxF9", "vWh4UgeXY6u", "X-UCVdAoyN", "EujIwmWPZkd", "8OCuyR1mS7p", "CSHXlcl3MJp", "ysYRKsGUOCA", "n2nNvRoTi71" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**Summary.** Autoregressive models have demonstrate their potential utility for modeling images and other types of complex data with high flexibility (particularly in density estimation). However, its sampling ability is not that good as explained in the paper. Authors show that one of the main weaknesses of autor...
[ 7, 7, -1, -1, -1, -1, 8, 7 ]
[ 3, 4, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_rJA5Pz7lHKb", "iclr_2021_rJA5Pz7lHKb", "BgtnXaLSxF9", "vWh4UgeXY6u", "ysYRKsGUOCA", "n2nNvRoTi71", "iclr_2021_rJA5Pz7lHKb", "iclr_2021_rJA5Pz7lHKb" ]
iclr_2021_wWK7yXkULyh
MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training
Recent advances by practitioners in the deep learning community have breathed new life into Locality Sensitive Hashing (LSH), using it to reduce memory and time bottlenecks in neural network (NN) training. However, while LSH has sub-linear guarantees for approximate near-neighbor search in theory, it is known to have inefficient query time in practice due to its use of random hash functions. Moreover, when model parameters are changing, LSH suffers from update overhead. This work is motivated by an observation that model parameters evolve slowly, such that the changes do not always require an LSH update to maintain performance. This phenomenon points to the potential for a reduction in update time and allows for a modified learnable version of data-dependent LSH to improve query time at a low cost. We use the above insights to build MONGOOSE, an end-to-end LSH framework for efficient NN training. In particular, MONGOOSE is equipped with a scheduling algorithm to adaptively perform LSH updates with provable guarantees and learnable hash functions to improve query efficiency. Empirically, we validate MONGOOSE on large-scale deep learning models for recommendation systems and language modeling. We find that it achieves up to 8% better accuracy compared to previous LSH approaches, with 6.5× speed-up and 6× reduction in memory usage.
oral-presentations
Thanks for your submission to ICLR. When the initial reviews were written, three of the four reviewers were positive about the paper. Everyone felt it was overall a solid contribution, but there were some concerns about the clarity and presentation, as well as some suggestions for additional experiments. During the rebuttal/response period, the authors did a very nice job in responding to the concerns of the reviewers. Ultimately, all of the reviews were in agreement after discussion that the paper is strong and ready for publication. I also like this paper a lot, and find it to be a nice way to combine LSH with NN training. I am happy to recommend this paper for publication.
train
[ "DWM-xFtZlNL", "tVmc8rRA7Vj", "PXRfhCdvPZ", "HoOYB8CSC2P", "F-O03j6eqq", "BIVs4zjKyFz", "pdxkWtrShCE", "GbE9SrmbgMM", "K5v82DUOeUS", "C8NYfnlvRH", "Q77OG_pL8qY", "S98rDYi9iur", "AKKQCeBxBTF", "RF1cnQ_Rg8F", "Ar-iE15aBR1", "1R222uVdrJG" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for the time and effort in helping us improve the quality of the paper. We were glad that the reviewers found the problem **interesting, necessary and critical** ( R2, R4), the observation **smart, inspiring and impressive** (R1, R2, R3, R4), and the approach or algorithm **principle, n...
[ -1, -1, -1, -1, 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ -1, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_wWK7yXkULyh", "PXRfhCdvPZ", "AKKQCeBxBTF", "F-O03j6eqq", "iclr_2021_wWK7yXkULyh", "GbE9SrmbgMM", "iclr_2021_wWK7yXkULyh", "Q77OG_pL8qY", "pdxkWtrShCE", "Ar-iE15aBR1", "K5v82DUOeUS", "AKKQCeBxBTF", "1R222uVdrJG", "F-O03j6eqq", "iclr_2021_wWK7yXkULyh", "iclr_2021_wWK7yXkULyh" ...
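As background for the MONGOOSE record above, the LSH primitive it builds on can be sketched with plain SimHash: random hyperplanes hash vectors into buckets, and vectors sharing a bucket are candidate near-neighbours, so a forward pass can be restricted to the retrieved neurons. The random (data-independent) hash functions below are the baseline the paper improves on; its learnable hashes and update scheduler are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_bits = 16, 8
planes = rng.normal(size=(n_bits, d))            # random hyperplanes

def simhash(x):
    bits = (planes @ x > 0).astype(int)          # sign pattern = bucket key
    return int("".join(map(str, bits)), 2)

neurons = rng.normal(size=(100, d))              # e.g. rows of a weight matrix
table = {}
for i, w in enumerate(neurons):
    table.setdefault(simhash(w), []).append(i)

def query(x):
    return table.get(simhash(x), [])             # candidate active neurons
```

Because the hash depends only on signs, it is invariant to positive scaling, which is one reason slowly evolving parameters often do not require a rebuild.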
iclr_2021_3AOj0RCNC2
Gradient Projection Memory for Continual Learning
The ability to learn continually without forgetting the past tasks is a desired attribute for artificial learning systems. Existing approaches to enable such learning in artificial neural networks usually rely on network growth, importance-based weight updates or replay of old data from the memory. In contrast, we propose a novel approach where a neural network learns new tasks by taking gradient steps in the orthogonal direction to the gradient subspaces deemed important for the past tasks. We find the bases of these subspaces by analyzing network representations (activations) after learning each task with Singular Value Decomposition (SVD) in a single-shot manner and store them in the memory as Gradient Projection Memory (GPM). With qualitative and quantitative analyses, we show that such orthogonal gradient descent induces minimum to no interference with the past tasks, thereby mitigating forgetting. We evaluate our algorithm on diverse image classification datasets with short and long sequences of tasks and report better or on-par performance compared to the state-of-the-art approaches.
oral-presentations
The paper proposes a new approach to continual learning with known task boundaries that is scalable and highly performant, while preserving data privacy. To mitigate forgetting, the proposed approach restricts gradient updates to directions orthogonal to the gradient subspaces deemed important for the past tasks. The main novelty of the approach is to estimate these subspaces by analysing the activations for the inputs linked to each given task. All reviewers give accepting scores. R2, R3 and R4 strongly recommend accepting the paper, while R1 considers it borderline. The authors provided an extensive response carefully considering all reviewers' comments. New experiments were introduced (training time analysis and comparisons with expansion-based methods), and several clarifications were added. All reviewers agree that the paper is well written and its literature review adequate. The main concern of R1 was the similarity to OGD (Farajtabar et al. 2020). R1 considered the authors’ response acceptable. R2, R3 and R4 consider the contribution well motivated and significant and highlight its simplicity. The AC agrees with this assessment. The empirical evaluation covers most of the typical benchmarks in CL. Very strong results are reported on a variety of tasks both in terms of performance and memory efficiency, as agreed by R2, R3 and R4. Overall the paper makes a strong contribution to the field of CL.
train
[ "TjNWJeGmmOT", "1tZSm-tMrxp", "oRkjfaImnAX", "gGwPoNZoVz", "aAmi8xXaVXx", "DQ9Ryl_371U", "y4Vy4rEgUh", "PizwZdKwfNI", "WTvLWaCshJ7", "fIFtCpFWrJ", "TCMNAaYLj4d", "5WFvxgQz4He", "ll2LpqSlJK-", "MyVwO0izm_j" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This manuscript proposes a new approach for continual learning problems. The key idea is to maintain bases of subspaces using SVD in the Gradient Projection Memory (GPM), in which the update direction is orthogonal to the gradient subspaces deemed important for the past tasks. Image classification experiment was c...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2021_3AOj0RCNC2", "iclr_2021_3AOj0RCNC2", "iclr_2021_3AOj0RCNC2", "aAmi8xXaVXx", "fIFtCpFWrJ", "ll2LpqSlJK-", "ll2LpqSlJK-", "TjNWJeGmmOT", "TjNWJeGmmOT", "1tZSm-tMrxp", "MyVwO0izm_j", "1tZSm-tMrxp", "iclr_2021_3AOj0RCNC2", "iclr_2021_3AOj0RCNC2" ]
iclr_2021_uCY5MuAxcxU
Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?
Convolutional neural networks often dominate fully-connected counterparts in generalization performance, especially on image classification tasks. This is often explained in terms of “better inductive bias.” However, this has not been made mathematically rigorous, and the hurdle is that a sufficiently wide fully-connected net can always simulate the convolutional net. Thus the training algorithm plays a role. The current work describes a natural task on which a provable sample complexity gap can be shown, for standard training algorithms. We construct a single natural distribution on ℝ^d × {±1} on which any orthogonal-invariant algorithm (i.e. fully-connected networks trained with most gradient-based methods from Gaussian initialization) requires Ω(d²) samples to generalize while O(1) samples suffice for convolutional architectures. Furthermore, we demonstrate a single target function, learning which on all possible distributions leads to an O(1) vs Ω(d²/ε) gap. The proof relies on the fact that SGD on fully-connected networks is orthogonal-equivariant. Similar results are achieved for ℓ2 regression and adaptive training algorithms, e.g. Adam and AdaGrad, which are only permutation-equivariant.
oral-presentations
The paper analyzes the sample complexity of convolutional architectures, proving a gap between it and that of fully connected (fc) networks. The approach builds on certain invariances of fc nets. The reviewers appreciated the technical content and its contribution to understanding the relative advantages of different architectures, as well as the role of invariance.
train
[ "uFCqE-aXBOy", "eEtVx0qv-36", "60-fth8TlC", "UOAB68EzAd0", "KzJPodBIPwZ", "kChyY-GGr_L", "8pUheFzp8Q", "YW-4Dp1f2E1", "s0DleVmXn16" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies simple distributional settings in which convolutional neural networks give a provable sample complexity advantage over fully connected networks. This perspective is a valuable complement to prior work in statistical learning theory that often focuses on distribution-free results, which make it ha...
[ 7, -1, -1, -1, -1, -1, 7, 8, 7 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_uCY5MuAxcxU", "s0DleVmXn16", "8pUheFzp8Q", "uFCqE-aXBOy", "YW-4Dp1f2E1", "iclr_2021_uCY5MuAxcxU", "iclr_2021_uCY5MuAxcxU", "iclr_2021_uCY5MuAxcxU", "iclr_2021_uCY5MuAxcxU" ]
iclr_2021_Pd_oMxH8IlF
Iterated learning for emergent systematicity in VQA
Although neural module networks have an architectural bias towards compositionality, they require gold standard layouts to generalize systematically in practice. When instead learning layouts and modules jointly, compositionality does not arise automatically and an explicit pressure is necessary for the emergence of layouts exhibiting the right structure. We propose to address this problem using iterated learning, a cognitive science theory of the emergence of compositional languages in nature that has primarily been applied to simple referential games in machine learning. Considering the layouts of module networks as samples from an emergent language, we use iterated learning to encourage the development of structure within this language. We show that the resulting layouts support systematic generalization in neural agents solving the more complex task of visual question-answering. Our regularized iterated learning method can outperform baselines without iterated learning on SHAPES-SyGeT (SHAPES Systematic Generalization Test), a new split of the SHAPES dataset we introduce to evaluate systematic generalization, and on CLOSURE, an extension of CLEVR also designed to test systematic generalization. We demonstrate superior performance in recovering ground-truth compositional program structure with limited supervision on both SHAPES-SyGeT and CLEVR.
oral-presentations
This paper presents an original perspective on how to learn layouts and modules of neural module networks jointly, in a way that encourages the emergence of compositional solutions. In particular, layouts are treated as messages from an emergent language, and iterated learning is used to encourage the emergence of structure. The paper shows good performance in inducing compositional structure in two datasets. Summarizing the reviewers' doubts, one is that the idea is tested on relatively toyish data sets, and it is not clear how it would scale up. The second, coming from one reviewer, concerns a lack of originality that, honestly, I do not see. If anything, this is probably the most original paper in my pool. Concerning the first point, that is a fair objection, but I think that getting good results on program learning on datasets such as CLEVR is more than encouraging for a paper that is introducing quite a novel idea for the first time. Finally, the authors added new text and new experiments strengthening their conclusions during the discussion. I am strongly in favour of accepting this paper.
train
[ "X2W0yhGNtsm", "CYlURuY6hhS", "4f4jRzn-fRU", "4943jk488e_", "u-8AszcfBc6", "rJtgDxcpiKA", "UITGOHQXLZO", "bQb768mfr7V", "r_bxVPq9t9d", "uStfLc_cLAE", "njolLhrWLWe", "hHAsr1Et-0k" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Review:\nThe authors address methods to encourage the emergence of the layout expression structures on the frameworks of neural module networks (NMN) for the visual QA problems. The methods are motivated from the works on language emergence for communication between multi-agents and the language acquisition of new...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_Pd_oMxH8IlF", "u-8AszcfBc6", "rJtgDxcpiKA", "u-8AszcfBc6", "r_bxVPq9t9d", "njolLhrWLWe", "iclr_2021_Pd_oMxH8IlF", "hHAsr1Et-0k", "uStfLc_cLAE", "X2W0yhGNtsm", "iclr_2021_Pd_oMxH8IlF", "iclr_2021_Pd_oMxH8IlF" ]
iclr_2021_F3s69XzWOia
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
Circuits of biological neurons, such as in the functional parts of the brain can be modeled as networks of coupled oscillators. Inspired by the ability of these systems to express a rich set of outputs while keeping (gradients of) state variables bounded, we propose a novel architecture for recurrent neural networks. Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations, modeling networks of controlled nonlinear oscillators. We prove precise bounds on the gradients of the hidden states, leading to the mitigation of the exploding and vanishing gradient problem for this RNN. Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks, demonstrating the potential of this architecture to provide stable and accurate RNNs for processing complex sequential data.
oral-presentations
A novel second order nonlinear oscillator RNN architecture is proposed, analyzed, and evaluated in this paper. The results are solid and impactful. Authors and expert reviewers showed exemplary interactions with each other, improving the manuscript in significant ways. All four reviewers overwhelmingly recommended accept. I recommend that this paper be selected as an oral presentation.
train
[ "ngsOAO-EcZM", "it96G_j8Fwv", "NbD6KYwtnPY", "8TYwNXEu6yJ", "cuUSC-QbiIZ", "whKgaRO4I5O", "kJjWU7FVAs", "_He9M42w1ha", "BhVa0oaBxkF", "DZz-p_h7kE", "TwzW2vPaczv", "ZwGx_zbVBEy", "Vh-y983HOQI", "dByqlSVL5Wz", "zHTzMgTSEab", "t6jo73VdY9Z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a novel RNN architecture (CorNN) to tackle the infamous problem of vanishing and exploding gradients in RNNs. The novel CorNN architecture is based on time-discretized forced coupled damped nonlinear oscillators. For the gradient norm of CorNN analytical lower and upper bounds are calculated imp...
[ 7, 7, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_F3s69XzWOia", "iclr_2021_F3s69XzWOia", "iclr_2021_F3s69XzWOia", "cuUSC-QbiIZ", "it96G_j8Fwv", "kJjWU7FVAs", "zHTzMgTSEab", "BhVa0oaBxkF", "NbD6KYwtnPY", "TwzW2vPaczv", "ngsOAO-EcZM", "iclr_2021_F3s69XzWOia", "t6jo73VdY9Z", "zHTzMgTSEab", "it96G_j8Fwv", "iclr_2021_F3s69XzWOia...
iclr_2021_pBqLS-7KYAF
Sparse Quantized Spectral Clustering
Given a large data matrix, sparsifying, quantizing, and/or performing other entry-wise nonlinear operations can have numerous benefits, ranging from speeding up iterative algorithms for core numerical linear algebra problems to providing nonlinear filters to design state-of-the-art neural network models. Here, we exploit tools from random matrix theory to make precise statements about how the eigenspectrum of a matrix changes under such nonlinear transformations. In particular, we show that very little change occurs in the informative eigenstructure, even under drastic sparsification/quantization, and consequently that very little downstream performance loss occurs when working with very aggressively sparsified or quantized spectral clustering problems. We illustrate how these results depend on the nonlinearity, we characterize a phase transition beyond which spectral clustering becomes possible, and we show when such nonlinear transformations can introduce spurious non-informative eigenvectors.
spotlight-presentations
The paper presents a nice analysis of the spectrum of a matrix that is obtained by applying non-linear functions to a random matrix. The paper is mostly well-written, the result is novel and interesting, and has clear implications for ML problems like spectral clustering. So I would enthusiastically recommend the paper for acceptance at ICLR. It would be important for the authors to take the reviewer comments into account. In particular, instantiating the theorems for simple ML-centric examples would be very useful.
val
[ "MnS8QWwq1mw", "ziAhVq8tE0", "Ndq9pJE9WLB", "ofhK7IeKOR", "CFPZqN_psRn", "Yr5hvZ6XxSj", "e2e03rjvj5", "5tufK-HfTKX" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for his/her time reviewing our work and for the pertinent and constructive comments on our paper.\n\nIn line with our comments to Reviewer 7 and 5, we agree that the article would heavily gain in readability if our core messages are simplified, beyond the mere exposition of our (possibly seen...
[ -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "Yr5hvZ6XxSj", "e2e03rjvj5", "5tufK-HfTKX", "CFPZqN_psRn", "iclr_2021_pBqLS-7KYAF", "iclr_2021_pBqLS-7KYAF", "iclr_2021_pBqLS-7KYAF", "iclr_2021_pBqLS-7KYAF" ]
iclr_2021_HHSEKOnPvaO
Graph-Based Continual Learning
Despite significant advances, continual learning models still suffer from catastrophic forgetting when exposed to incrementally available data from non-stationary distributions. Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an array of independent memory slots. In this work, we propose to augment such an array with a learnable random graph that captures pairwise similarities between its samples, and use it not only to learn new tasks but also to guard against forgetting. Empirical results on several benchmark datasets show that our model consistently outperforms recently proposed baselines for task-free continual learning.
spotlight-presentations
This paper presents an interesting idea for task-free continual learning, which makes use of random graphs to represent relational structures among contextual and target samples. The reviewers agreed that the technical idea is novel, the experiments are extensive and the presentation is good. The authors addressed the reviewers' concerns in the rebuttal. I recommend acceptance.
train
[ "BmA2M3glQ9U", "UXPWzD6Yr5r", "5_7zKPriFy", "Re3kft2ORn", "e3EMvjX5Ak", "2p_Y6d8W0qn", "NQNzTH8uHIt", "JEj54l1Xyp-" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the detailed review and suggestions. Our responses to the questions are below.\n\n* **Edge Importance in Graph Regularization**. For simplicity, we treat the edges in Equation 7 the same way. However, the cross-entropy loss that we use to penalize deviations from learned edges does accoun...
[ -1, -1, -1, -1, 8, 7, 7, 6 ]
[ -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "2p_Y6d8W0qn", "JEj54l1Xyp-", "e3EMvjX5Ak", "NQNzTH8uHIt", "iclr_2021_HHSEKOnPvaO", "iclr_2021_HHSEKOnPvaO", "iclr_2021_HHSEKOnPvaO", "iclr_2021_HHSEKOnPvaO" ]
iclr_2021_Vfs_2RnOD0H
Dynamic Tensor Rematerialization
Checkpointing enables the training of deep learning models under restricted memory budgets by freeing intermediate activations from memory and recomputing them on demand. Current checkpointing techniques statically plan these recomputations offline and assume static computation graphs. We demonstrate that a simple online algorithm can achieve comparable performance by introducing Dynamic Tensor Rematerialization (DTR), a greedy online algorithm for checkpointing that is extensible and general, is parameterized by eviction policy, and supports dynamic models. We prove that DTR can train an N-layer linear feedforward network on an Ω(√N) memory budget with only O(N) tensor operations. DTR closely matches the performance of optimal static checkpointing in simulated experiments. We incorporate a DTR prototype into PyTorch merely by interposing on tensor allocations and operator calls and collecting lightweight metadata on tensors.
spotlight-presentations
The paper presents an online algorithm for dynamic tensor rematerialization. The theoretical analysis of the proposed method's tensor-operation and memory-budget bounds, as well as of its relationship to optimal static checkpointing, is novel and interesting. It covers a pretty comprehensive study across theory, simulation and system implementation. In addition, the paper is well written.
train
[ "cle3QQD1HKM", "VFL_xa-nw_5", "MEd9eHlmyNC", "vqemzaTQi8U", "XpS9lK4_Twh", "JhBE7JjK9P_", "h0fmv_6rYmz", "aXFcfjqDjqp", "TId0UKb-YCS", "Vyzy7fCTQe" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your reply. Could you please clarify what additional ablation study you feel would strengthen the submission? We have included Capuchin's MSPS heuristic in our simulated evaluation in Section 4; it would be feasible to extend the ablation study in Appendix D to include variants of MSPS. We could also...
[ -1, -1, -1, -1, -1, -1, 7, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 3 ]
[ "VFL_xa-nw_5", "JhBE7JjK9P_", "h0fmv_6rYmz", "TId0UKb-YCS", "aXFcfjqDjqp", "Vyzy7fCTQe", "iclr_2021_Vfs_2RnOD0H", "iclr_2021_Vfs_2RnOD0H", "iclr_2021_Vfs_2RnOD0H", "iclr_2021_Vfs_2RnOD0H" ]
iclr_2021_F1vEjWK-lH_
Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models
Massively multilingual models subsuming tens or even hundreds of languages pose great challenges to multi-task optimization. While it is a common practice to apply a language-agnostic procedure optimizing a joint multilingual task objective, how to properly characterize and take advantage of its underlying problem structure for improving optimization efficiency remains under-explored. In this paper, we attempt to peek into the black-box of multilingual optimization through the lens of loss function geometry. We find that gradient similarity measured along the optimization trajectory is an important signal, which correlates well with not only language proximity but also the overall model performance. Such observation helps us to identify a critical limitation of existing gradient-based multi-task learning methods, and thus we derive a simple and scalable optimization procedure, named Gradient Vaccine, which encourages more geometrically aligned parameter updates for close tasks. Empirically, our method obtains significant model performance gains on multilingual machine translation and XTREME benchmark tasks for multilingual language models. Our work reveals the importance of properly measuring and utilizing language proximity in multilingual optimization, and has broader implications for multi-task learning beyond multilingual modeling.
spotlight-presentations
This paper proposes a scalable optimization method for multi-task learning in multilingual models. Pros: 1) Addresses a problem which has not been explored much in the past 2) Presents very good analysis to show the limitations of existing methods. 3) Good results. 4) Well written Cons: 1) Some missing details about various choices made in the experiments (mostly addressed in the rebuttal) This is a very interesting and useful work and I recommend that it should be accepted.
train
[ "6D-NG_9jZJT", "45WrhqyZL89", "G6kn1Jns2aX", "XLFwNkC3wEx", "VNMUOkEMG54", "QZCE3bakV-S", "_qCn5THrkcV", "asPunsSAari" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a novel method, GradientVaccine, to improve multi-task optimization on a massive multilingual translation and named entity recognition model. They investigate the loss function geometry on many language pairs and use the idea to encourage more geometrical parameter updates. This approach extends...
[ 7, -1, -1, -1, -1, 6, 6, 8 ]
[ 4, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2021_F1vEjWK-lH_", "asPunsSAari", "_qCn5THrkcV", "QZCE3bakV-S", "6D-NG_9jZJT", "iclr_2021_F1vEjWK-lH_", "iclr_2021_F1vEjWK-lH_", "iclr_2021_F1vEjWK-lH_" ]
iclr_2021_87ZwsaQNHPZ
CPT: Efficient Deep Neural Network Training via Cyclic Precision
Low-precision deep neural network (DNN) training has gained tremendous attention as reducing precision is one of the most effective knobs for boosting DNNs' training time/energy efficiency. In this paper, we attempt to explore low-precision training from a new perspective as inspired by recent findings in understanding DNN training: we conjecture that DNNs' precision might have a similar effect as the learning rate during DNN training, and advocate dynamic precision along the training trajectory for further boosting the time/energy efficiency of DNN training. Specifically, we propose Cyclic Precision Training (CPT) to cyclically vary the precision between two boundary values which can be identified using a simple precision range test within the first few training epochs. Extensive simulations and ablation studies on five datasets and eleven models demonstrate that CPT's effectiveness is consistent across various models/tasks (including classification and language modeling). Furthermore, through experiments and visualization we show that CPT helps to (1) converge to a wider minima with a lower generalization error and (2) reduce training variance which we believe opens up a new design knob for simultaneously improving the optimization and efficiency of DNN training.
spotlight-presentations
All of the reviewers are impressed by this paper's empirical results and they agree that this is a good paper and should be accepted. Some questions about the theoretical justification of the proposed method and its potential practical impact remain open, but the empirical results are impressive and can result in more research in understanding Cyclic Precision Training (CPT) and improving quantized training of neural nets. I suggest acceptance as a spotlight presentation.
train
[ "y1ns1iksnAN", "254P0jrcF-t", "HjAMWbR7Mt1", "9Iq2u7wl6WF", "7YZimlffyWt", "npOpaLkgWWh", "atnlx6-t1fV", "4OgSVSmO6PJ", "wx7ES3QgCO0", "uIzXrOumyN_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "A simple yet apparently effective technique. The paper is clear and well written.\n\nThe authors demonstrate that cyclically changing the precision of weights and activations during training leads to better results (both accuracy and training cost) than static quantisation methods. The method is simple and easy to...
[ 7, 7, 7, -1, -1, -1, -1, -1, -1, 7 ]
[ 3, 3, 5, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_87ZwsaQNHPZ", "iclr_2021_87ZwsaQNHPZ", "iclr_2021_87ZwsaQNHPZ", "y1ns1iksnAN", "y1ns1iksnAN", "254P0jrcF-t", "254P0jrcF-t", "HjAMWbR7Mt1", "uIzXrOumyN_", "iclr_2021_87ZwsaQNHPZ" ]