paper_id             string (lengths 19–21)
paper_title          string (lengths 8–170)
paper_abstract       string (lengths 8–5.01k)
paper_acceptance     string (18 classes)
meta_review          string (lengths 29–10k)
label                string (3 classes)
review_ids           list
review_writers       list
review_contents      list
review_ratings       list
review_confidences   list
review_reply_tos     list
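The schema above can be exercised with a short sketch. Two assumptions, taken from the preview rows below rather than from any documented guarantee: the `review_*` lists are aligned index-for-index, and a rating of -1 marks an entry (typically an author reply) that carries no review score.

```python
# Minimal sketch: pull the scored official-reviewer ratings out of one record.
# Field names follow the schema above; the -1 sentinel for unscored entries
# is an assumption inferred from the preview rows.

def official_ratings(record):
    """Return ratings attached to official_reviewer entries,
    skipping the -1 sentinel used for unscored replies/comments."""
    return [
        rating
        for writer, rating in zip(record["review_writers"], record["review_ratings"])
        if writer == "official_reviewer" and rating != -1
    ]

# Example record, abridged from the iclr_2021_HdX654Yn81 row in the preview.
record = {
    "paper_id": "iclr_2021_HdX654Yn81",
    "review_writers": ["official_reviewer", "official_reviewer", "official_reviewer",
                       "author", "author", "author", "author",
                       "author", "author", "author"],
    "review_ratings": [5, 3, 7, -1, -1, -1, -1, -1, -1, -1],
}

print(official_ratings(record))  # → [5, 3, 7]
```

The same filter generalizes to `review_confidences`, which follows the identical alignment and sentinel convention in the preview rows.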
iclr_2021_AM0PBmqmojH
Warpspeed Computation of Optimal Transport, Graph Distances, and Embedding Alignment
Optimal transport (OT) is a cornerstone of many machine learning tasks. The current best practice for computing OT is via entropy regularization and Sinkhorn iterations. This algorithm runs in quadratic time and requires calculating the full pairwise cost matrix, which is prohibitively expensive for large sets of objec...
withdrawn-rejected-submissions
The authors propose to approximate the kernel matrix used in the Sinkhorn algorithm by a combination of sparse + low rank approximation. To do so, the authors propose to compute a low rank approximation of a sparsified (thresholded below a certain value to be 0) kernel matrix using Nyström, and then correct it by addin...
train
[ "7E8KZoQ0t-C", "tznp4Sy9iN1", "srDf56ikag", "XGprfy8b9aw", "vBXK-WHwiZ3", "V8nR8bgkpPk", "95U99K-fuY2", "od4lWjpTVhU", "q6Ej0jW_lXX", "8NHbV8PrT4O", "R9jkFCs5AWk", "SJ_ybYduWQ", "RMShAVkNfCr", "dbtvdqdH5cT" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Overall, I found this paper interesting, and I think it does address a relevant problem for the community.\n\n\nI have been playing with Nystrom approximations myself and I know the results are a bit disappointing, but it is grounded on strong theory. Then, it is pretty much welcome that attempts to patch the prob...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_AM0PBmqmojH", "dbtvdqdH5cT", "7E8KZoQ0t-C", "iclr_2021_AM0PBmqmojH", "V8nR8bgkpPk", "od4lWjpTVhU", "q6Ej0jW_lXX", "XGprfy8b9aw", "RMShAVkNfCr", "dbtvdqdH5cT", "iclr_2021_AM0PBmqmojH", "7E8KZoQ0t-C", "iclr_2021_AM0PBmqmojH", "iclr_2021_AM0PBmqmojH" ]
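The `review_reply_tos` list in the record above parallels `review_ids`, and an entry equal to the `paper_id` marks a top-level thread root. Under that assumption (visible in the preview rows, not separately documented), the discussion tree can be reconstructed with a sketch like this:

```python
# Sketch: group each comment id under its parent to recover the reply tree.
# Parallel alignment of review_ids and review_reply_tos is assumed from the
# preview rows; a reply_to equal to the paper_id marks a top-level thread.
from collections import defaultdict

def build_threads(record):
    """Map each parent id (or the paper_id, for roots) to its child ids."""
    children = defaultdict(list)
    for cid, parent in zip(record["review_ids"], record["review_reply_tos"]):
        children[parent].append(cid)
    return children

# Abridged from the iclr_2021_AM0PBmqmojH row above.
record = {
    "paper_id": "iclr_2021_AM0PBmqmojH",
    "review_ids": ["7E8KZoQ0t-C", "tznp4Sy9iN1", "srDf56ikag", "XGprfy8b9aw",
                   "vBXK-WHwiZ3", "V8nR8bgkpPk", "95U99K-fuY2", "od4lWjpTVhU",
                   "q6Ej0jW_lXX", "8NHbV8PrT4O", "R9jkFCs5AWk", "SJ_ybYduWQ",
                   "RMShAVkNfCr", "dbtvdqdH5cT"],
    "review_reply_tos": ["iclr_2021_AM0PBmqmojH", "dbtvdqdH5cT", "7E8KZoQ0t-C",
                         "iclr_2021_AM0PBmqmojH", "V8nR8bgkpPk", "od4lWjpTVhU",
                         "q6Ej0jW_lXX", "XGprfy8b9aw", "RMShAVkNfCr", "dbtvdqdH5cT",
                         "iclr_2021_AM0PBmqmojH", "7E8KZoQ0t-C",
                         "iclr_2021_AM0PBmqmojH", "iclr_2021_AM0PBmqmojH"],
}

threads = build_threads(record)
print(len(threads[record["paper_id"]]))  # → 5 top-level threads
```

Note that a top-level entry is not necessarily a scored review: in this record one root ("R9jkFCs5AWk") is an author-written general response, which is why cross-checking against `review_writers` and the -1 rating sentinel is still needed when counting reviews.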
iclr_2021_HdX654Yn81
Improving the Unsupervised Disentangled Representation Learning with VAE Ensemble
Variational Autoencoder (VAE) based frameworks have achieved the state-of-the-art performance on the unsupervised disentangled representation learning. A recent theoretical analysis shows that such success is mainly due to the VAE implementation choices that encourage a PCA-like behavior locally on data samples. Despit...
withdrawn-rejected-submissions
This paper proposes to use an ensemble of VAEs to learn better disentangled representations by aligning their representations through additional losses. This training method is based on recent work by Rolinek et al (2019) and Duan et al (2020), which suggests that VAEs tend to approximate PCA-like behaviour when they a...
train
[ "nRK0FwFceL9", "IkYZyU1zin", "bO9wmlOyAF", "UG1NyLjA8Vc", "phj4efZoleR", "iCQCSwahp0", "yHpFuN6KgdN", "dtZA3z8vOvr", "hqz4snc_J6x", "kQGjVmFksY5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This paper proposes a simple and effective technique to improve disentanglement by coupling the latent spaces of different VAE models. It builds on Duan et al. (2019)’s proposed method to rank the representations of different models. By learning a VAE ensemble with linear transformations between the latent spaces ...
[ 5, 3, 7, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_HdX654Yn81", "iclr_2021_HdX654Yn81", "iclr_2021_HdX654Yn81", "kQGjVmFksY5", "dtZA3z8vOvr", "hqz4snc_J6x", "iclr_2021_HdX654Yn81", "IkYZyU1zin", "bO9wmlOyAF", "nRK0FwFceL9" ]
iclr_2021_7apQQsbahFz
Intention Propagation for Multi-agent Reinforcement Learning
A hallmark of an AI agent is to mimic human beings to understand and interact with others. In this paper, we propose a \emph{collaborative} multi-agent reinforcement learning algorithm to learn a \emph{joint} policy through the interactions over agents. To make a joint decision over the group, each agent makes an init...
withdrawn-rejected-submissions
The paper describes a framework for multi-agent reinforcement learning that uses Markov Random Fields. Unfortunately, the paper is not clearly written and would benefit from significant revisions that improve its structure and make the model and approximations more explicit. In particular, the paper says a graph says ...
train
[ "oRva9CJyCcS", "20JhlFxTogY", "QGERUw110U3", "HmBZc9q5_1y", "e4tNXK4icmx", "TxgwM_l1r73", "0ot1nXZ_A1q", "DYs-YBihuwW", "edwv3xVPsxl", "n5hApQGlxcS", "Kp-Zctwk1p7", "1Px3lMM8Kk-", "BmULoYp2ypC", "aCOqCrFw5wT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "This paper proposes a method for generating policies in cooperative games, using a neighbourhood-based factorisation of reward, and an iterative algorithm which independently updates policies based on neighbour policies and then propagates the policy to neighbours using function space embedding.\n\nThe experimenta...
[ 6, 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 2, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_7apQQsbahFz", "iclr_2021_7apQQsbahFz", "iclr_2021_7apQQsbahFz", "QGERUw110U3", "0ot1nXZ_A1q", "0ot1nXZ_A1q", "HmBZc9q5_1y", "n5hApQGlxcS", "20JhlFxTogY", "Kp-Zctwk1p7", "BmULoYp2ypC", "aCOqCrFw5wT", "oRva9CJyCcS", "iclr_2021_7apQQsbahFz" ]
iclr_2021_TmkN9JmDJx1
Thinking Like Transformers
What is the computational model behind a transformer? Where recurrent neural networks have direct parallels in finite state machines, allowing clear discussion and thought around architecture variants or trained models, transformers have no such familiar parallel. In this paper we aim to change that, proposing a comput...
withdrawn-rejected-submissions
The paper presents a computational model for transformer encoders in the form of a programming language (called RASP), shows how to use this language to "program" tasks solvable by transformers, and describes how to use this model to explain known facts about transformer models. While the reviewers appreciated the nov...
train
[ "CrGxcfjTUWE", "nwhl6sgHKmR", "MM0KBYQFAm", "KLeVdRrXRMy", "Y8_1bi1VVSi", "K4oA5D7frL7", "yqW3p6HyRWu", "ru-35fk6Ne5", "MxIjOAJvgp", "6CMRRksMC2G", "YW6fEjmugfh", "lPmNi68cHNM", "uv3c9YGHXms", "nXDj_IJqITL", "nun4TbwCQxr", "_EEVUfUO4s" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a restricted programming language containing rough analogues of operations used in transformers. Using this language, the authors show how some algorithms can be implemented, which gives some insights about the limitations of transformers.\n\nOverall, the paper is badly written, with many typos...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_TmkN9JmDJx1", "nun4TbwCQxr", "nXDj_IJqITL", "Y8_1bi1VVSi", "CrGxcfjTUWE", "iclr_2021_TmkN9JmDJx1", "ru-35fk6Ne5", "MxIjOAJvgp", "nXDj_IJqITL", "YW6fEjmugfh", "CrGxcfjTUWE", "nun4TbwCQxr", "_EEVUfUO4s", "iclr_2021_TmkN9JmDJx1", "iclr_2021_TmkN9JmDJx1", "iclr_2021_TmkN9JmDJx1"...
iclr_2021_uqD-un_Mzd-
Polynomial Graph Convolutional Networks
Graph Convolutional Neural Networks (GCNs) exploit convolution operators, based on some neighborhood aggregating scheme, to compute representations of graphs. The most common convolution operators only exploit local topological information. To consider wider topological receptive fields, the mainstream approach is to n...
withdrawn-rejected-submissions
All four reviewers expressed significant concerns on this submission during review. None of them is willing to change their evaluations and supports this work during discussions. Thus a reject is recommended.
train
[ "srTgOEkK8YW", "wfX447KggK", "lOTGbX8zcb3", "5rGh8FyTdQ0", "S1xv6pBSIiN", "MC4AzfezyAY", "xIcz7tMEHc", "oOX1gKTsZI", "7pxlTickWzk", "U8u3JrAMY81", "0g5GjuFpqB", "IAFCJs-8yST", "pzMAoE468Lb" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper proposes Polynomial Graph Convolution (PGC), which enjoys a larger-than-one-hop receptive field within a single layer. This is done by first propagating information with a fixed (not learned) propagation matrix (e.g. adjacency matrix or graph Laplacian), and then projecting the information from ...
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_uqD-un_Mzd-", "iclr_2021_uqD-un_Mzd-", "iclr_2021_uqD-un_Mzd-", "srTgOEkK8YW", "IAFCJs-8yST", "5rGh8FyTdQ0", "pzMAoE468Lb", "MC4AzfezyAY", "xIcz7tMEHc", "0g5GjuFpqB", "wfX447KggK", "iclr_2021_uqD-un_Mzd-", "iclr_2021_uqD-un_Mzd-" ]
iclr_2021_eHG7asK_v-k
Multi-Agent Trust Region Learning
Trust-region methods are widely used in single-agent reinforcement learning. One advantage is that they guarantee a lower bound of monotonic payoff improvement for policy optimization at each iteration. Nonetheless, when applied in multi-agent settings, such guarantee is lost because an agent's payoff is also determin...
withdrawn-rejected-submissions
There was some slight disagreement on the paper, but the majority of reviewers agree that although some answers of the authors on questions brought good clarification, other issues still remain problematic. Some of the assumptions remain unclear (w.r.t CDTE), and reviewers still have doubts about the global convergence...
train
[ "ZTl5LVYYn6u", "FEcsUi9QRdr", "OjAZn4DQOTc", "J0a9yaF6IS", "NRpVRcJDN3r", "vkx03Glvsck", "FdUNZqrdXdY", "WCSLuYhJs2m", "oMlpI0JK_ms", "x-WzOWcFRjl", "EKsxTqfBb5Z", "coFUd2X0s9", "L0ZRF9Ubise", "2_u4bPkjy5", "xPFXEeKDrzz" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers, as promised, the experiments with five more random seeds have been updated in Fig. 5, the latest revision paper.", "Thanks for your kind suggestion.\n\nWe have added 5 more random seeds for all the models in the experiments, and new learning curves with 10 seeds are updated in Fig. 5, the latest ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "FdUNZqrdXdY", "vkx03Glvsck", "vkx03Glvsck", "NRpVRcJDN3r", "x-WzOWcFRjl", "WCSLuYhJs2m", "iclr_2021_eHG7asK_v-k", "coFUd2X0s9", "L0ZRF9Ubise", "2_u4bPkjy5", "xPFXEeKDrzz", "iclr_2021_eHG7asK_v-k", "iclr_2021_eHG7asK_v-k", "iclr_2021_eHG7asK_v-k", "iclr_2021_eHG7asK_v-k" ]
iclr_2021_VNJUTmR-CaZ
Learning to Solve Multi-Robot Task Allocation with a Covariant-Attention based Neural Architecture
This paper presents a new graph neural network architecture over which reinforcement learning can be performed to yield online policies for an important class of multi-robot task allocation (MRTA) problems, one that involves tasks with deadlines, and robots with ferry range and payload constraints and multi-tour capabi...
withdrawn-rejected-submissions
This paper presents a GNN architecture for policies that solve multi-robot task allocation problems. The proposed architecture extends Koul et al (2019) by adding payload constraints and task deadlines. The paper looks at routing problems of medium-to-large size, e.g. 20 robots and 200 tasks. The reviewers are happy t...
train
[ "yXggKkUrhc1", "hwtms_hoZhx", "ThkUfgiYDSA", "REIbOB4yN2S", "huai1t0_Z3", "EbNI8u9xeTY", "67omBdOHOXr", "FF9RhlPjXNB", "gox93ALVhwx", "BsNqfrG0Xw0", "9lz4f285TXi", "duSh91TBIKj", "PooH6OMP1Tv", "M4OSBIO3yQ_", "GIYGfRkbzIQ", "Rnwzdwgiu9T", "U9_KRDAyU6", "EDTbD0WWvcB", "YH4KWqDagw"...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewe...
[ "Summary\n-------\nThe authors adapt an existing RL approach to combinatorial optimization to be used for their particular application of optimizing a fleet of UAVs (simulation) to deliver supplies. For this the problem is presented as a graph, a cost function is defined and an optimization is applied. \n\n\nCritiq...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_VNJUTmR-CaZ", "yXggKkUrhc1", "F-npOzNkFM", "yXggKkUrhc1", "yXggKkUrhc1", "F-npOzNkFM", "F-npOzNkFM", "F-npOzNkFM", "F-npOzNkFM", "F-npOzNkFM", "F-npOzNkFM", "F-npOzNkFM", "F-npOzNkFM", "F-npOzNkFM", "F-npOzNkFM", "yXggKkUrhc1", "yXggKkUrhc1", "yXggKkUrhc1", "yXggKkUrhc...
iclr_2021_P42rXLGZQ07
Direct Evolutionary Optimization of Variational Autoencoders with Binary Latents
Discrete latent variables are considered important to model the generation process of real world data, which has motivated research on Variational Autoencoders (VAEs) with discrete latents. However, standard VAE training is not possible in this case, which has motivated different strategies to manipulate discrete distr...
withdrawn-rejected-submissions
The paper presents an evolutionary optimization framework for training discrete VAEs, which is different to the standard way of training VAEs. One of the main criticism of the paper was the choice of experiments, but the authors addressed this point by adding an inpainting benchmark. Unfortunately, the reviewers' scor...
train
[ "Jw1sNnXiUBa", "UPzW2ZV8e7G", "PUl8LBFiJj4", "zuelLJ7oC5i", "-GlrY_ZYbvR", "jPkBhdEwad", "SV6EqcA_Il", "UA1Tq4xW3q", "WRiOFYszNFf", "F6HG5MP3Xzv" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to use evolutionary algorithm to learn truncated deep latent variable model. The method get good performance in denoising task. \n\nPros:\nQuality: The method seems correct \nOn denoising task, the performance of the proposed model is good. \nSignificance: Inference of Discrete VAE is an impor...
[ 5, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_P42rXLGZQ07", "iclr_2021_P42rXLGZQ07", "iclr_2021_P42rXLGZQ07", "WRiOFYszNFf", "F6HG5MP3Xzv", "PUl8LBFiJj4", "Jw1sNnXiUBa", "iclr_2021_P42rXLGZQ07", "iclr_2021_P42rXLGZQ07", "iclr_2021_P42rXLGZQ07" ]
iclr_2021_Mf4ZSXMZP7
Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
Lately, post-training quantization methods have gained considerable attention, as they are simple to use, and require only a small unlabeled calibration set. This small dataset cannot be used to fine-tune the model without significant over-fitting. Instead, these methods only use the calibration set to set the activati...
withdrawn-rejected-submissions
This paper received mixed reviews, 3 positives (7, 6, 6) and 2 negatives (4, 4). Due to the divergence of the reviews, I carefully read the paper and made my best efforts to understand the paper and the review comments. This paper proposes to learn a quantization network using a small calibration set given a network tr...
train
[ "29f6RlrgQKc", "4ByFF1n0oB", "Em63tUvdvms", "vXy2_e6kF9A", "qwhQbPqkhV7", "uIQGtlotTI", "O7Q946pkuZG", "mQUDoVjTXEX", "mHbL4_hmJnF", "r34FANQGLp", "3PQxXLv3lW6", "LGgSIt4lRt0", "VH858vlzD7x", "gKjagp4qWvK" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work presents a quite comprehensive multi-step scheme for post-training neural quantization that does not rely on large datasets or large computational resources. \n\nThe work is has significance in the domain of post-training neural quantization, especially in cases where only a small calibration set or lim...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 4 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5, 5 ]
[ "iclr_2021_Mf4ZSXMZP7", "mQUDoVjTXEX", "iclr_2021_Mf4ZSXMZP7", "gKjagp4qWvK", "vXy2_e6kF9A", "LGgSIt4lRt0", "3PQxXLv3lW6", "29f6RlrgQKc", "VH858vlzD7x", "mHbL4_hmJnF", "iclr_2021_Mf4ZSXMZP7", "iclr_2021_Mf4ZSXMZP7", "iclr_2021_Mf4ZSXMZP7", "iclr_2021_Mf4ZSXMZP7" ]
iclr_2021_ESVGfJM9a7
Neural Point Process for Forecasting Spatiotemporal Events
Forecasting events occurring in space and time is a fundamental problem. Existing neural point process models are only temporal and are limited in spatial inference. We propose a family of deep sequence models that integrate spatiotemporal point processes with deep neural networks. Our novel Neural Spatiotemporal Point...
withdrawn-rejected-submissions
There was a consensus among all the reviewers that the methodological contribution is not significant enough for publication at ICLR. In short, the main contribution of the paper is to include spatial modeling into a deep temporal point process model. However, to do that, they just use a well-known method (KDE) on top ...
train
[ "YTvhEDP8omn", "gOTuwL7ExMM", "FlnDAzEaw05", "98jkeXr59h", "kgl-67SR_FT", "2OW_FmyeOHe", "brhmHwmblEc", "hKj5rFzGyb9", "G8UvVgxv7FA" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your review.\n\n__Contribution of the work is not significant enough for publishing on ICLR...__\n\nA novel temporal component is not the focus of this paper; we clarified this and made explicit reference to Du et al in the revised paper. Our paper’s main goal is to fill the gap of spatial modeling a...
[ -1, -1, -1, -1, -1, 4, 4, 5, 8 ]
[ -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "2OW_FmyeOHe", "brhmHwmblEc", "hKj5rFzGyb9", "iclr_2021_ESVGfJM9a7", "G8UvVgxv7FA", "iclr_2021_ESVGfJM9a7", "iclr_2021_ESVGfJM9a7", "iclr_2021_ESVGfJM9a7", "iclr_2021_ESVGfJM9a7" ]
iclr_2021_49mMdsxkPlD
Iterative Amortized Policy Optimization
Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control, enabling the estimation and sampling of high-value actions. From the variational inference perspective on RL, policy networks, when employed with entropy or KL regularization, are a form of amortized optimizatio...
withdrawn-rejected-submissions
I think there is a lot to commend in this paper: the general approach for training f_phi in this way is creative and interesting, the discussion of the amortization gap is thought-provoking, and the general idea is not something that I have seen in the literature before. That said, the reviewers raise a number of impor...
train
[ "gzGi8cufAnu", "BvbFJUFeiHz", "IYg2r-rohYW", "_qmb6AxxgXq", "bgquG73xmyv", "4KKg3C71qNy", "RAt9vmQ8LC5", "F3OAXfmGith", "A3v0LnahxR", "xkUJq8TvKW_", "1kwvyTNPDQ", "R7Ed6Xcgc4P", "VAmQQXF_evW", "_23hHUPn0XZ", "7kdClKSvHaF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for doing more runs. \n\nAlso, I've checked the appendix, and only one seed is used to compare the four combinations of direct/iterative, A/B architecture (Figure A.2). How did you compare and determine the best performing value networks for each case? Could you include all runs you did when comparing them?...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "xkUJq8TvKW_", "F3OAXfmGith", "_qmb6AxxgXq", "bgquG73xmyv", "A3v0LnahxR", "R7Ed6Xcgc4P", "R7Ed6Xcgc4P", "VAmQQXF_evW", "_23hHUPn0XZ", "7kdClKSvHaF", "iclr_2021_49mMdsxkPlD", "iclr_2021_49mMdsxkPlD", "iclr_2021_49mMdsxkPlD", "iclr_2021_49mMdsxkPlD", "iclr_2021_49mMdsxkPlD" ]
iclr_2021_PpOtGYNVT6A
A Probabilistic Model for Discriminative and Neuro-Symbolic Semi-Supervised Learning
Strong progress has been achieved in semi-supervised learning (SSL) by combining several methods, some of which relate to properties of the data distribution p(x), others to the model outputs p(y|x), e.g. minimising the entropy of unlabelled predictions. Focusing on the latter, we fill a gap in the standard text by int...
withdrawn-rejected-submissions
Reviewers agree that this is a very promising paper, with an excellent overview of existing techniques for semi-supervised and neuro-symbolic learning. However, reviewers also agree that the paper is not ready. With one more revision for clarity, some limited empirical validation and illustration of the theory, and foc...
train
[ "jsJLJPLJZA", "KZVXlj7iPbr", "2025ctAZgzK", "ClLRXmerarC", "vA3KLg0nWY6", "5sPI_yMITMG", "qMV_4clWQo", "_JhSZwk6Ms1", "8Z-bMoTKKEz", "tY6jOWxVsgx", "Ri6de9pFn9a", "bjrSuO8tCpn" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper aims at proposing a theoretical rationale for discriminative semi-supervised learning that is comparable with that of generative models. Moreover,\nthe paper aims at theoretically justifying a family of neuro-symbolic SSL approaches.\nFor the first task, the paper states that the proposal justifies entro...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "iclr_2021_PpOtGYNVT6A", "2025ctAZgzK", "_JhSZwk6Ms1", "vA3KLg0nWY6", "5sPI_yMITMG", "jsJLJPLJZA", "tY6jOWxVsgx", "Ri6de9pFn9a", "bjrSuO8tCpn", "iclr_2021_PpOtGYNVT6A", "iclr_2021_PpOtGYNVT6A", "iclr_2021_PpOtGYNVT6A" ]
iclr_2021_S9MPX7ejmv
Approximating Pareto Frontier through Bayesian-optimization-directed Robust Multi-objective Reinforcement Learning
Many real-word decision or control problems involve multiple conflicting objectives and uncertainties, which requires learned policies are not only Pareto optimal but also robust. In this paper, we proposed a novel algorithm to approximate a representation for robust Pareto frontier through Bayesian-optimization-direct...
withdrawn-rejected-submissions
The paper studied multi-objective reinforcement learning (MORL), and provided a Bayesian optimization approach for challenging MORL scenarios in several simulation environments. The reviewers generally find it interesting to account for robustness in a MORL setup, and all appreciate the algorithmic contributions. Howev...
train
[ "X-7-qOHSTSi", "IId0S4t0fKr", "QI_1HVVwCm4", "oFVpylWu7jl", "SSsXIOh4BQp", "uACXJ6ORVcN", "SzFmsRGi7zv", "VgMc7gdpgYh", "POAGwkh1zs9", "Yhu4WsOSnEK", "5dUSx_Qvwfi", "PJ2zMFSSoAd", "YwSONhDTPIg", "uD4dHZz2H9y", "hgdEoLKiO7I", "eeWHdavKAgW", "3oquVarRdEw", "LDpBKwkKwkQ", "ZbD7Rh40E...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "author", "public"...
[ "This paper proposes a framework to tackle uncertainty in multi-objective optimization of reinforcement learning problems. Uncertainty is represented as an adversary over preferences. Fitness is measured by a multi-objective quality indicators while Bayesian optimization is used to bring improvements. The proposed ...
[ 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_S9MPX7ejmv", "iclr_2021_S9MPX7ejmv", "iclr_2021_S9MPX7ejmv", "Yhu4WsOSnEK", "Yhu4WsOSnEK", "VgMc7gdpgYh", "Yhu4WsOSnEK", "POAGwkh1zs9", "Yhu4WsOSnEK", "uD4dHZz2H9y", "X-7-qOHSTSi", "X-7-qOHSTSi", "QdaHHLQmoTJ", "QdaHHLQmoTJ", "6isphtcv46i", "6isphtcv46i", "6isphtcv46i", ...
iclr_2021_lbc44k2jgnX
Random Coordinate Langevin Monte Carlo
Langevin Monte Carlo (LMC) is a popular Markov chain Monte Carlo sampling method. One drawback is that it requires the computation of the full gradient at each iteration, an expensive operation if the dimension of the problem is high. We propose a new sampling method: Random Coordinate LMC (RC-LMC). At each iteration, ...
withdrawn-rejected-submissions
This paper proposes a new sampling method named Random Coordinate LMC (RC-LMC), which integrates the idea of randomized coordinate descent and Langenvine dynamic. The authors prove the total complexity of RC-LMC for log-concave probability distributions, which are better than that of LMC under different settings. The i...
train
[ "moVB59VNzVL", "MU5a6Cl1xD7", "7tJ_xtVqBK0", "FhOby0U8vGR", "RrlaG0nHJSL", "yJlmuVHcDd-", "DFDHWMbuaJj", "690BiJB-Mgs", "GCCRbRwfTUB", "AIaCOKT9z13" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Post rebuttal update:\nI read the other reviewers' responses, and, although I am still positive about this paper, I agree with R2 and R4 that safely fixing the theoretical proofs would require a full revision. For this reason, I am lowering my score to 6.\n\n%%%%%%%%%%%%%%%%%%%%%%%%%\n\nThe authors propose a varia...
[ 6, 6, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_lbc44k2jgnX", "iclr_2021_lbc44k2jgnX", "FhOby0U8vGR", "RrlaG0nHJSL", "MU5a6Cl1xD7", "moVB59VNzVL", "GCCRbRwfTUB", "AIaCOKT9z13", "iclr_2021_lbc44k2jgnX", "iclr_2021_lbc44k2jgnX" ]
iclr_2021_XZDeL25T12l
Can Students Outperform Teachers in Knowledge Distillation based Model Compression?
Knowledge distillation (KD) is an effective technique to compress a large model (teacher) to a compact one (student) by knowledge transfer. The ideal case is that the teacher is compressed to the small student without any performance dropping. However, even for the state-of-the-art (SOTA) distillation approaches, there...
withdrawn-rejected-submissions
The paper studies Knowledge Distillation (KD) to better understand the reasons behind the performance gap between student and teacher models. The analysis is done by conducting exploratory experiments. The paper establishes that the distillation data used for training a student can play a critical role in the performan...
train
[ "ZeFnNJGnuJD", "pCKPZBzY2R0", "Yvmczj1hpSp", "aZw_VUeL-bM", "La_d3U_w4Fu", "3QP7GLaIuoo", "CJFzlLHfjF", "woZr9_dadW1", "yP57gYnSTCB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nR1: Is the effect of improved student performance coming from having out-of-distribution data or simply more data for distillation ...\\\nA1: KD+ benefits from both. About out-of-distribution data: the ablation study in Table 3 demonstrates that out-of-distribution data are beneficial to KD. However, not all out...
[ -1, -1, -1, -1, -1, 6, 6, 3, 5 ]
[ -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "3QP7GLaIuoo", "CJFzlLHfjF", "woZr9_dadW1", "yP57gYnSTCB", "iclr_2021_XZDeL25T12l", "iclr_2021_XZDeL25T12l", "iclr_2021_XZDeL25T12l", "iclr_2021_XZDeL25T12l", "iclr_2021_XZDeL25T12l" ]
iclr_2021_Peg7mkjzvyP
iPTR: Learning a representation for interactive program translation retrieval
Program translation contributes to many real world scenarios, such as porting codebases written in an obsolete or deprecated language to a modern one or re-implementing existing projects in one's preferred programming language. Existing data-driven approaches either require large amounts of training data or neglect sig...
withdrawn-rejected-submissions
I found the setup for this paper a bit contrived. The tool is presented as a code translation tool, but it really functions more as a multi-language code search tool. The Idea is that one has a program in language A, and a database that contains the same program in language B, so one can translate from A to B simply by...
train
[ "e7Y8JxwmJaQ", "IBhfkl8aGs", "6T7BDUYosvb", "i8LsYGN8SIV", "sMakWDUezz", "e3jvjpk3uT", "X7yshqaya6j", "yvosYpMGFed", "GnfiX7LMoUr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your further suggestions. \n- About the novelty, our program representation can be implemented for many other applications. But of course we will conduct more experiments to validate its application scope in our future work. In this paper we aim to support program translation.\n- About the ...
[ -1, -1, -1, -1, -1, -1, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "IBhfkl8aGs", "sMakWDUezz", "X7yshqaya6j", "X7yshqaya6j", "GnfiX7LMoUr", "yvosYpMGFed", "iclr_2021_Peg7mkjzvyP", "iclr_2021_Peg7mkjzvyP", "iclr_2021_Peg7mkjzvyP" ]
iclr_2021_rd_bm8CK7o0
Q-Value Weighted Regression: Reinforcement Learning with Limited Data
Sample efficiency and performance in the offline setting have emerged as among the main challenges of deep reinforcement learning. We introduce Q-Value Weighted Regression (QWR), a simple RL algorithm that excels in these aspects. QWR is an extension of Advantage Weighted Regression (AWR), an o...
withdrawn-rejected-submissions
This paper proposed Q-value-weighted regression approach for improving the sample efficiency of DRL. It is related to recent papers on advantage-weighted regression methods for RL. The approach is interesting, intuitive, and bears merits. Developing a simple yet sample-efficient algorithm using weighted regression woul...
train
[ "coRiV0czfki", "T1omqjfGq3q", "nwpcQj8PNvW", "9RSYzGCSCkn", "QjEFBod5GdR", "wVlmZhYY2IZ", "bvP1QVR8JsG", "TRjBxhv-SjR", "0QStGn6fT1L", "s3u2_kmsbw6", "fol77HnePlM", "F-u7Li3kGqR" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper presents a Q-value weighted regression (QWR) on top of the advantage weighted regression (AWR) to improve the sample efficiency for offline RL settings. Through the analysis to the AWR, the authors claim that it performs poorly in scenarios with discrete actions, which motivates the development...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_rd_bm8CK7o0", "nwpcQj8PNvW", "9RSYzGCSCkn", "QjEFBod5GdR", "0QStGn6fT1L", "F-u7Li3kGqR", "coRiV0czfki", "s3u2_kmsbw6", "fol77HnePlM", "iclr_2021_rd_bm8CK7o0", "iclr_2021_rd_bm8CK7o0", "iclr_2021_rd_bm8CK7o0" ]
iclr_2021_ZHJlKWN57EQ
Revisiting BFfloat16 Training
State-of-the-art generic low-precision training algorithms use a mix of 16-bit and 32-bit precision, creating the folklore that 16-bit precision alone is not enough to maximize model accuracy. As a result, deep learning accelerators are forced to support both 16-bit and 32-bit compute units which is more costly than on...
withdrawn-rejected-submissions
After reading the paper, reviews and authors’ feedback. The meta-reviewer agrees with reviewers that the paper has limited novelty and could be more clear about mix precision training. Therefore this paper is rejected. Thank you for submitting the paper to ICLR.
train
[ "AHYkIuHCjdF", "J0YJ0DUYyJ5", "V-88kr3GuG", "ZAf-MZFZym0", "HwjASu5bXP", "wmdtwNgJYJv", "LFsxTemk0-Y", "wHZOvB0gg_w", "d_R4CHHSBNp", "J0TtxpQEccc", "HCiENkTrQEe", "6Wc6yv-7zWF", "6T1y9Alj2t", "7frBEYS7OlR", "PiMZzwLC_9a" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "During the rebuttal, I concluded that this submission is highly confusing, rather misleading. I was led to believe the authors are in fact talking 'pure 16b MAC' - meaning 16b FP multiplies and 16b accumulate. After reading their responses to R4, I now learnt that they in fact are using 32b accumulates as is alre...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "iclr_2021_ZHJlKWN57EQ", "wmdtwNgJYJv", "iclr_2021_ZHJlKWN57EQ", "HwjASu5bXP", "6Wc6yv-7zWF", "wHZOvB0gg_w", "AHYkIuHCjdF", "6T1y9Alj2t", "iclr_2021_ZHJlKWN57EQ", "d_R4CHHSBNp", "7frBEYS7OlR", "PiMZzwLC_9a", "iclr_2021_ZHJlKWN57EQ", "iclr_2021_ZHJlKWN57EQ", "iclr_2021_ZHJlKWN57EQ" ]
iclr_2021_wTWLfuDkvKp
Should Ensemble Members Be Calibrated?
Underlying the use of statistical approaches for a wide range of applications is the assumption that the probabilities obtained from a statistical model are representative of the “true” probability that event, or outcome, will occur. Unfortunately, for modern deep neural networks this is not the case, they are often ob...
withdrawn-rejected-submissions
This paper studies ensemble calibration and the relationship between the calibration of individual ensemble member models with the calibration of the resulting ensemble prediction. The main theoretical result is that individual ensemble members should not be individually calibrated in order to have a well-calibrated e...
test
[ "MjSf19FfIDl", "fFjq2GB9XEQ", "uB8KJi0x2u", "0jghtXjhC5Y", "2G17lz8QMr4", "pDqnFQstNQk", "TJcLI_L2jzT", "wd7pHHlg-K", "S4RynW2xubP", "8pTE7sm6jpm", "jDeESJO9ru5", "3UXJERKjP3t", "RmJc7BTvx8P", "bHUEOz6PAsr", "iY7l-cMvp2D", "JffRQVZ9bBA", "nItpPWy-Bp1", "e3FxYKshAlt", "2Ag2-IMdU0O...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "...
[ "Update after the author response: I've read the other reviews, and agree with R2 and R3. I think the paper is useful (emphasizes you need to calibrate the final ensemble, not enough to calibrate members), and has some nice conceptual contributions (explaining that if ensemble accuracy > average member accuracy (wh...
[ 6, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_wTWLfuDkvKp", "uB8KJi0x2u", "0jghtXjhC5Y", "pDqnFQstNQk", "iclr_2021_wTWLfuDkvKp", "3UXJERKjP3t", "nItpPWy-Bp1", "nItpPWy-Bp1", "nItpPWy-Bp1", "bHUEOz6PAsr", "Voo1SV1epj4", "iclr_2021_wTWLfuDkvKp", "JffRQVZ9bBA", "iY7l-cMvp2D", "2G17lz8QMr4", "2G17lz8QMr4", "MjSf19FfIDl", ...
iclr_2021_BIwkgTsSp_8
Learning to Noise: Application-Agnostic Data Sharing with Local Differential Privacy
In recent years, the collection and sharing of individuals’ private data has become commonplace in many industries. Local differential privacy (LDP) is a rigorous approach which uses a randomized algorithm to preserve privacy even from the database administrator, unlike the more standard central differential privacy. F...
withdrawn-rejected-submissions
The paper considers the problem of private data sharing under local differential privacy. (1) it assumes having access to a public unlabeled dataset for learning a VAE, so it reduces the dimensionality in a more meaningful way than simply running PCA. (2) the LDP guarantee is coming from the standard Laplace mechanis...
train
[ "K3wVhp5Zjzu", "HoNE9YA8Pa6", "HC2Qcy0YqAq", "1L3Zu9UFt6m", "VNRXBBtppFG", "f0J47wsypJN", "O862yo4ae-w", "HW9iYOc5dFP", "DmV3pbA-t2", "8Z9gRfIuMF_", "f_mbRtHTzik", "wYQfuiwrdPP", "MVl-mSVeuBm" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The VLM requires a clean, unlabelled training dataset that follows a similar distribution to the dataset one wishes to share under LDP. In many cases this would be a dataset that the organisation already has access to, rather than a public dataset. For example, it is highly likely that a public health body would h...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 5 ]
[ "HoNE9YA8Pa6", "O862yo4ae-w", "1L3Zu9UFt6m", "VNRXBBtppFG", "wYQfuiwrdPP", "MVl-mSVeuBm", "8Z9gRfIuMF_", "f_mbRtHTzik", "iclr_2021_BIwkgTsSp_8", "iclr_2021_BIwkgTsSp_8", "iclr_2021_BIwkgTsSp_8", "iclr_2021_BIwkgTsSp_8", "iclr_2021_BIwkgTsSp_8" ]
iclr_2021_QKbS9KXkE_y
Data-efficient Hindsight Off-policy Option Learning
Hierarchical approaches for reinforcement learning aim to improve data efficiency and accelerate learning by incorporating different abstractions. We introduce Hindsight Off-policy Options (HO2), an efficient off-policy option learning algorithm, and isolate the impact of action and temporal abstraction in the option f...
withdrawn-rejected-submissions
There was a fair amount of discussion about the paper. Several reviewers felt that the paper would have been stronger if it tried to do less but better. The reviews describe in detail what the reviewers would have found compelling, but the key suggestion is to remove the complexity that is not essential for the appro...
test
[ "0uiGvO2uKSW", "wjYjqwWtlh7", "bnosUaXa3yB", "DfqNN8e_vEj", "VZOL1dQI2Ky", "VHeUjNEOlPN", "ZVq7yzLIIX", "yKYKyVgmn9m", "VziVlhBSNSW", "YhiSk-jYJFy", "_smAERO5jtK", "SgzYra0F3j", "rFeLvUG0zvU", "mO056NnKmAa", "nHFvU4AxKGO", "1RvEnlJgP1E", "_4127M8OzA5", "yss9T9HBI0Z", "P1wv9_OyfV3...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ "## Summary\nThis paper introduces a novel option-learning policy gradient method, HO2. The method learns a parameterized joint distribution over options and actions and uses a soft-continuation based approach to interrupt or \"switch\" between options before option termination. The method introduces a new meta-par...
[ 3, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_QKbS9KXkE_y", "iclr_2021_QKbS9KXkE_y", "iclr_2021_QKbS9KXkE_y", "ZVq7yzLIIX", "ZVq7yzLIIX", "ZVq7yzLIIX", "_smAERO5jtK", "VziVlhBSNSW", "rFeLvUG0zvU", "1RvEnlJgP1E", "YhiSk-jYJFy", "bnosUaXa3yB", "SgzYra0F3j", "1HdufbjlTEQ", "yss9T9HBI0Z", "_4127M8OzA5", "0uiGvO2uKSW", "...
iclr_2021_SRzz6RtOdKR
Batch Inverse-Variance Weighting: Deep Heteroscedastic Regression
In model learning, when the training dataset on which the parameters are optimized and the testing dataset on which the model is evaluated are not sampled from identical distributions, we say that the datasets are misaligned. It is well-known that this misalignment can negatively impact model performance. A common sour...
withdrawn-rejected-submissions
The manuscript presents a deep network approach for heteroscedastic regression problem. It assumes the variance of heteroscedastic noise is known as privileged information and suggests to reweight the samples by their noise variance in the loss. Three reviewers agreed that the manuscript is not ready for publication. ...
train
[ "9MV-YVGu7Wl", "y__JPxMgQmO", "Vjaj2eiIPLN", "eZ_RUP2pekQ", "h4zrdPUIybz", "cpV5jZq1zzO", "crmGCKUiKcU", "SWDLaeYWkYz", "A4bZxRsMaEP", "hBcSXDthu8R", "HSmOH7DumWn" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to notify the reviewer that we have updated our manuscript with new results, as announced in our previous response. \n\nWe show that BIV is robust to moderate to high levels of noise in the variance. We also show that the number of samples in a mini-batch is not critical to the performance of BIV. Th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "crmGCKUiKcU", "eZ_RUP2pekQ", "cpV5jZq1zzO", "h4zrdPUIybz", "A4bZxRsMaEP", "hBcSXDthu8R", "HSmOH7DumWn", "iclr_2021_SRzz6RtOdKR", "iclr_2021_SRzz6RtOdKR", "iclr_2021_SRzz6RtOdKR", "iclr_2021_SRzz6RtOdKR" ]
iclr_2021_h9XgC7JzyHZ
Efficient estimates of optimal transport via low-dimensional embeddings
Optimal transport distances (OT) have been widely used in recent work in Machine Learning as ways to compare probability distributions. These are costly to compute when the data lives in high dimension. Recent work aims specifically at reducing this cost by computing OT using low-rank projections of the data (see...
withdrawn-rejected-submissions
We thank the authors for their submission. The paper feels more like an early draft, with several fundamental factual mistakes (mistake on computational and statistical complexities) as highlighted by the reviewers. There's plenty of material in the reviews to help authors improve their submission, we encourage them to...
train
[ "AcdlLXPgvMa", "RaPagfyGNjz", "0hNXn0MARw", "ks5E4y8nGZ", "BnCQQnJaEyE", "UejE_cpyaca", "tQgm2nhJUv", "Oi5lrxPp8V", "hnlK5BJR8Lz" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their detailed feedback and appreciate the help. We would like to answer first with some general comments. \nIndeed, the curse of dimensionality is attributed to the statistical estimation of OT distances, and to the sample size of the empirical distribution, not to the dimension of ...
[ -1, -1, -1, -1, -1, 4, 2, 4, 4 ]
[ -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "iclr_2021_h9XgC7JzyHZ", "UejE_cpyaca", "tQgm2nhJUv", "Oi5lrxPp8V", "hnlK5BJR8Lz", "iclr_2021_h9XgC7JzyHZ", "iclr_2021_h9XgC7JzyHZ", "iclr_2021_h9XgC7JzyHZ", "iclr_2021_h9XgC7JzyHZ" ]
iclr_2021_MkrAyYVmt7b
Perfect density models cannot guarantee anomaly detection
Thanks to the tractability of their likelihood, some deep generative models show promise for seemingly straightforward but important applications like anomaly detection, uncertainty estimation, and active learning. However, the likelihood values empirically attributed to anomalies conflict with the expectations these p...
withdrawn-rejected-submissions
Firstly, thank you authors for your thought-provoking submission and discussion. The key point of disagreement clearly is the fundamental assumption that "the result of an anomaly detection method should be invariant to any continuous invertible reparametrization f." All reviewers found this assumption to be too stro...
train
[ "1v3mI-oKtFQ", "Fjl-8WsSjAW", "LCH4rpwgWSm", "fxjBKoLwuS", "HFgZoJ2LHPU", "uWbPHWn9910", "VU9OWhVlnZE", "egkOEoH9Lp2", "tmVzNpKoyBU", "fTrbfxUZbGo", "W6U7jEkoa6", "iWnw9Yvb3V", "MHZgwKmNjcY", "r58xSDD68HZ", "zlGfjHYBvWw", "-Vbvrtbtc44", "SBSs8X0D66-", "wbmz3VlHNRM", "xuxK6Er2pRK"...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "...
[ "**Update**\n\nMy impression after the extensive discussion is that the remaining differences are possibly too subjective to come to an agreement:\n\n1) Whether the fact that the invertible reparameterization principle does not hold for anomaly detection represents a significant theoretical contribution. To me, I s...
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_MkrAyYVmt7b", "LCH4rpwgWSm", "fxjBKoLwuS", "HFgZoJ2LHPU", "uWbPHWn9910", "VU9OWhVlnZE", "iWnw9Yvb3V", "-Vbvrtbtc44", "1v3mI-oKtFQ", "W6U7jEkoa6", "wbmz3VlHNRM", "zlGfjHYBvWw", "HxfVX36hwJZ", "VSLlvpErFwb", "SBSs8X0D66-", "4SRWU8S_u6f", "YP6Rf_DwmLc", "BZc4sdjuA27", "f4...
iclr_2021_ZN3s7fN-bo
Interactive Visualization for Debugging RL
Visualization tools for supervised learning (SL) allow users to interpret, introspect, and gain an intuition for the successes and failures of their models. While reinforcement learning (RL) practitioners ask many of the same questions while debugging agent policies, existing tools aren't a great fit for the RL setting...
withdrawn-rejected-submissions
We also had some discussions about the paper that are not visible to the authors. To summarize: the reviewers appreciated the efforts the authors put into the replies and updates. While those clarifies quite a few points, the paper unfortunately is still not publishable in its current form at ICLR. Overall the paper t...
train
[ "uK47hEbGcwu", "3hD4vjB-If", "cFDcq4QBKaV", "h9ZxeeJT74-", "iaaJ35HR0P", "fjKDdnP_oms", "4AW79rY48nQ", "a_ZoKmgLLj_" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper deals with debugging of black-box deep reinforcement learning (RL) agents to better understand and fix their policies. The authors propose diverse tools for, among others, visualizing the state space in terms of calculated statistics, analyzing the taken actions across learning episodes or exploring the ...
[ 3, -1, -1, -1, -1, 5, 4, 6 ]
[ 4, -1, -1, -1, -1, 3, 3, 5 ]
[ "iclr_2021_ZN3s7fN-bo", "fjKDdnP_oms", "4AW79rY48nQ", "uK47hEbGcwu", "a_ZoKmgLLj_", "iclr_2021_ZN3s7fN-bo", "iclr_2021_ZN3s7fN-bo", "iclr_2021_ZN3s7fN-bo" ]
iclr_2021_OjUsDdCpR5
Inferring Principal Components in the Simplex with Multinomial Variational Autoencoders
Covariance estimation on high-dimensional data is a central challenge across multiple scientific disciplines. Sparse high-dimensional count data, frequently encountered in biological applications such as DNA sequencing and proteomics, are often well modeled using multinomial logistic normal models. In many cases, thes...
withdrawn-rejected-submissions
Authors extend the probabilistic PCA framework to multinomial-distributed data. Scalable estimation of principal components in the model is achieved using a multinomial variational autoencoder in combination with an isometric log-ratio (ILR) transform. The reviewers did not agree on the degree of novelty of the paper t...
train
[ "T6QXUWxZZLH", "Nm_rRHAqcu", "aVEc0Zmu9X2", "1bCLIdYBZmA", "iktkQ1ErtyO", "CVLdQJCHIc", "8TbGG_y0451", "FcuvI6zUkY7" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper extends prior results, namely that VAEs are able to learn the principal components. The novelty is the extension to a new distribution: multinomial logistic-normal distribution. This is achieved by using the Isometric log-ratio (ILR) transform. While prior results were derived analytically, this paper p...
[ 4, -1, -1, -1, -1, 5, 6, 7 ]
[ 3, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2021_OjUsDdCpR5", "CVLdQJCHIc", "FcuvI6zUkY7", "8TbGG_y0451", "T6QXUWxZZLH", "iclr_2021_OjUsDdCpR5", "iclr_2021_OjUsDdCpR5", "iclr_2021_OjUsDdCpR5" ]
iclr_2021_KOtxfjpQsq
Meta-Model-Based Meta-Policy Optimization
Model-based reinforcement learning (MBRL) has been applied to meta-learning settings and has demonstrated its high sample efficiency. However, in previous MBRL for meta-learning settings, policies are optimized via rollouts that fully rely on a predictive model of an environment. Thus, its performance in ...
withdrawn-rejected-submissions
The paper presents a meta-learning for Model-based RL that introduces branched rollouts to improve sample efficiency of the learned model. While the paper addresses an important topic of sample efficiency in RL, and provides theoretical analysis, the reviewers raised concerns with the novelty and clarity. The extensio...
train
[ "YSvrP0ImDA3", "TWMUzMJyjZA", "cXoVlZ8-htT", "fLGHy_p55IR", "sBU3NuMwZo1", "CUUWcqU3pEm", "4UIsjvTZfEt", "QPFhxk3hokp", "R9qcFIlPDS", "OglSMOcOSnG" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your comments. \nWe posted our response to common concerns on our work, at the top of this board. \n\nHere is our response to your comments: \n\n> Both Theorem 1 and 2 seem to be a straightforward combination of the results in [1] and the fact that POMDPs can be cast as MDPs with history-st...
[ -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "4UIsjvTZfEt", "QPFhxk3hokp", "R9qcFIlPDS", "OglSMOcOSnG", "TWMUzMJyjZA", "iclr_2021_KOtxfjpQsq", "iclr_2021_KOtxfjpQsq", "iclr_2021_KOtxfjpQsq", "iclr_2021_KOtxfjpQsq", "iclr_2021_KOtxfjpQsq" ]
iclr_2021_10XWPuAro86
Hamiltonian Q-Learning: Leveraging Importance-sampling for Data Efficient RL
Model-free reinforcement learning (RL), in particular Q-learning is widely used to learn optimal policies for a variety of planning and control problems. However, when the underlying state-transition dynamics are stochastic and high-dimensional, Q-learning requires a large amount of data and incurs a prohibitively high...
withdrawn-rejected-submissions
The paper considers exploiting low-rank structure in Q-function and the Hamiltonian Monte-Carlo (HMC) to approximate the expectation in Q-learning to reduce the stochastic approxiamtion error, and thus, achieves "efficient RL". The authors tested the algorithm empirically within some simple environments. As reviewer...
train
[ "tnrzj926yUP", "AnoLxXRs0cD", "4zRoSnIaksP", "JwiWwUiqsCy", "3mOcpAZPJM", "J-DtPl1ajar", "Tottmja2zb4", "n82y2ogRzNV", "JGuP15h8l30", "AqwrHKLJOK8", "0-XFAMC6z5E", "z0uFo8cdxHk", "Lz7lBzakiEj", "sfh-IWnLOR", "1gCvznIRzok", "8ZEdqv0mltT", "7InAYkVss-", "4UCExaVTxn6", "hd17clZLxvp"...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "We much appreciate your prompt response! In this work the Hamiltonian dynamics has been used only for drawing samples to estimate the expectation of $Q$ values associated with next states. The Hamiltonian dynamics is completely decoupled from the $Q$ function iteration over time. \n\nThe Hamiltonian dynamics depen...
[ -1, -1, -1, -1, 5, -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "Lz7lBzakiEj", "JwiWwUiqsCy", "JGuP15h8l30", "8ZEdqv0mltT", "iclr_2021_10XWPuAro86", "hd17clZLxvp", "3mOcpAZPJM", "iclr_2021_10XWPuAro86", "AqwrHKLJOK8", "0-XFAMC6z5E", "sfh-IWnLOR", "iclr_2021_10XWPuAro86", "7InAYkVss-", "n82y2ogRzNV", "hd17clZLxvp", "3mOcpAZPJM", "z0uFo8cdxHk", "...
iclr_2021_gfwfOskyzSx
Redefining The Self-Normalization Property
The approaches that prevent gradient explosion and vanishing have boosted the performance of deep neural networks in recent years. A unique one among them is the self-normalizing neural network (SNN), which is generally more stable than initialization techniques without explicit normalization. The self-normalization pr...
withdrawn-rejected-submissions
This paper proposed two variants of the SELU activation function, termed the leaky SELU (lSELU) and scaled SELU (sSELU), respectively, in order to yield a stronger self-normalization property. The review process and the discussion find the following issues: - The hyperparameter tuning for the baselines is insufficient...
train
[ "mNs0SmVoC8_", "l-_XsfvMHco", "AORqw9Fa4I1", "zJGAE-jRDw", "9pxUpTMCfLL", "wtBvLme389z", "TkafAuYCW9e", "SDyKOjY_rMf", "v7Sj28XcMkb", "SNfVm7922lv", "nKsmL3S8HrJ", "V0If1Fk5V1W", "tDsG19PqYY", "cF7ES1ya5km", "JzAdr8CSFp", "LfhHXy1txNo", "TOJkaPp1FGo", "NNeHlhEE3ou", "UeGZ29bAmPa"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "# Update\n\nI thank the authors for extensive replies and updates to the paper. Most of my questions are answered, and the paper quality is substantially improved. I would not be opposed if other reviewers recommends to accept it. Unfortunately I still can't raise the score and advocate for it myself, since:\n\n1)...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_gfwfOskyzSx", "iclr_2021_gfwfOskyzSx", "zJGAE-jRDw", "tDsG19PqYY", "LfhHXy1txNo", "NNeHlhEE3ou", "9pxUpTMCfLL", "v7Sj28XcMkb", "SNfVm7922lv", "UeGZ29bAmPa", "V0If1Fk5V1W", "tDsG19PqYY", "cF7ES1ya5km", "JzAdr8CSFp", "mNs0SmVoC8_", "l-_XsfvMHco", "wtBvLme389z", "2jyUA7LDqF...
iclr_2021_0SPUQoRMAvc
Semantic-Guided Representation Enhancement for Self-supervised Monocular Trained Depth Estimation
Self-supervised depth estimation has shown its great effectiveness in producing high quality depth maps given only image sequences as input. However, its performance usually drops when estimating on border areas or objects with thin structures due to the limited depth representation ability. In this paper, we address t...
withdrawn-rejected-submissions
The authors address the problem of self-supervised monocular depth estimation via training with only monocular videos. They propose to use additional information extracted from semantic segmentation at training time to (i) provide additional “semantic context” supervision and (ii) to improve depth estimation at discont...
train
[ "qGbhXLMUkw", "AGlWaiBd2su", "VXvx5YpGI2Y", "3Xc5eguPSAA", "9ljzkXUL-ft", "lNTwlSK7AV3", "J8f34eBKlcR", "YsI1_MRvF2", "q6rx2YATZ99", "c6HF4M1K9xd", "-AaCrXEW_jv", "zCb3lataGkk", "WP8RSoUyqZm", "c4jUejeK9qx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Strengths:\n\n1. Writing is good. It is easy to read.\n2. Although limited, the proposed term really improves the performance.\n3. The qualitative results show better depth estimation in object boundary areas. \n\n\n\nWeakness:\n\n4. “How to generate semantic labels” is a very important problem in this paper. Simp...
[ 5, 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 5, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_0SPUQoRMAvc", "iclr_2021_0SPUQoRMAvc", "iclr_2021_0SPUQoRMAvc", "qGbhXLMUkw", "qGbhXLMUkw", "iclr_2021_0SPUQoRMAvc", "c4jUejeK9qx", "c4jUejeK9qx", "VXvx5YpGI2Y", "VXvx5YpGI2Y", "AGlWaiBd2su", "AGlWaiBd2su", "qGbhXLMUkw", "iclr_2021_0SPUQoRMAvc" ]
iclr_2021_arNvQ7QRyVb
Sharing Less is More: Lifelong Learning in Deep Networks with Selective Layer Transfer
Effective lifelong learning across diverse tasks requires diverse knowledge, yet transferring irrelevant knowledge may lead to interference and catastrophic forgetting. In deep networks, transferring the appropriate granularity of knowledge is as important as the transfer mechanism, and must be driven by the relationsh...
withdrawn-rejected-submissions
The reviewers enjoyed reading about an interesting take on lifelong learning, encapsulating an EM methodology for selecting a transfer configuration and then optimizing the parameters. R3 made valid concerns regarding comparison with previous, recent work. R2 also would prefer to see more thorough experiments (ideally ...
val
[ "Z6LiMXoKarF", "pMwgJk0FbMB", "aVazq8Z_gyT", "LZeGfFh0p31", "5cz3dBawy2U", "DSHBeX1Fqse", "z101zE9LRk7", "THZ4bWOrBD", "lOV99D_8k1n", "ZJjlqIYyplh", "4pH4DkhvrxI" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "-Summary-\nThe paper proposes a method for selective weight sharing per layer during continual learning. The authors show observations that sharing all layers can not be optimal for lifelong learning. Hence, they adopt a layerwise transfer configuration vector which decides activated layer-sharing at specific task...
[ 4, 6, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 5, 2, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_arNvQ7QRyVb", "iclr_2021_arNvQ7QRyVb", "iclr_2021_arNvQ7QRyVb", "ZJjlqIYyplh", "DSHBeX1Fqse", "THZ4bWOrBD", "4pH4DkhvrxI", "Z6LiMXoKarF", "pMwgJk0FbMB", "iclr_2021_arNvQ7QRyVb", "iclr_2021_arNvQ7QRyVb" ]
iclr_2021_fpJX0O5bWKJ
Estimating Example Difficulty using Variance of Gradients
In machine learning, a question of great interest is understanding what examples are challenging for a model to classify. Identifying atypical examples helps inform safe deployment of models, isolates examples that require further human inspection, and provides interpretability into model behavior. In this work, we pro...
withdrawn-rejected-submissions
This paper provides a new uncertainty measure of examples called "Variance of Gradients" (VoGs); it demonstrates that VoGs are correlated with mistakes, and can be useful for guiding optimization. On the positive side, the reviewers generally think that the ideas of this paper is nice and contribute to the research t...
train
[ "XExNbpXK_7U", "7DVh9t5m-oS", "kNR4csB2Q8f", "loX7rskl0e6", "TrPc8z6Qn8", "kVH61qoL0JR", "LN8zVwGNfmP", "xmSJfO7qBQ", "yDyIl21aTv0", "nD4u8cOZK8e", "MqiNIxGhxd4", "4B8BM8J3jND", "8GOsHpMuRDn", "rORb0u5dCC", "z5TPERrNIul" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose to use a scalar measures called Variance of Gradients to discover challenging-to-learn examples on which the model is more likely to make an error. They illustrate that low VoG examples are \"prototypical\" and more easily understood, while high VoG examples typical exhibit occlusions, strange ...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "iclr_2021_fpJX0O5bWKJ", "kNR4csB2Q8f", "loX7rskl0e6", "MqiNIxGhxd4", "4B8BM8J3jND", "8GOsHpMuRDn", "z5TPERrNIul", "XExNbpXK_7U", "rORb0u5dCC", "rORb0u5dCC", "XExNbpXK_7U", "iclr_2021_fpJX0O5bWKJ", "iclr_2021_fpJX0O5bWKJ", "iclr_2021_fpJX0O5bWKJ", "iclr_2021_fpJX0O5bWKJ" ]
iclr_2021_Q5ZxoD2LqcI
On the use of linguistic similarities to improve Neural Machine Translation for African Languages
In recent years, there has been a resurgence in research on empirical methods for machine translation. Most of this research has been focused on high-resource, European languages. Despite the fact that around 30% of all languages spoken worldwide are African, the latter have been heavily under investigated and this, pa...
withdrawn-rejected-submissions
This paper introduces a new multilingual parallel Bible dataset for African languages, a new method for determining similarities between languages, and a collection of experiments to evaluate methods for choosing an additional language based on (a) similarity and (b) language history to include in a multilingual MT sys...
train
[ "qB9zWsnF5pP", "0MSuxulQxz3", "CYBKeGVBAuT", "CO3gAQ8q-b_", "oqnbZgzfteY", "NWyBOq57d6U", "g7IV5s8uQAO" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper motivates clearly the need for research in machine translation of underresourced (and thus underresearched) African languages, and proposes ways to aid the training of MT systems using data from related languages. The main contribution is that two ways (and a random baseline) to select a langauge to add...
[ 5, -1, -1, -1, 3, 4, 4 ]
[ 4, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_Q5ZxoD2LqcI", "NWyBOq57d6U", "g7IV5s8uQAO", "qB9zWsnF5pP", "iclr_2021_Q5ZxoD2LqcI", "iclr_2021_Q5ZxoD2LqcI", "iclr_2021_Q5ZxoD2LqcI" ]
iclr_2021_98fWAc-sFkv
A Unified Bayesian Framework for Discriminative and Generative Continual Learning
Continual Learning is a learning paradigm where learning systems are trained on a sequence of tasks. The goal here is to perform well on the current task without suffering from a performance drop on the previous tasks. Two notable directions among the recent advances in continual learning with neural networks are (1) v...
withdrawn-rejected-submissions
This paper proposes a Bayesian non-parametric method for task-incremental continual learning. It is more general than previous work in that it considers the network structure as a random variable and works for both supervised and unsupervised settings. Experimental results show that the proposed method outperforms prio...
train
[ "4eVr-eStwC_", "FiPb3k4K3Hc", "zEpniBqAZnW", "cUXPWENgznX", "Ko5qm0M4qSG", "xICu2xKFvyS", "ij6ucEL7eG1", "-PAznKNr1Ue", "6WnaFVWJ160", "mQ15HreBtb4", "BmByTvczv6w", "rg_-pW9_NJQ", "2LfdwLOv6dG", "jO1GfmPLEML", "N6LbKTPB_H", "o8K36_tPmYq", "_oHLrL7JdQl" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper proposes a continual learning framework based on Bayesian non-parametric approach. The hidden layer is modeled using Indian Buffet Process prior. The inference uses a structured mean-field approximation with a Gaussian family for the weights, and Beta-Bernoulli for the task-masks. The variat...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 4 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2021_98fWAc-sFkv", "iclr_2021_98fWAc-sFkv", "xICu2xKFvyS", "ij6ucEL7eG1", "2LfdwLOv6dG", "BmByTvczv6w", "6WnaFVWJ160", "mQ15HreBtb4", "rg_-pW9_NJQ", "jO1GfmPLEML", "_oHLrL7JdQl", "N6LbKTPB_H", "o8K36_tPmYq", "4eVr-eStwC_", "iclr_2021_98fWAc-sFkv", "iclr_2021_98fWAc-sFkv", "iclr...
iclr_2021_Dtahsj2FkrK
A REINFORCEMENT LEARNING FRAMEWORK FOR TIME DEPENDENT CAUSAL EFFECTS EVALUATION IN A/B TESTING
A/B testing, or online experiment is a standard business strategy to compare a new product with an old one in pharmaceutical, technological, and traditional industries. The aim of this paper is to introduce a reinforcement learn- ing framework for carrying A/B testing in two-sided marketplace platforms, while character...
withdrawn-rejected-submissions
This paper proposes a testing procedure to determine whether a policy is better than another policy with respect to long-term treatment effects. The reviewers found the problem interesting and saw a lot of value in this work. One of the key concerns was the lack of clarity throughout the paper. The reviews helped the a...
test
[ "Pv0OdbbffZx", "lPzjrre8GVY", "gMXliYK61M", "TWBMNFXo4ET", "zzMqOBw0dc", "i5jwUf7qqxk", "KCyZHYeL8F" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We greatly thank your valuable comments, many of which will lead to a more readable and self-contained version of our paper. We attempt to address all the points in the following. The revised manuscript taking into accounts all your suggestions has been uploaded. Please refer to the most updated revision for detai...
[ -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, 2, 2, 4 ]
[ "KCyZHYeL8F", "i5jwUf7qqxk", "KCyZHYeL8F", "zzMqOBw0dc", "iclr_2021_Dtahsj2FkrK", "iclr_2021_Dtahsj2FkrK", "iclr_2021_Dtahsj2FkrK" ]
iclr_2021_hBxSksqPuOg
Random Network Distillation as a Diversity Metric for Both Image and Text Generation
Generative models are increasingly able to produce remarkably high quality images and text. The community has developed numerous evaluation metrics for comparing generative models. However, these metrics do not effectively quantify data diversity. We develop a new diversity metric that can readily be applied to data...
withdrawn-rejected-submissions
The paper proposes the generalization performance of distillation from random networks as a metric of diversity, named RND. Intuitively, the more diverse the generated datasets, the more difficult it should be for a model to learn a random computation. The reviewers agree that the metric has a novel perspective. Unfort...
train
[ "iCM76vG6yV", "kT6xEXbwgr5", "9kD2QYqKyso", "EFgoDmxP_5V", "IZxG0w1GmI", "5Hbmf_yAi16", "ev7FuUmLZj8", "Kzm-q0M2i59", "JWvuZCMVgyE", "PGKJAwvCBn", "oS68VEzJwOI" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors introduce a new quantitative diversity measure advocating its usage for generative models evaluation. In a nutshell, to measure the diversity of a particular set, the authors split it into disjoint train/val subsets and learn a DNN to predict the outputs of another randomly initialized D...
[ 4, -1, -1, -1, -1, -1, -1, 5, 4, 6, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "iclr_2021_hBxSksqPuOg", "iCM76vG6yV", "Kzm-q0M2i59", "JWvuZCMVgyE", "PGKJAwvCBn", "oS68VEzJwOI", "iclr_2021_hBxSksqPuOg", "iclr_2021_hBxSksqPuOg", "iclr_2021_hBxSksqPuOg", "iclr_2021_hBxSksqPuOg", "iclr_2021_hBxSksqPuOg" ]
iclr_2021_EArH-0iHhIq
ON NEURAL NETWORK GENERALIZATION VIA PROMOTING WITHIN-LAYER ACTIVATION DIVERSITY
During the last decade, neural networks have been intensively used to tackle various problems and they have often led to state-of-the-art results. These networks are composed of multiple jointly optimized layers arranged in a hierarchical structure. At each layer, the aim is to learn to extract hidden patterns needed t...
withdrawn-rejected-submissions
The paper looks into the generalization performance of NNs in the supervised learning setting. The authors propose a regularizer that enhances neuron diversity in each layer (within-layer activation diversity) to improve generalization. The proposed idea is an extension of Cogswell's work with different regulariz...
train
[ "PSTTIdq8EOf", "tHSOqzp7Z_q", "o05wMyqHb6j", "CZ48wRpYz3f", "82i9Le5EC0", "yFCtAf4Hg6R", "p4gcjKqbZNg", "rJW_nIR5PBH", "G69F_m1IJYG", "rOJ7gTfoxz9", "BayzwwLdUuX" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes adding regularization terms to encourage diversity of the layer outputs in order to improve the generalization performance. The proposed idea is an extension of Cogswell's work with different regularization terms. In addition, the authors performed detailed generalization analysis based on the ...
[ 5, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_EArH-0iHhIq", "o05wMyqHb6j", "82i9Le5EC0", "PSTTIdq8EOf", "p4gcjKqbZNg", "rOJ7gTfoxz9", "G69F_m1IJYG", "BayzwwLdUuX", "iclr_2021_EArH-0iHhIq", "iclr_2021_EArH-0iHhIq", "iclr_2021_EArH-0iHhIq" ]
iclr_2021_o20_NVA92tK
A Critical Analysis of Distribution Shift
We introduce three new robustness benchmarks consisting of naturally occurring distribution changes in image style, geographic location, camera operation, and more. Using our benchmarks, we take stock of previously proposed hypotheses for out-of-distribution robustness and put them to the test. We find that using large...
withdrawn-rejected-submissions
The authors propose a new dataset to evaluate the robustness of image classifiers. The dataset consists of data from three sources: a crowdsourced dataset collected by the authors called ImageNet-Renditions, images from Google street view, and data sampled from DeepFashion2. This new dataset allows the authors to test ...
train
[ "9UFNIWsIoUE", "8Go16B-VJl", "WJD3ha1QcJq", "6FELUR2Qu9q", "IKH1HjHzVGY", "Rd75RbH-c8y", "ln87ls3ZSs", "xCML7tRveZ", "nZQgngNcbzy", "psPoltv4zLW", "SpbLaGt1-A4", "WV2basFnQSN", "9_4-DIVgXTg", "OOkXTIy-emD", "VEktooGNG6e", "oa9auOee8jr", "cF9zkQx9lWI", "ua_AWmH0gFe" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "o...
[ "This paper provides a empirical study on the robustness of image classification models to distributions shifts. The authors construct three benchmark datasets that control for effects like artistic renditions of common classes, view-point changes, and geographic shifts (among others). The datasets are then used to...
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_o20_NVA92tK", "iclr_2021_o20_NVA92tK", "ua_AWmH0gFe", "WJD3ha1QcJq", "Rd75RbH-c8y", "9_4-DIVgXTg", "xCML7tRveZ", "OOkXTIy-emD", "psPoltv4zLW", "cF9zkQx9lWI", "8Go16B-VJl", "WJD3ha1QcJq", "9UFNIWsIoUE", "8Go16B-VJl", "8Go16B-VJl", "cF9zkQx9lWI", "iclr_2021_o20_NVA92tK", "...
iclr_2021_-BA38x6Cf2
Can Kernel Transfer Operators Help Flow based Generative Models?
Flow-based generative models refer to deep generative models with tractable likelihoods, and offer several attractive properties including efficient density estimation and sampling. Despite many advantages, current formulations (e.g., normalizing flow) often have an expensive memory/runtime footpr...
withdrawn-rejected-submissions
All four reviewers were against accepting the paper. A major point shared by everyone was lack of clarity: this included its overall writing, its discussion toward prior work, and imprecise math to explain the ideas. The paper did improve quite a bit over its revisions. Whether this clarified all of the reviewers' unde...
train
[ "Zlsnh_iZ7z", "IykCvK389xu", "mbkarafXdR", "ZMhqoWhhyLB", "yMKZG4jFO0m", "QSfyHe-te-H", "ZNY8RELDNMu", "vRAWpbJ3PS_", "xSK7_GLgpV_", "V5DAy1HrfQ", "6u1O1HKDf5A" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**** Summary ****\n\nThe authors build a generator that builds on top of the latent space of a “well-trained auto-encoder”. The generator consists of several steps: 1) sampling an latent element from the spherical latent space, 2) using a kernel Perron-Frobenius operator to embed the sampled latent element 3) sele...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, 2, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_-BA38x6Cf2", "iclr_2021_-BA38x6Cf2", "Zlsnh_iZ7z", "Zlsnh_iZ7z", "Zlsnh_iZ7z", "V5DAy1HrfQ", "IykCvK389xu", "6u1O1HKDf5A", "iclr_2021_-BA38x6Cf2", "iclr_2021_-BA38x6Cf2", "iclr_2021_-BA38x6Cf2" ]
iclr_2021_g6OrH2oT5so
Bridging the Imitation Gap by Adaptive Insubordination
When expert supervision is available, practitioners often use imitation learning with varying degrees of success. We show that when an expert has access to privileged information that is unavailable to the student, this information is marginalized in the student policy during imitation learning resulting in an ''imitat...
withdrawn-rejected-submissions
Reviewers acknowledged that the problem addressed in this paper is interesting and is not solved by the existing literature. They appreciated that the setup was well defined and the paper was clearly written. Yet they kept several concerns after the rebuttal. Especially, they expected the comparison to be done with alg...
train
[ "tJ3jOE3xO4j", "oad-xiAwlNq", "BrA3MKcNtkp", "LxZc93u6f_", "mFfJYFIOna5", "yOqhVhhkZnU", "f2M1MR9XEKE", "ndh9boNBIZc", "PEQRqx5536z", "Z8575NVVuj" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[Summary]\n\nPaper aims at attacking the \"imitation gap\" in the canonical LfD problem, where the expert and learner have different observation spaces, leading to poor performance if merely adopting imitation learning without self-exploration. A novel learning-based \nweight proposing function is introduced to mi...
[ 6, -1, -1, -1, -1, -1, 5, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, -1, 4, 3 ]
[ "iclr_2021_g6OrH2oT5so", "PEQRqx5536z", "tJ3jOE3xO4j", "mFfJYFIOna5", "Z8575NVVuj", "f2M1MR9XEKE", "iclr_2021_g6OrH2oT5so", "f2M1MR9XEKE", "iclr_2021_g6OrH2oT5so", "iclr_2021_g6OrH2oT5so" ]
iclr_2021_4I5THWNSjC
BasisNet: Two-stage Model Synthesis for Efficient Inference
We present BasisNet which combines recent advancements in efficient neural network architectures, conditional computation, and early termination in a simple new form. Our approach uses a lightweight model to preview an image and generate input-dependent combination coefficients, which are later used to control the synt...
withdrawn-rejected-submissions
The reviews were largely split in the beginning. Some of the concerns were firmly addressed, e.g. new results evaluating the actual latency on real hardware, and one reviewer raised the score from 5 to 6. However, another reviewer was not fully convinced by the response and decided to keep the original score of  "3: Clea...
val
[ "9J3vtzCEP7b", "TBKg2V1ar1X", "Z7SbseJjDs0", "-YAkYZIzv5N", "NRfMllcKvc7", "UeLEMRZjMhb", "BcMkRjxC8Xy", "3S5xb5eb73w", "lQ_faOkOvn" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "This paper present an efficient network, named BasisNet, which combines recent advancements in efficient neural network architectures, conditional computation, and early termination. BasisNet can be applied to any network architectures. BasisNet shows state-of-the-art ImageNet performance in mobile setting.\n\nMy ...
[ 3, 6, -1, -1, -1, -1, -1, -1, 7 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_4I5THWNSjC", "iclr_2021_4I5THWNSjC", "iclr_2021_4I5THWNSjC", "TBKg2V1ar1X", "9J3vtzCEP7b", "lQ_faOkOvn", "3S5xb5eb73w", "iclr_2021_4I5THWNSjC", "iclr_2021_4I5THWNSjC" ]
iclr_2021_SncSswKUse
Factorized linear discriminant analysis for phenotype-guided representation learning of neuronal gene expression data
A central goal in neurobiology is to relate the expression of genes to the structural and functional properties of neuronal types, collectively called their phenotypes. Single-cell RNA sequencing can measure the expression of thousands of genes in thousands of neurons. How to interpret the data in the context of neuron...
withdrawn-rejected-submissions
The paper introduces a linear projection method, inspired by ANOVA, for finding a supervised low-dimensional embedding. A positive aspect is that the method is straightforward, and it is even slightly surprising that in the family of linear models, there still was an uncovered "niche". The paper was considered useful...
train
[ "lKJGIzg_qX", "kYg7nZyRopU", "dIIOWscmhKt", "FidWP64ITAW", "j9B0saUUCXO", "mw4zy0XUOBr", "khXQ8h6nKu", "axMsl01-D5_", "cRXFjoW1WQF", "7wdl2t-3gNu", "DJhKsWre9LE", "B1Uvhq5nQZy" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "COMMENT1: \"First: I don't understand why switching labels inside one location type should have any impact.\"\n\nResponse: Thanks for the comment. The purpose here is to switch labels of the type T4a with another type, for example, T5a, and see how that affects the metric scores of FLDA. For the reference conditio...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "kYg7nZyRopU", "axMsl01-D5_", "B1Uvhq5nQZy", "DJhKsWre9LE", "mw4zy0XUOBr", "cRXFjoW1WQF", "iclr_2021_SncSswKUse", "7wdl2t-3gNu", "iclr_2021_SncSswKUse", "iclr_2021_SncSswKUse", "iclr_2021_SncSswKUse", "iclr_2021_SncSswKUse" ]
iclr_2021_UmrVpylRExB
Dual-Tree Wavelet Packet CNNs for Image Classification
In this paper, we target an important issue of deep convolutional neural networks (CNNs) — the lack of a mathematical understanding of their properties. We present an explicit formalism that is motivated by the similarities between trained CNN kernels and oriented Gabor filters for addressing this problem. The core ide...
withdrawn-rejected-submissions
The paper is motivated by the observed similarity between learned filters at the low layers of a convolutional neural network and oriented Gabor filters. It proposes to replace the lower layers with dual tree wavelet packet transforms, which yield fixed oriented frequency-selective features. Instead of learning filters...
train
[ "S3MbQQC36uA", "I4bD8uezydJ", "bg37UP3JurW", "ak4dnafIwFg", "9UQ2FeGjM3B", "3fVsXsxG3nH", "mN8Xe6jQuhZ", "6r8MdHNbEED", "jDfpY8KDRcn", "ol7JfpTad1g", "VhZTM_z4jnO" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers,\n\nWe have submitted a revised version of our paper, taking into account your comments and suggestions. The changes appear in blue to facilitate review. Here are the main differences with the first version.\n- Introduction: the main goals have been clarified and the advantages over related works ha...
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "iclr_2021_UmrVpylRExB", "bg37UP3JurW", "6r8MdHNbEED", "jDfpY8KDRcn", "ol7JfpTad1g", "VhZTM_z4jnO", "iclr_2021_UmrVpylRExB", "iclr_2021_UmrVpylRExB", "iclr_2021_UmrVpylRExB", "iclr_2021_UmrVpylRExB", "iclr_2021_UmrVpylRExB" ]
iclr_2021_0jPp4dKp3PL
Integrating linguistic knowledge into DNNs: Application to online grooming detection
Online grooming (OG) of children is a pervasive issue in an increasingly interconnected world. We explore various complementary methods to incorporate Corpus Linguistics (CL) knowledge into accurate and interpretable Deep Learning (DL) models. They provide an implicit text normalisation that adapts embedding spaces to ...
withdrawn-rejected-submissions
Most reviewers did not feel that this paper was ready for publication. I thank the authors for answering all the concerns of the reviewers, running new experiments and submitting a revised version; however, this was not enough to alleviate the reviewers' concerns, notably relating to the handling of the ethical con...
train
[ "gy4Gi_7rEmp", "4di_quBQOT8", "uxZqjPxNt6n", "u7SUegBnXHM", "ZisDcdTzzt", "JilQqSw6dnl", "KU0DR-w9Eh", "pLn88agtAxJ", "qolJLTrIZXt", "VBEYTYISYj", "YRBgJQF7Imn", "--xgRVQ8YT4", "forJAV2ZHCS", "nmxzn-nfFVl", "5BOnM4u3n88" ]
[ "public", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The initial reviewers for this paper flagged two key issues in ethics:\n1. On “dealing with the PAN-12 dataset and how the data was created”\n2. On “a system that could be used in law enforcement”\n\nOn (1):The author’s understanding of data ethics, based on their reply, may be too simplistic. Just because data i...
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_0jPp4dKp3PL", "iclr_2021_0jPp4dKp3PL", "u7SUegBnXHM", "JilQqSw6dnl", "pLn88agtAxJ", "forJAV2ZHCS", "--xgRVQ8YT4", "4di_quBQOT8", "4di_quBQOT8", "4di_quBQOT8", "4di_quBQOT8", "nmxzn-nfFVl", "5BOnM4u3n88", "iclr_2021_0jPp4dKp3PL", "iclr_2021_0jPp4dKp3PL" ]
iclr_2021_xtKFuhfK1tK
Communication-Efficient Sampling for Distributed Training of Graph Convolutional Networks
Training Graph Convolutional Networks (GCNs) is expensive as it needs to aggregate data recursively from neighboring nodes. To reduce the computation overhead, previous works have proposed various neighbor sampling methods that estimate the aggregation result based on a small number of sampled neighbors. Although these...
withdrawn-rejected-submissions
The paper introduces a new locality-aware importance weighted sampling procedure for distributed training of GNNs. While the paper is interesting, the reviewers raised some fundamental concerns about it. The focus of the paper is on scalable methods, and yet the experiments are only run on medium-size datasets (<2M nodes). F...
train
[ "oQTzbffAw0", "_0zUho_iufj", "wE5octuLDSA", "e07FUIxA0r3", "L45TtxWL0A", "-x1PPh8dev", "xtPcPNvrmt", "YTs-4WBkV0Y" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This work considers the challenge of distributed training for GNNs. The approach is a locality-aware importance weighted sampling procedure. I was not given much time to read the paper but it seems like a decent contribution, albeit too minor of a contribution to the existing literature to be considered a bonafide...
[ 5, -1, -1, -1, -1, 4, 4, 6 ]
[ 3, -1, -1, -1, -1, 5, 5, 3 ]
[ "iclr_2021_xtKFuhfK1tK", "iclr_2021_xtKFuhfK1tK", "oQTzbffAw0", "xtPcPNvrmt", "-x1PPh8dev", "iclr_2021_xtKFuhfK1tK", "iclr_2021_xtKFuhfK1tK", "iclr_2021_xtKFuhfK1tK" ]
iclr_2021_2Ey_1FeNtOC
Minimum Description Length Recurrent Neural Networks
Recurrent neural networks (RNNs) face two well-known challenges: (a) the difficulty of such networks to generalize appropriately as opposed to memorizing, especially from very short input sequences (generalization); and (b) the difficulty for us to understand the knowledge that the network has attained (transparency). ...
withdrawn-rejected-submissions
Rather than using backprop to train RNNs, this paper instead explores using genetic algorithms (GAs) to train them, along with an extra Minimum Description Length objective to search in the space of the simplest possible networks that can perform the task at hand. They demonstrate that the method can indeed find minimal RNNs that, when tra...
train
[ "99Kl1fjlMa", "IAG7wbSEgPV", "uIgrqL2kzOL", "TNAQY_ht69", "_vUn7TRwMpQ", "xCGOFHrFxqK", "_5g5ZpJVMxX", "-qXy79O-55" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you so much for such a detailed and constructive review.\n\nRNN competitors can indeed deal with some of the tasks here well. We agree that our claim about addition was misleading, the addition tasks that are completely beyond the scope of RNN have not been attempted here. Yet, the current results however sh...
[ -1, -1, -1, -1, 3, 4, 6, 4 ]
[ -1, -1, -1, -1, 3, 3, 3, 5 ]
[ "_vUn7TRwMpQ", "xCGOFHrFxqK", "_5g5ZpJVMxX", "-qXy79O-55", "iclr_2021_2Ey_1FeNtOC", "iclr_2021_2Ey_1FeNtOC", "iclr_2021_2Ey_1FeNtOC", "iclr_2021_2Ey_1FeNtOC" ]
iclr_2021_0rNLjXgchOC
Dissecting Hessian: Understanding Common Structure of Hessian in Neural Networks
Hessian captures important properties of the deep neural network loss landscape. We observe that eigenvectors and eigenspaces of the layer-wise Hessian for neural network objective have several interesting structures -- top eigenspaces for different models have high overlap, and top eigenvectors form low rank matrices ...
withdrawn-rejected-submissions
This paper studies different properties of the top eigenspace of the Hessian of a deep neural network and their overlap. It raised quite a lot of discussion, which ultimately went in a not very constructive direction. The reviewers generally agree that the paper has potential, but the actual contribution is limited. Pros: - The...
val
[ "fpOECxB50R", "kFf39y2CUY9", "oDL1xh4LsA", "Oq3285366z6", "FiU9GbEE9X-", "-NVbZeHBkFX", "qFhr3T1OCNr", "xgeqSzqX_e3", "DjCb_hNl30D", "yQDZYq2KPo8", "GT0K-M-85rZ", "JRcpdQPu0RL" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe paper studies the structure of the Hessian matrix of loss functions by approximating Hessians using Kronecker factorizations. Combining the Kronecker factorization with PAC-Bayes, the authors provide a tighter bound on classification error. \n\nPros:\n- The paper contains many experiments.\n- Interes...
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2 ]
[ "iclr_2021_0rNLjXgchOC", "FiU9GbEE9X-", "FiU9GbEE9X-", "iclr_2021_0rNLjXgchOC", "qFhr3T1OCNr", "JRcpdQPu0RL", "GT0K-M-85rZ", "yQDZYq2KPo8", "fpOECxB50R", "iclr_2021_0rNLjXgchOC", "iclr_2021_0rNLjXgchOC", "iclr_2021_0rNLjXgchOC" ]
iclr_2021_otuxSY_QDZ9
Multilayer Dense Connections for Hierarchical Concept Classification
Classification is a pivotal function for many computer vision tasks such as image recognition, object detection, scene segmentation. Multinomial logistic regression with a single final layer of dense connections has become the ubiquitous technique for CNN-based classification. While these classifiers project a mapping...
withdrawn-rejected-submissions
All reviewers recommended rejection after considering the rebuttal from the authors. The main weaknesses of the submission include poorly motivated claims and designs, and insufficient experimental comparisons. The AC did not find sufficient grounds to overturn the reviewers' consensus recommendation.
test
[ "d-tyWDeYi4y", "9RH2tHjlH7s", "xrIf46wPWmE", "Sum7NCkLPf", "wDIHoNMU6mZ", "Ady6P3MK6S", "2at9Wr4q_qW", "Etod9PVtLBt", "wDX6DeunN1s" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the feedback.\n\n**Re: claims on learned representation:** We agree with the reviewers that the statements regarding what representations CNNs learn were sloppy. We will rephrase these statements in the next revision and upload it soon -- we will incorporate the suggestions emerging from ...
[ -1, -1, -1, -1, -1, 3, 5, 5, 2 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, 5 ]
[ "wDX6DeunN1s", "Ady6P3MK6S", "Etod9PVtLBt", "iclr_2021_otuxSY_QDZ9", "2at9Wr4q_qW", "iclr_2021_otuxSY_QDZ9", "iclr_2021_otuxSY_QDZ9", "iclr_2021_otuxSY_QDZ9", "iclr_2021_otuxSY_QDZ9" ]
iclr_2021_dak8uQE6BOG
MVP: Multivariate polynomials for conditional generation
Conditional Generative Adversarial Nets (cGANs) have been widely adopted for image generation. cGANs take i) a noise vector and ii) a conditional variable as input. The conditional variable can be discrete (e.g., a class label) or continuous (e.g., an input image) resulting into class-conditional (image) generation an...
withdrawn-rejected-submissions
This paper proposes a new network architecture that implements higher order multivariate polynomials (MVP). They show that MVP generalizes well to different types of conditional variables, and can be applied to a broad range of tasks. However, unifying discrete and continuous conditions and network without activation...
val
[ "yoXjDxxZUSv", "5cieSyV6La", "nzWfwWdWwa2", "KjQp5p9LB1J", "WHcYEpCuVis", "8juBl5ySKm5", "yvZE6wCWri", "cGj6lDocLL7", "FcKyNpe4uh", "RWVnMN-KV0", "9R79ZI6us_7", "sy1nKIR_0zs", "uAJWOvPPcWN", "uhmQ8612VoM", "-44ArbKDu51", "fulqB8DPGZy", "zECcQBxLUAz", "IBAufCv8RWB" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "**Post rebuttal (round #3)**\n\nThanks to the authors' effort on the rebuttal. Despite the extensive efforts, I feel the review/rebuttal iteration is not satisfactory, possibly due to some miscommunication.\n\nTo be clear, I want to re-emphasize that significant parts of my concerns were about **misleading claims*...
[ 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_dak8uQE6BOG", "iclr_2021_dak8uQE6BOG", "iclr_2021_dak8uQE6BOG", "cGj6lDocLL7", "FcKyNpe4uh", "FcKyNpe4uh", "FcKyNpe4uh", "iclr_2021_dak8uQE6BOG", "yoXjDxxZUSv", "5cieSyV6La", "5cieSyV6La", "IBAufCv8RWB", "IBAufCv8RWB", "IBAufCv8RWB", "yoXjDxxZUSv", "yoXjDxxZUSv", "yoXjDxxZ...
iclr_2021_5USOVm2HkfG
Jointly-Trained State-Action Embedding for Efficient Reinforcement Learning
While reinforcement learning has achieved considerable successes in recent years, state-of-the-art models are often still limited by the size of state and action spaces. Model-free reinforcement learning approaches use some form of state representations and the latest work has explored embedding techniques for actions,...
withdrawn-rejected-submissions
Most reviewers believe that the paper is not ready for publication. Among their concerns are: - whether the new experiment with 10 runs is conducted correctly, - the significance of the theoretical part, - correctness of Lemma 2, - generalization claims may not follow from the theoretical results, - comparison with Zh...
test
[ "BxhtWu1DHr", "FVilI55OMt", "_Z9ZiFQQQeq", "ZOO77ZyURUc", "qY-eWMfa-Mt", "isI9Ua7NGeM", "ZbY8WLbkJpO", "TbnIt7ZP4R", "vt4shDwND5e", "ioJFBIEn9_B", "JRTl1vChQcu", "BwW91edUOge", "6yg7lCjQlSP", "Yv48IMd5Bt", "H22zAeVS_A5", "MkZ0YG6E32j", "cFJ7jeI2TqW", "Wc7fgBbvEz", "gisrCSRnDtE", ...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_revi...
[ "The paper proposes a framework of jointly learning a state and action embedding using the model of the environment, eventually using those embeddings to learn a parameterized control policy using standard policy gradient (PG) methods. Joint learning of state and action embeddings allows us to capture the interacti...
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6 ]
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "iclr_2021_5USOVm2HkfG", "iclr_2021_5USOVm2HkfG", "qY-eWMfa-Mt", "isI9Ua7NGeM", "ukptq1-2qDf", "ZbY8WLbkJpO", "vt4shDwND5e", "BwW91edUOge", "or3WLkpvehE", "nishnuLFbw", "6yg7lCjQlSP", "sb3yVwCeAsn", "fajBSnC5Hoa", "QX-tXplM_K", "FVilI55OMt", "cFJ7jeI2TqW", "ukptq1-2qDf", "iclr_2021...
iclr_2021_I4pQCAhSu62
Balancing Robustness and Sensitivity using Feature Contrastive Learning
It is generally believed that robust training of extremely large networks is critical to their success in real-world applications. However, when taken to the extreme, methods that promote robustness can hurt the model’s sensitivity to rare or underrepresented patterns. In this paper, we discuss this trade-off between r...
withdrawn-rejected-submissions
This paper proposes Feature Contrastive Learning (FCL), a training framework that takes a more nuanced view of robustness, refining it to the sensitivity of the feature. There are some differing opinions among the reviewers, with some applauding the simplicity of this new take on robustness while others are unsure of ...
train
[ "v9N4kEkkJgx", "BdLMha3H8_N", "C8MLqDo9YCP", "Aro1gI0Q6zW", "MCuKdUlppiO", "WHA_DJONPK", "s7-IBM9s4Bd", "yCOgIkb0qnB", "1IRNMFhQ7ew", "vULsuFVpd-V", "cCyWIV4tTVN", "QYtn4r57Py" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "### Summary\nIn this work, the authors focus on the robustness against only common corruptions and perturbations by defining a contextual feature utility metric. It measures the magnitude of the change in the loss of a perfect model that an input feature can incur. They leverage this metric to design a utility-awa...
[ 7, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2021_I4pQCAhSu62", "iclr_2021_I4pQCAhSu62", "iclr_2021_I4pQCAhSu62", "BdLMha3H8_N", "v9N4kEkkJgx", "BdLMha3H8_N", "iclr_2021_I4pQCAhSu62", "cCyWIV4tTVN", "v9N4kEkkJgx", "QYtn4r57Py", "iclr_2021_I4pQCAhSu62", "iclr_2021_I4pQCAhSu62" ]
iclr_2021_WC04PD6dFrP
Deep Jump Q-Evaluation for Offline Policy Evaluation in Continuous Action Space
We consider off-policy evaluation (OPE) in continuous action domains, such as dynamic pricing and personalized dose finding. In OPE, one aims to learn the value under a new policy using historical data generated by a different behavior policy. Most existing works on OPE focus on discrete action domains. To handle conti...
withdrawn-rejected-submissions
The paper considers the OPE problem under the contextual bandit model with continuous action. They studied the model of a piecewise constant value function according to the actions. The assumption is new, though still somewhat restrictive as it requires the piecewise constant partitions to be the same for all x. Th...
train
[ "ijamhv400s-", "wrMx81rtqN0", "Dptt-8J09p4", "Rp_wz1Lj40" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary of paper: \nThe main contribution of this paper is a new algorithm to learn the expected reward function for a given target policy using the historical data generated by a different behavior policy in continuous action domains. All current Offline-Policy Evaluation (OPE) methods for handling continuous ac...
[ 8, 5, 6, 6 ]
[ 1, 3, 3, 3 ]
[ "iclr_2021_WC04PD6dFrP", "iclr_2021_WC04PD6dFrP", "iclr_2021_WC04PD6dFrP", "iclr_2021_WC04PD6dFrP" ]
iclr_2021_mYNfmvt8oSv
D2RL: Deep Dense Architectures in Reinforcement Learning
While improvements in deep learning architectures have played a crucial role in improving the state of supervised and unsupervised learning in computer vision and natural language processing, neural network architecture choices for reinforcement learning remain relatively under-explored. We take inspiration from succes...
withdrawn-rejected-submissions
The paper shows that replacing fully connected layers by dense layers in the networks used by actors and critics in RL can improve the results significantly. The improvements for several RL techniques across several benchmarks are very nice. That being said, replacing fully connected layers by dense layers is not p...
train
[ "4CvrinDpgzo", "zrGuHuRVl4F", "hjT_5TVAuwR", "vd6jKpWqCgu", "VvgRYTTr0R", "ltSrMftUke9", "tPasSKOVkSv", "Yku_mgVb-az", "ChpYk6HpOBT", "CzJnY6lazFR", "-jYdPr_G7Dv", "6SJzesNUSL", "Y8Vj_RmncZO", "0bEVkTj6veM", "2Sy4SSbXGwY", "m96Me7d-7qd", "LZa1cAMS6E4" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this work, the authors propose a neural network architecture that concatenates the input state with hidden state activations over multiple layers in order to train deeper networks in an RL setting. Whilst the work does improve over standard MLP in this setting, is seems like an incremental work that lacks real ...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_mYNfmvt8oSv", "iclr_2021_mYNfmvt8oSv", "vd6jKpWqCgu", "VvgRYTTr0R", "ltSrMftUke9", "ChpYk6HpOBT", "6SJzesNUSL", "LZa1cAMS6E4", "2Sy4SSbXGwY", "LZa1cAMS6E4", "4CvrinDpgzo", "m96Me7d-7qd", "2Sy4SSbXGwY", "2Sy4SSbXGwY", "iclr_2021_mYNfmvt8oSv", "iclr_2021_mYNfmvt8oSv", "iclr_...
iclr_2021_7K0UUL9y9lE
You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling
Transformer-based models have come to dominate the landscape in a wide range of natural language processing (NLP) applications. The heart of the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and consequently, depends quadratically on the input s...
withdrawn-rejected-submissions
The paper presents an interesting idea for making self-attention efficient. Several reviewers were not satisfied with the experiments because they did not include runtime results and sought-after benchmarks. The rebuttal did a good job of clarifying a few of those with newly added experiments that make the paper stronger. However, ...
train
[ "70A1xElh7OE", "_8YhfSo3fNV", "qPscRAJUh6", "UZKs-rFGzes", "O4_v63BskF", "g8pSKWYsDHC", "oadOqxTfv5G", "00OMxMl8jbe", "GdNUZpUxH9L", "eC-sAowKE_", "VTl7JKi4-75", "0WJ9QAJTEUT" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer and answer the main questions below. \n\n-\n\nQ) In evaluation there is never an explicit comparison with respect to both time and performance. I appreciate that given a large enough sequence length YOSO will always be faster but will it be good enough?\n\nIdeally, we would want to train a ra...
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "eC-sAowKE_", "GdNUZpUxH9L", "eC-sAowKE_", "VTl7JKi4-75", "0WJ9QAJTEUT", "GdNUZpUxH9L", "0WJ9QAJTEUT", "VTl7JKi4-75", "iclr_2021_7K0UUL9y9lE", "iclr_2021_7K0UUL9y9lE", "iclr_2021_7K0UUL9y9lE", "iclr_2021_7K0UUL9y9lE" ]
iclr_2021_oj3bHNSq_2w
Sample weighting as an explanation for mode collapse in generative adversarial networks
Generative adversarial networks were introduced with a logistic MiniMax cost formulation, which normally fails to train due to saturation, and a Non-Saturating reformulation. While addressing the saturation problem, NS-GAN also inverts the generator's sample weighting, implicitly shifting emphasis from higher-scoring t...
withdrawn-rejected-submissions
I think we did learn something new from this paper, and I think the reviewers all seem to agree with this. The observation you make about the objective seems correct and interesting (though reviewers and ACs do sometimes miss errors), but I have the following complaints that keep me from recommending acceptance: 1. The...
test
[ "kXfo5M7Nvk0", "R0KcBYMrzVs", "zooT4WuAr16", "DU5YpKtxx29", "oZ_74C3yzMN", "sZDukTplxQn", "MbGqg_Acc-", "TtZ9JVc7qz5", "U67Y5Dylu2H", "AJsBaQvPpid", "R3t02cugnSh", "rjqaJcRcVfT", "UC-X5ZhiOw", "Eb4DB89FOJJ", "uxlTTCddHIV", "vnq1YymHET", "uFuVPtswbXR", "1FevZyomcVd", "RoKA3yx7pPf"...
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Edit final revision: see supplementary L.2, fig 12 and fig 13.\n\nNote also the more in-depth discussion and empirical results in Appendix L.\n\nThe linear model for D’s scores is an oversimplification, and your intuition about the loss and the increase in logits eventually tapering off is correct. However, in our...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "J9xS09WqEpP", "J9xS09WqEpP", "J9xS09WqEpP", "iclr_2021_oj3bHNSq_2w", "uxlTTCddHIV", "iclr_2021_oj3bHNSq_2w", "rjqaJcRcVfT", "J9xS09WqEpP", "sZDukTplxQn", "uxlTTCddHIV", "sZDukTplxQn", "sZDukTplxQn", "sZDukTplxQn", "kZS0cBafc4f", "iclr_2021_oj3bHNSq_2w", "J9xS09WqEpP", "J9xS09WqEpP",...
iclr_2021_mLeIhe67Li6
Learning One-hidden-layer Neural Networks on Gaussian Mixture Models with Guaranteed Generalizability
We analyze the learning problem of fully connected neural networks with the sigmoid activation function for binary classification in the teacher-student setup, where the outputs are assumed to be generated by a ground-truth teacher neural network with unknown parameters, and the learning objective is to estimate the te...
withdrawn-rejected-submissions
This paper gives a way to learn one-hidden-layer neural networks when the input comes from a Gaussian mixture model. The main algorithm uses [Janzamin et al. 2014] as an initialization and then performs gradient descent. The main contribution of this paper is 1. to give a characterization of sample complexity for esti...
test
[ "FWh1zp9Nhtt", "cO4DVNNjdDU", "OO0NTpvWxq", "RWzanFNzIir", "TDv-L36Zn-", "oWbot5WILFv", "ufcB-SWrft1", "M9SVbhzCQdX", "m8VRqzmOEEf", "4cPYOjav1fc", "MVQvosdGb0N", "Dqb-pzvCqXc", "SCjs565Eple", "Qg9lbyIA3nl", "OcLZ-ge8IZ0", "emvOtsH9fh" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of learning one-hidden-layer neural networks with Gaussian mixture input in the teacher-student setting. The authors consider the neural network with sigmoid activation functions and the learning algorithm is gradient descent plus tensor initialization. There is a line of research...
[ 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_mLeIhe67Li6", "iclr_2021_mLeIhe67Li6", "oWbot5WILFv", "TDv-L36Zn-", "MVQvosdGb0N", "ufcB-SWrft1", "cO4DVNNjdDU", "MVQvosdGb0N", "Dqb-pzvCqXc", "Qg9lbyIA3nl", "OcLZ-ge8IZ0", "SCjs565Eple", "FWh1zp9Nhtt", "emvOtsH9fh", "iclr_2021_mLeIhe67Li6", "iclr_2021_mLeIhe67Li6" ]
iclr_2021_vcKVhY7AZqK
Quantifying Task Complexity Through Generalized Information Measures
How can we measure the “complexity” of a learning task so that we can compare one task to another? From classical information theory, we know that entropy is a useful measure of the complexity of a random variable and provides a lower bound on the minimum expected number of bits needed for transmitting its state. In th...
withdrawn-rejected-submissions
This paper proposes a measure of task complexity based on a decision-DAG like "encoder" where we iteratively branch on some test on the input and the selection of future tests depends on the answer to previous tests until we reach a terminal node in the DAG. We require that if $x$ and $x'$ reach the same terminal node...
train
[ "Z87CPcxuxK8", "N633b7bsl25", "-peiLn9-uxQ", "91dUCst_8HN", "jB3B7CXYAp1", "Li2lW9rYniw", "Uc2Cs8WgaoR", "w15-fGETE_", "HFDLlTPlyni" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In accordance with the reviews, we have fixed typos and made some minor modifications to the submission. \n\nSome specific modifications:\n1. To make the notation simpler we have changed $q$ to be the function in the query set $Q$ and $q(X)$ the answer evaluated at input $X$. The prior notation of $q$ being the in...
[ -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 1, 3, 5 ]
[ "iclr_2021_vcKVhY7AZqK", "w15-fGETE_", "w15-fGETE_", "Uc2Cs8WgaoR", "HFDLlTPlyni", "HFDLlTPlyni", "iclr_2021_vcKVhY7AZqK", "iclr_2021_vcKVhY7AZqK", "iclr_2021_vcKVhY7AZqK" ]
iclr_2021_3GYfIYvNNhL
Characterizing Structural Regularities of Labeled Data in Overparameterized Models
Humans are accustomed to environments that contain both regularities and exceptions. For example, at most gas stations, one pays prior to pumping, but the occasional rural station does not accept payment in advance. Likewise, deep neural networks can generalize across instances that share common patterns or struc...
withdrawn-rejected-submissions
This paper proposes the c-score, which is the aggregation of a "consistency profile" that measures per-instance generalization. Naive computation of the c-score is expensive and thus requires an approximation. The paper then uses the c-score to analyze several image benchmarks and their learning dynamics. While th...
val
[ "Nu6_ZVjK_3E", "MHL1xo3gPBM", "wrSSKMP5iXq", "IQbubzAtJJ", "DXrJysirAc", "QwbroYz_HHe", "Kfg7hnVKAs", "aWoadjVzCg", "QgIvgxhbwbi" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper formulates a consistency score (C-score, which characterizes\nthe expected accuracy for a held-out instance given training sets of varying size\nsampled from the data distribution) to measure the regularity of an example. It also proposes approximations for C-score and study structural regular...
[ 5, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_3GYfIYvNNhL", "IQbubzAtJJ", "IQbubzAtJJ", "DXrJysirAc", "Nu6_ZVjK_3E", "aWoadjVzCg", "QgIvgxhbwbi", "iclr_2021_3GYfIYvNNhL", "iclr_2021_3GYfIYvNNhL" ]
iclr_2021_YMsbeG6FqBU
The Advantage Regret-Matching Actor-Critic
Regret minimization has played a key role in online learning, equilibrium computation in games, and reinforcement learning (RL). In this paper, we describe a general model-free RL method for no-regret learning based on repeated reconsideration of past behavior: Advantage Regret-Matching Actor-Critic (ARMAC). Rather tha...
withdrawn-rejected-submissions
This paper introduces a new algorithm to solve games, more or less similar (in the general idea, yet the differences are interesting) to CFR. The concept is to sample from past policies to generate trajectories and update sequentially (via regret matching). The three reviewers gave rather lukewarm reviews, with possible ...
train
[ "CNcFSEXA1dI", "cH_NUqTVNkv", "rBnNU8mplr", "34pQORTC-LD", "uO9OrSPvnrH", "hEW6ukPgosW" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper considers the problem of counterfactual regret minimization and proposes an algorithm that does not use the importance sampling procedure. The claim is that this helps in reducing the variance usually introduced by the IS procedure. They propose a new algorithm that uses the previously used policies as ...
[ 6, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, 3, 5 ]
[ "iclr_2021_YMsbeG6FqBU", "hEW6ukPgosW", "CNcFSEXA1dI", "uO9OrSPvnrH", "iclr_2021_YMsbeG6FqBU", "iclr_2021_YMsbeG6FqBU" ]
iclr_2021_zq4bt_0z-gz
Latent Programmer: Discrete Latent Codes for Program Synthesis
In many sequence learning tasks, such as program synthesis and document summarization, a key problem is searching over a large space of possible output sequences. We propose to learn representations of the outputs that are specifically meant for search: rich enough to specify the desired output but compact enough to mak...
withdrawn-rejected-submissions
The paper tackles program synthesis using a discrete latent code approach, enabling two-level beam search decoding. The approach is well motivated, as program synthesis requires high-level choices that affect long subsequences of the output, and discrete codes are amenable to heuristic search. Empirical results show im...
train
[ "NibCYzKxZbJ", "hmxE-zpZURs", "GIFzK-Qr4-M", "cMu-4Erwpj3", "mlAlXlL2Soa", "HtsiRzHZ1t", "xGgcT9U8mdI", "WI4oqSPGloK", "OH6e2ENnwbV", "1ojA9_eqxk-", "cKN7zUF6hug", "proGf20QMNf", "qtXF0dRW-9V", "mWUgIp3OUmC", "Kuuft1LDVNO", "QyeHzCorz0d" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "### Summary ###\nThe paper addresses the problem of program synthesis from examples, and also evaluates on program synthesis from natural language descriptions.\nThe paper proposes Latent Programmer, an approach that employs an adapted VQ-VAE to predict a sequence of latent codes, and then generates the output pro...
[ 3, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_zq4bt_0z-gz", "iclr_2021_zq4bt_0z-gz", "iclr_2021_zq4bt_0z-gz", "mWUgIp3OUmC", "proGf20QMNf", "xGgcT9U8mdI", "1ojA9_eqxk-", "NibCYzKxZbJ", "iclr_2021_zq4bt_0z-gz", "WI4oqSPGloK", "proGf20QMNf", "Kuuft1LDVNO", "QyeHzCorz0d", "GIFzK-Qr4-M", "iclr_2021_zq4bt_0z-gz", "iclr_2021_...
iclr_2021_NMgB4CVnMh
Acoustic Neighbor Embeddings
This paper proposes a novel acoustic word embedding called Acoustic Neighbor Embeddings where speech or text of arbitrary length are mapped to a vector space of fixed, reduced dimensions by adapting stochastic neighbor embedding (SNE) to sequential inputs. The Euclidean distance between coordinates in the embedding spa...
withdrawn-rejected-submissions
This paper proposes to learn the embedding of an audio segment in the framework of stochastic neighbor embedding (SNE), where the embeddings of the same word should be close to each other. The method was initially demonstrated for name recognition. The use of SNE for acoustic embedding is novel, and this is recognized by all...
train
[ "bLQhUNk6YM", "7C1hAEnNf8N", "-LOA5yZg3vM", "RY3w2Ujr0T", "GEkVxFAt9g", "NokC09aenyP", "rekS2YPc2B", "Jcq5qy1Mdg", "ijTfuhBtIz", "b7dA9eXjf9M", "VtjffcwFBDw", "YsOwlp65LL4", "EU1ErXhdOg", "PNzRU6VtVtd", "-il1P9NzU5B", "6AFvQl6LZQV", "Q5nUOUGoiHz", "nXSKvxLbGlU" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In this work, the authors propose a new training method to learn acoustic embeddings by simultaneously training two encoder networks, one for speech (f) and one for text (g), such that the resulting embeddings from the two networks are in a common subspace. At test time, the embedding for a given speech input usin...
[ 6, -1, 6, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 4, -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_NMgB4CVnMh", "GEkVxFAt9g", "iclr_2021_NMgB4CVnMh", "YsOwlp65LL4", "iclr_2021_NMgB4CVnMh", "Jcq5qy1Mdg", "-LOA5yZg3vM", "-il1P9NzU5B", "6AFvQl6LZQV", "iclr_2021_NMgB4CVnMh", "iclr_2021_NMgB4CVnMh", "GEkVxFAt9g", "GEkVxFAt9g", "bLQhUNk6YM", "b7dA9eXjf9M", "b7dA9eXjf9M", "nXS...
iclr_2021_tq5JAGsedIP
Time-varying Graph Representation Learning via Higher-Order Skip-Gram with Negative Sampling
Representation learning models for graphs are a successful family of techniques that project nodes into feature spaces that can be exploited by other machine learning algorithms. Since many real-world networks are inherently dynamic, with interactions among nodes changing over time, these techniques can be defined both...
withdrawn-rejected-submissions
The paper is concerned with learning representations for time-varying graphs, which is an important problem relevant to the ICLR community. For this purpose, the authors propose a new method that extends skip-gram with negative sampling to higher-order tensors, with the goal of performing an implicit tensor factorizatio...
val
[ "7mVekEUSQsz", "rF_BqBhMW3v", "4_buMuYcKc8", "026-jSRFNJK", "HUktZRHmFoH", "1DKRqbqrGW3", "4ivEj_CXxm", "sIRrOR01MZ_", "Q-fhXJOr3T3" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank the reviewer for the insightful comments and careful reading. Here we address the main concerns of the review:\n\n1) On comparisons with respect to existing models.\nWe added in section A.10 of the Appendix additional experiments with a 3-order relational learning model (HOLE [1]) and 4-orde...
[ -1, -1, -1, -1, -1, 5, 5, 4, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "sIRrOR01MZ_", "1DKRqbqrGW3", "4ivEj_CXxm", "Q-fhXJOr3T3", "iclr_2021_tq5JAGsedIP", "iclr_2021_tq5JAGsedIP", "iclr_2021_tq5JAGsedIP", "iclr_2021_tq5JAGsedIP", "iclr_2021_tq5JAGsedIP" ]
iclr_2021_EBRTjOm_sl1
Learning Active Learning in the Batch-Mode Setup with Ensembles of Active Learning Agents
Supervised learning models perform best when trained on a lot of data, but annotating training data is very costly in some domains. Active learning aims to choose only the most informative subset of unlabelled samples for annotation, thus saving annotation cost. Several heuristics for choosing this subset have been deve...
withdrawn-rejected-submissions
The authors propose to linearly combine the utility functions of (batch) active learning algorithms. The linear combination coefficients are "learned" with Monte Carlo estimators to adapt the coefficients to different kinds of tasks automatically. The reviewers find the presentation of the paper generally clear. ...
train
[ "SrwFtfy6S0G", "o1s1rmjIu5L", "ljyxBqwOBv", "Sj9QJXIoPWh", "oDwD6JXMuF", "hsfApIuEwls", "rdWBBYsLDO7", "YRsNMf-hbPV", "gTigTRwAlIb", "DIhD0wdeKAr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThe paper proposes learning a batch mode active learning (AL) policy as a weighted ensemble of existing AL techniques (or agents). In the proposed method, the ensemble weight vector (\\beta) is learnt from data. AL is simulated on a set of training tasks where performance for various choices of \\beta ar...
[ 4, -1, -1, -1, -1, -1, -1, 4, 3, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_EBRTjOm_sl1", "hsfApIuEwls", "DIhD0wdeKAr", "SrwFtfy6S0G", "iclr_2021_EBRTjOm_sl1", "YRsNMf-hbPV", "gTigTRwAlIb", "iclr_2021_EBRTjOm_sl1", "iclr_2021_EBRTjOm_sl1", "iclr_2021_EBRTjOm_sl1" ]
iclr_2021_tADlrawCrVU
CoLES: Contrastive learning for event sequences with self-supervision
We address the problem of self-supervised learning on discrete event sequences generated by real-world users. Self-supervised learning incorporates complex information from the raw data in low-dimensional fixed-length vector representations that could be easily applied in various downstream machine learning tasks. In t...
withdrawn-rejected-submissions
This work is well written and accurately covers the context and recent related work. It's a good example of how to apply self-supervised training to the event sequence domain. However, the combination of a lack of technical originality (composing a set of previously explored ideas) and significant improvements in resul...
train
[ "lLFUimOkQ5r", "gh3fVpKl2_f", "CtAABbCI0pm", "0OwrYF8ulFC", "HWKV_6-DL8", "0yrAXJ52SGs", "yHsMOFjaaTX", "oCk_lA1U9Lc" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Pros:\n\n1. The paper targets the problem of event sequence prediction in a contrastive self-supervised learning framework .They train this contrastive learning method by generating positive and negative samples via data augmentation method proposed as random slicing, which creates overlapping sub-sequences from e...
[ 5, 6, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_tADlrawCrVU", "iclr_2021_tADlrawCrVU", "gh3fVpKl2_f", "iclr_2021_tADlrawCrVU", "lLFUimOkQ5r", "lLFUimOkQ5r", "oCk_lA1U9Lc", "iclr_2021_tADlrawCrVU" ]
iclr_2021_T3kmOP_cMFB
Boosting One-Point Derivative-Free Online Optimization via Residual Feedback
Zeroth-order optimization (ZO) typically relies on two-point feedback to estimate the unknown gradient of the objective function, which queries the objective function value twice at each time instant. However, if the objective function is time-varying, as in online optimization, two-point feedback cannot be used. In t...
withdrawn-rejected-submissions
The paper generated a lot of discussion. After reviewing all of the opinions, and my own reading of the paper, we have concluded that the theoretical innovation is too incremental for ICLR. It is possible that the idea of "residual feedback" could be helpful, but for this to be demonstrated effectively one would need t...
train
[ "cCf24rJSglh", "WRpG-HhiXXX", "R_X74D7-WCu", "5oEcY-gpzz0", "12P-KI1cdzW", "a05DVZnqcH", "furVHDYdxtV", "BdJbDjfFj6Y", "9YOlJDuCke2", "CAAlIGKAqWQ", "C5_IrhdRS29", "6iou19PmKSn", "xrYtnWAHeto", "_CKcs2I20Rs", "_qY7RO-e4Z", "g2JBzt17Dfa", "OkLvt6L3zDc", "MdDOgQGcZl6", "Bck6tOFAzLZ...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_rev...
[ "This manuscript considers online zeroth order optimization and it develops a gradient estimator based on one query per function. In particular, the proposed method mimics two-point estimators by evaluating two consecutive functions at perturbations of an iterate, as shown in equation (3). Although one-point gradie...
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_T3kmOP_cMFB", "iclr_2021_T3kmOP_cMFB", "12P-KI1cdzW", "12P-KI1cdzW", "a05DVZnqcH", "9YOlJDuCke2", "CAAlIGKAqWQ", "xrYtnWAHeto", "7a2hXrr8P0b", "g2JBzt17Dfa", "iclr_2021_T3kmOP_cMFB", "iclr_2021_T3kmOP_cMFB", "_qY7RO-e4Z", "OkLvt6L3zDc", "OkLvt6L3zDc", "MdDOgQGcZl6", "FjYsa...
iclr_2021_p84tly8c4zf
WeMix: How to Better Utilize Data Augmentation
Data augmentation is a widely used training trick in deep learning to improve the network generalization ability. Despite many encouraging results, several recent studies did point out limitations of the conventional data augmentation scheme in certain scenarios, calling for a better theoretical understanding of data a...
withdrawn-rejected-submissions
This work presents a new theoretically motivated data augmentation technique. Reviewers agreed that the theory was interesting and has value, but raised concerns regarding the experimental evaluation, which was limited to the CIFAR datasets. There was some discussion over whether or not a comparison with AutoAugment...
train
[ "g04kh5QPhGy", "ALytuyk1PiY", "YN6-EtllXb", "SwWN3TQiGpP", "pc7AsWvjGMT", "q-ZVmSIMow", "k3cYguqvlt", "aSVsmokM1xK" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thanks for your constructive feedback and useful comments. Below are our responses to the comments.\n\nRe(A). There is no detailed experimental analysis beyond comparing end performance of models trained with the proposed approaches.\n\nA: We provide the performance in the experiments since it is the most impor...
[ -1, -1, -1, -1, 4, 5, 7, 4 ]
[ -1, -1, -1, -1, 3, 3, 4, 2 ]
[ "pc7AsWvjGMT", "q-ZVmSIMow", "k3cYguqvlt", "aSVsmokM1xK", "iclr_2021_p84tly8c4zf", "iclr_2021_p84tly8c4zf", "iclr_2021_p84tly8c4zf", "iclr_2021_p84tly8c4zf" ]
iclr_2021_7MjfPd-Irao
Impact-driven Exploration with Contrastive Unsupervised Representations
Procedurally-generated sparse reward environments pose significant challenges for many RL algorithms. The recently proposed impact-driven exploration method (RIDE) by Raileanu & Rocktäschel (2020), which rewards actions that lead to large changes (measured by ℓ2-distance) in the observation embedding, achieves state-of...
withdrawn-rejected-submissions
I thank the authors for their submission and very active participation in the author response period. I want to start by stating that I rank the paper higher than is currently reflected in the average score of the reviewers. The reasons for this are that a) R2 and R3, while responding to the author's rebuttal, do not see...
train
[ "DhZDsya5yMd", "zAe0fktJCyR", "D2ArqpgXzL", "U306mehwH4l", "Wi5_oiSHi-K", "4zz0d6cclB", "T4ueFQTpRhq", "zgR0M8KweM2", "90MO6Rw5ytH", "TZkulAPuU0B", "-CJJwneyeyD", "_aoksMK0Pvx", "lkW7VdgNzK5", "TKWtI_lKwSN", "OatU1tlSzJi", "FAyNHBScQvK" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Summary\n\nThe work provided a nice new method with some performance gains by combining several existing techniques. The presentation was clear and organized, with the new method getting both better performance and some improvements in interpretability. It provides a variety of visual analyses that are typical ...
[ 7, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_7MjfPd-Irao", "iclr_2021_7MjfPd-Irao", "iclr_2021_7MjfPd-Irao", "4zz0d6cclB", "T4ueFQTpRhq", "lkW7VdgNzK5", "_aoksMK0Pvx", "DhZDsya5yMd", "DhZDsya5yMd", "zAe0fktJCyR", "zAe0fktJCyR", "OatU1tlSzJi", "FAyNHBScQvK", "FAyNHBScQvK", "iclr_2021_7MjfPd-Irao", "iclr_2021_7MjfPd-Irao...
iclr_2021_GjqcL-v0J2A
Mixture Representation Learning with Coupled Autoencoding Agents
Jointly identifying a mixture of discrete and continuous factors of variability can help unravel complex phenomena. We study this problem by proposing an unsupervised framework called coupled mixture VAE (cpl-mixVAE), which utilizes multiple interacting autoencoding agents. The individual agents operate on augmented co...
withdrawn-rejected-submissions
This paper introduces and analyses a method to train a population of VAEs with mixed continuous (referred to as "style") and discrete (referred to as "labels") latent variables. The population is trained under the constraint that the inferred discrete latent variables be the same for all models. The paper also investig...
train
[ "dXg_NYYmK-y", "zgBJel_PzDy", "8UiLNAIe0GW", "61Q3i8hMi8p", "6U-U2CdfW4", "Rso6hyLaOx7", "yYtFiaO0sC", "0T0OeBEbes5", "DFnV_ajJdJK", "drYP5EJxiW", "DmRG_JR7jow", "cDhEfdZCS3" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "====================================================================================================\n\nSummary : \n\nThe paper proposed the new disentanglement approach based on the \"wisdom of the crowd\". First, the proposed method enforces the consensus on the categorical assignments from the different agents....
[ 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, 5, 5 ]
[ 5, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_GjqcL-v0J2A", "iclr_2021_GjqcL-v0J2A", "61Q3i8hMi8p", "0T0OeBEbes5", "iclr_2021_GjqcL-v0J2A", "DmRG_JR7jow", "dXg_NYYmK-y", "6U-U2CdfW4", "iclr_2021_GjqcL-v0J2A", "cDhEfdZCS3", "iclr_2021_GjqcL-v0J2A", "iclr_2021_GjqcL-v0J2A" ]
iclr_2021_F8xpAPm_ZKS
Model-Free Counterfactual Credit Assignment
Credit assignment in reinforcement learning is the problem of measuring an action’s influence on future rewards. In particular, this requires separating \emph{skill} from \emph{luck}, ie.\ disentangling the effect of an action on rewards from that of external factors and subsequent actions. To achieve this, we a...
withdrawn-rejected-submissions
In this paper, the authors aim to develop a new method for credit assignment, where certain types of future information are conditioned on. The authors are well-aware that naive conditioning on future information introduces bias due to Berkson's paradox (explaining away), and introduce a number of corrections (describe...
train
[ "zEUKbSupj-S", "nElbFAE392r", "UuHKtu0-uHc", "-iRiMx_JYGJ", "00nQN8Icmfx", "hHRwqN8KB9t", "J9NcIYtgW-", "uG0Ad1t40Iv", "C9i2TbS9Hnp", "KpE8N4puPzU", "HICO0C8TygJ", "_hzrQX2SYC0", "dAa4ghG2vzi", "9Rg80nPJbK", "0KnNNngCzf", "O3f9oS_X1Rj", "K41vnJMig7f", "PZ778K9KN2p" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Regarding your first paragraph, this is a fair point. While common RL benchmarks may have limited credit assignment issues, we could modify a common RL environment to exacerbate those issues (such as assigning all rewards to the final state, or adding high variance perturbations to the dynamics).\n\nFor your secon...
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "nElbFAE392r", "-iRiMx_JYGJ", "iclr_2021_F8xpAPm_ZKS", "J9NcIYtgW-", "hHRwqN8KB9t", "iclr_2021_F8xpAPm_ZKS", "_hzrQX2SYC0", "hHRwqN8KB9t", "hHRwqN8KB9t", "hHRwqN8KB9t", "iclr_2021_F8xpAPm_ZKS", "O3f9oS_X1Rj", "K41vnJMig7f", "PZ778K9KN2p", "hHRwqN8KB9t", "iclr_2021_F8xpAPm_ZKS", "iclr...
iclr_2021_DC1Im3MkGG
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization
Standard learning approaches are designed to perform well on average for the data distribution available at training time. Developing learning approaches that are not overly sensitive to the training distribution is central to research on domain- or out-of-distribution generalization, robust optimization and fairness. ...
withdrawn-rejected-submissions
The paper analyzes connections between the algorithmic fairness and domain generalization literatures. The reviewers found the paper interesting, but they also raised some important concerns about it. The applicability of the method presented in the paper is neither clear nor well-discussed. The papers and the re...
train
[ "00OGPI-NfFl", "op6_KJK-yGd", "4Cw5BwySCGT", "kNrjeolYQy8", "EXFuCgg-Uo", "QhSpOixr95Z", "CcO1K7h08cz", "aqAVhT0nVGY", "Y9bNxEEnp1R", "A8vA0SBZUIV", "_rN8KFzJMmb" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents parallels between algorithmic fairness and domain generalization literatures. The authors explore a learning setup where the goal is to learn some representation $\\Phi(x)$ that is \"independent\" of some environmental variable $e$. The authors explore cases where $e$ is known or not and come u...
[ 4, 5, -1, -1, -1, -1, -1, -1, -1, 4, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2021_DC1Im3MkGG", "iclr_2021_DC1Im3MkGG", "A8vA0SBZUIV", "_rN8KFzJMmb", "00OGPI-NfFl", "op6_KJK-yGd", "iclr_2021_DC1Im3MkGG", "Y9bNxEEnp1R", "00OGPI-NfFl", "iclr_2021_DC1Im3MkGG", "iclr_2021_DC1Im3MkGG" ]
iclr_2021_GEpTemgn7cq
Dependency Structure Discovery from Interventions
Promising results have driven a recent surge of interest in continuous optimization methods for Bayesian network structure learning from observational data. However, there are theoretical limitations on the identifiability of underlying structures obtained from observational data alone. Interventional data provides muc...
withdrawn-rejected-submissions
In this paper, the authors study how to incorporate experimental data with interventions into existing pipelines for DAG learning. Mixing observational and experimental data is a well-studied problem, and it is well-known how to incorporate interventions into e.g. the likelihood function, along with theoretical guarant...
train
[ "y7d05UqM_df", "qggTAi_RG4L", "a9iYbHQNBu1", "l_g_Iu3ueKL", "W6SPrjl01GF", "zJddwoAfdA", "sXyTKgprp_B", "qkHj2cqasjM", "m15EqJyLu1w", "6UAUXS7IRm", "7W2QWUo7s9B", "1At1BEkHQMl", "NXbW8i74D1-", "IMYX0Fase7P", "5LYG72OPl2", "lUpPiuk34H", "wfgQc35ybWY" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors propose a 3-phase heuristic algorithm to learn a causal graph from interventional data using continuous optimization. Unfortunately, the paper is hard to follow. Specifically, the exact procedure should be clarified by the authors. If I understand correctly, first they fit to observational data by sear...
[ 4, 6, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ 5, 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_GEpTemgn7cq", "iclr_2021_GEpTemgn7cq", "wfgQc35ybWY", "6UAUXS7IRm", "6UAUXS7IRm", "6UAUXS7IRm", "6UAUXS7IRm", "qggTAi_RG4L", "iclr_2021_GEpTemgn7cq", "7W2QWUo7s9B", "IMYX0Fase7P", "lUpPiuk34H", "m15EqJyLu1w", "y7d05UqM_df", "wfgQc35ybWY", "m15EqJyLu1w", "iclr_2021_GEpTemgn...
iclr_2021_PYAFKBc8GL4
Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies
Federated learning is a distributed optimization paradigm that enables a large number of resource-limited client nodes to cooperatively train a model without data sharing. Several works have analyzed the convergence of federated learning by accounting for data heterogeneity, communication and computation limitations, an...
withdrawn-rejected-submissions
In federated learning, distributed and resource-limited client nodes cooperatively train a model without sharing their local data. The results thus far on analyzing the convergence of federated learning are restricted to “unbiased” client participation, where the probability of a client c being selected is proportiona...
train
[ "WZaMnFYVuPs", "n89p1W9Y-XZ", "RplBAm6mqA", "K3nUeE69KlE", "TD46KclIKTp", "pWP23zEV8rM", "BDeleUQpI9c", "gqytxg-jlVP", "Fcxly2eFaYF", "TeG4Zr17mIP", "2eaTbr8DxSj", "h2AfMFoDI31", "ryo7kpaFUp5", "ipB5vVzb-Iq", "UCAKJieic33", "CEH_dD7H4Wu", "_-yMeg0Wmyr", "heSQ9GVM9-t", "LRZwkZai-h...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ "We thank all the reviewers for the valuable feedback! We updated our paper reflecting the reviewers' constructive feedback, and made the changes be seen by **magenta** color. All the reviewers highlight the novelty of our paper -- that it is the first convergence analysis of biased client selection strategies in f...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "iclr_2021_PYAFKBc8GL4", "RplBAm6mqA", "K3nUeE69KlE", "TD46KclIKTp", "UCAKJieic33", "BDeleUQpI9c", "gqytxg-jlVP", "Fcxly2eFaYF", "TeG4Zr17mIP", "2eaTbr8DxSj", "h2AfMFoDI31", "_-yMeg0Wmyr", "ipB5vVzb-Iq", "heSQ9GVM9-t", "LRZwkZai-hG", "pPw-LZ4wSeA", "iclr_2021_PYAFKBc8GL4", "iclr_20...
iclr_2021_G67PtYbCImX
Similarity Search for Efficient Active Learning and Search of Rare Concepts
Many active learning and search approaches are intractable for industrial settings with billions of unlabeled examples. Existing approaches, such as uncertainty sampling or information density, search globally for the optimal examples to label, scaling linearly or even quadratically with the unlabeled data. However, in...
withdrawn-rejected-submissions
The paper proposed an active search algorithm for efficiently identifying rare concepts among heavily imbalanced datasets. Reviewers find the paper very well-motivated and addressing an important real-world challenge in active learning. All reviewers appreciate the extensive demonstration of the effectiveness of the pr...
train
[ "bGugs5PN5GX", "mrTVh-IQ6i", "0YwMDHlNZn_", "u_tC0Jxo-OD", "Fu-hKWWUva", "WGAy_CHycMg", "plVOZRcR_vw", "_PYlbG-xkV7", "gn8WvEUsYdv" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper proposes a new method (SEALS) to accelerate the active learning and active search with the skewness of the cardinality of rare class compared to the large-scale datasets. To leverage this skewness, the authors restrict the candidate pool for labelling mainly from the nearest neighbours of the ...
[ 4, -1, -1, -1, -1, -1, -1, 8, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2021_G67PtYbCImX", "0YwMDHlNZn_", "_PYlbG-xkV7", "plVOZRcR_vw", "WGAy_CHycMg", "bGugs5PN5GX", "gn8WvEUsYdv", "iclr_2021_G67PtYbCImX", "iclr_2021_G67PtYbCImX" ]
iclr_2021_aKt7FHPQxVV
Efficient Differentiable Neural Architecture Search with Model Parallelism
Neural architecture search (NAS) automatically designs effective network architectures. Differentiable NAS with supernets that encompass all potential architectures in a large graph cuts down search overhead to few GPU days or less. However, these algorithms consume massive GPU memory, which will restrain NAS from larg...
withdrawn-rejected-submissions
This paper proposes a model parallelism scheme (CMP) for training differentiable NAS with large supernets, which performs the forward and backward passes for multiple tasks at the same time, to increase hardware utilization. Moreover, since CMP consumes large GPU memory due to having multiple computational graphs in me...
train
[ "ydF9NDBTKa", "OfNXcnZPdFe", "Y4DftBhoal", "YOsWgYOBmHo", "zA9xPGEmNSA", "n45etWnV5uC", "MESPKNW_tG7", "YDj1cDUnhbD", "ahu7Bsah_id", "d0UGMrz9oxS", "PFU3hc46mR", "3Sty0RVnxYu", "7hI3t-7QiB", "Rpxkys90lo", "2i_mLDY5WKs", "cd-yGMcQvDt", "CvAGqwWqjCE" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_revi...
[ "##############################################################\n\nSummary:\n\nThis paper provides the interesting method that leverages GPU memory resources more efficiently for supernet (meta-graph) of differentiable NAS. For this, this paper proposes binary neural architecture search and consecutive model parall...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_aKt7FHPQxVV", "ydF9NDBTKa", "n45etWnV5uC", "ydF9NDBTKa", "3Sty0RVnxYu", "YOsWgYOBmHo", "cd-yGMcQvDt", "ydF9NDBTKa", "2i_mLDY5WKs", "CvAGqwWqjCE", "iclr_2021_aKt7FHPQxVV", "2i_mLDY5WKs", "cd-yGMcQvDt", "CvAGqwWqjCE", "iclr_2021_aKt7FHPQxVV", "iclr_2021_aKt7FHPQxVV", "iclr_2...
iclr_2021_PmUGXmOY1wK
GL-Disen: Global-Local disentanglement for unsupervised learning of graph-level representations
Graph-level representation learning plays a crucial role in a variety of tasks such as molecular property prediction and community analysis. Currently, several models based on mutual information maximization have shown strong performance on the task of unsupervised graph representation learning. In this paper, instead,...
withdrawn-rejected-submissions
In this paper, the authors designed a disentanglement mechanism for global and local information of graphs and proposed a graph representation method based on it. I agree with the authors that 1) considering the global and local information of graphs jointly is reasonable and helpful (as shown in the experiments) and 2...
train
[ "PFKFeGqZfVx", "vsuMMcuTShW", "UBPGWtv3vi", "2AGEEdmjHHM", "7ADxrBnQEz6", "y54yPEiMDFd", "O74iTYiIaf7", "fsYKwjVyv2O", "UhTCU6RsCoz", "ZjU36EONPuZ", "1G0g_rkgyHU", "Inpq8RfLVm", "SgW4VFH4-l4", "lhz5oieahAg", "GUXn8g9b07K", "Ipxx-O8928w", "Fj8gwSfyf9X", "KU2nbhXmxhW", "22cInD8Aeh_...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThis paper proposes an unsupervised graph-level representation learning method considering global-local disentanglement. Specifically, the authors propose a GL-Disen model based on graph VAE architecture to jointly learn global and local representations for a graph. The global information is shared acr...
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 3 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_PmUGXmOY1wK", "iclr_2021_PmUGXmOY1wK", "22cInD8Aeh_", "iclr_2021_PmUGXmOY1wK", "Inpq8RfLVm", "1G0g_rkgyHU", "fsYKwjVyv2O", "UhTCU6RsCoz", "R5l4n8GZ5UA", "UBPGWtv3vi", "KU2nbhXmxhW", "Fj8gwSfyf9X", "PFKFeGqZfVx", "GUXn8g9b07K", "Ipxx-O8928w", "vsuMMcuTShW", "SgW4VFH4-l4", ...
iclr_2021_tw60PTRSda2
Understanding Mental Representations Of Objects Through Verbs Applied To Them
In order to interact with objects in our environment, we rely on an understanding of the actions that can be performed on them, and the extent to which they rely or have an effect on the properties of the object. This knowledge is called the object "affordance". We propose an approach for creating an embedding of objec...
withdrawn-rejected-submissions
This paper is a computational linguistic study of the semantics that can be inferred from text corpora, given that parsers (which are trained on human data) are used to infer the verbs and their objects in text. The reviewers agreed that the work was well executed, and that the experiments comparing the resulting representat...
val
[ "o3brdGr9oln", "YNLwLe5sqOe", "eo1XfstqSgH", "gh-kPC8iQPh", "SAQ_-0N_J-x", "0_ulaZOg5_M", "67dZyx6NQtD", "zzqNZBWwmO", "GICytvYSAu", "5o6O8mAF50Y", "Ij2ed3mfTS", "AKiFIrCSoXq", "DvLT4rqi_Xk", "pRWUZuh_jNT", "Ai4-u9SxA9B" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "Summary:\n\nThis paper attempts to learn embeddings for objects based on their affordances i.e., verbs that could be applied to them to realise their meaning. Here each dimension corresponds to an affordance or an aspect of meaning shared by actions, thus allowing a correspondence between nouns (objects) and verbs...
[ 7, 7, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_tw60PTRSda2", "iclr_2021_tw60PTRSda2", "iclr_2021_tw60PTRSda2", "iclr_2021_tw60PTRSda2", "0_ulaZOg5_M", "67dZyx6NQtD", "zzqNZBWwmO", "eo1XfstqSgH", "iclr_2021_tw60PTRSda2", "Ij2ed3mfTS", "AKiFIrCSoXq", "gh-kPC8iQPh", "YNLwLe5sqOe", "Ai4-u9SxA9B", "o3brdGr9oln" ]
iclr_2021_vT0NSQlTA
Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles
Learning complex behaviors through interaction requires coordinated long-term planning. Random exploration and novelty search lack task-centric guidance and waste effort on non-informative interactions. Instead, decision making should target samples with the potential to optimize performance far into the future, while ...
withdrawn-rejected-submissions
The submission is acknowledged as having potential value in terms of proposing a new approach for exploration based on ensembles and value functions. However, there are lingering concerns about the discussion of what this paper brings to the table vis-a-vis prior work, together with a lack of clear demonstration of the...
train
[ "31hZ1qoeM-H", "6WshcSaufjL", "ZT2YBorB1n", "eUMBFjweGq", "g-JTbUibAkU", "b2LrT1qSm6l", "eev4kohAcUG", "9UHetJvlM1j", "v1ifj-dSg_C", "nAiJNOBLkU2", "89tX-gtaM0-", "SiWroFzFCCy", "8bf_fPmfkue", "BwZgeBkJAsU" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "---- Summary ----\n\nThe paper proposes LOVE, an adaptation of DOVE (Seyde’20) to latent variable predictive models (Seyde’20 only condsidered predictive models without latent variables). Seyde’20 proposes to use a generalization of Upper Confidence Bound to deep model-based RL, by training an ensemble of models a...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_vT0NSQlTA", "iclr_2021_vT0NSQlTA", "iclr_2021_vT0NSQlTA", "9UHetJvlM1j", "nAiJNOBLkU2", "89tX-gtaM0-", "SiWroFzFCCy", "v1ifj-dSg_C", "31hZ1qoeM-H", "6WshcSaufjL", "8bf_fPmfkue", "BwZgeBkJAsU", "iclr_2021_vT0NSQlTA", "iclr_2021_vT0NSQlTA" ]
iclr_2021_4JLiaohIk9
Motion Forecasting with Unlikelihood Training
Motion forecasting is essential for making safe and intelligent decisions in robotic applications such as autonomous driving. State-of-the-art methods formulate it as a sequence-to-sequence prediction problem, which is solved in an encoder-decoder framework with a maximum likelihood estimation objective. In this paper,...
withdrawn-rejected-submissions
This paper applies the unlikelihood objective from Welleck et al. (2019), originally introduced in NLP, to the problem of forecasting motion trajectories on roads. The unlikelihood term is meant to lower the probability mass in non-driveable areas. The paper makes use of Trajectron++, an existing trajectory forecasting mo...
train
[ "B0YRYqpsA1N", "LvsaLtV8GW1", "W0hf0OdJJe", "3OHQoa-MnOd", "wllByr96o9r", "2zIor_Sjs2t", "B_GZ94Y4YLq", "ewL59Arwnx3", "uhYhv8kxDDm", "753XkwUWVXu", "fx-Ew_i4rwy", "KylYt0vE1GD", "b3fiP6c84C5", "sklx6niKSUJ", "ZjuXst0YXQW", "H4hFGYpoWVB" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "**SUMMARY**\n\nThe present paper considers the problem of context integration in probabilistic agent trajectory predictors, particularly Trajectron++. It starts with the observation that these predictors often do a bad job at considering non-drivable areas in their predictions even if context information is inject...
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_4JLiaohIk9", "iclr_2021_4JLiaohIk9", "B0YRYqpsA1N", "KylYt0vE1GD", "iclr_2021_4JLiaohIk9", "iclr_2021_4JLiaohIk9", "B0YRYqpsA1N", "H4hFGYpoWVB", "iclr_2021_4JLiaohIk9", "2zIor_Sjs2t", "2zIor_Sjs2t", "b3fiP6c84C5", "B0YRYqpsA1N", "ZjuXst0YXQW", "iclr_2021_4JLiaohIk9", "iclr_2...
iclr_2021_b905-XVjbDO
Globally Injective ReLU networks
Injectivity plays an important role in generative models where it enables inference; in inverse problems and compressed sensing with generative priors it is a precursor to well posedness. We establish sharp characterizations of injectivity of fully-connected and convolutional ReLU layers and networks. First, through a ...
withdrawn-rejected-submissions
The average review rating for this paper is somewhat borderline. The paper provides mathematical characterizations of when ReLU neural networks are injective. The paper has very nice ideas, but the reviewers also pointed out several key concerns: 1. “Given that the DSS condition takes exponential time to check, how ...
train
[ "jVsNAfEq3jE", "mC_QZpTovMl", "IqeoiJMKSy", "WTwTCj8hUQJ", "RreO7II2Hut", "gPycuZWCNvS", "8GGlmS305a", "NDM3RMssAw", "RiagbmZxW7s", "WUWAU6OZ3-", "ngECdGZUB24", "JVz4AQrLQRr" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers, this is a note that we fixed a small typo regarding the bound on sums of binomial coefficients used in the proof of Theorem 5 and made a corresponding modification in the statement.", "We would like to thank the referees for taking the time to read and comment on our paper. Here we summarize our ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "mC_QZpTovMl", "iclr_2021_b905-XVjbDO", "RiagbmZxW7s", "WUWAU6OZ3-", "RiagbmZxW7s", "WUWAU6OZ3-", "JVz4AQrLQRr", "ngECdGZUB24", "iclr_2021_b905-XVjbDO", "iclr_2021_b905-XVjbDO", "iclr_2021_b905-XVjbDO", "iclr_2021_b905-XVjbDO" ]
iclr_2021_oGq4d9TbyIA
Uniform-Precision Neural Network Quantization via Neural Channel Expansion
Uniform-precision neural network quantization has gained popularity thanks to its simple arithmetic unit densely packed for high computing capability. However, it ignores heterogeneous sensitivity to the impact of quantization across the layers, resulting in sub-optimal inference accuracy. This work proposes a novel ap...
withdrawn-rejected-submissions
This paper suggests a NAS approach for quantization which focuses on expanding the number of channels in problematic layers, given some uniform quantization level for all the layers. The reviewers were initially all negative, but the authors added more experiments and the scores changed to borderline (6/6/5). I think t...
train
[ "CkR7LZe55K1", "i8LnvyHrJNz", "ELuePrhHg_t", "HYL47lhfxqR", "aOHzCziZLfg", "CS__bJ0h-mL", "yYogHEEv5P4", "QdOYYP9bvKo", "9RCR_2WiHn" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "### Overview\nIn this paper, the authors proposed selectively channel pruning and expansion via neural architecture search to make an off-the-shelf neural network more robust to quantization. Under 2-bit quantization, the proposed method outperforms the original model at similar or smaller FLOPs and model size.\n\...
[ 6, 6, -1, -1, -1, -1, -1, -1, 5 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_oGq4d9TbyIA", "iclr_2021_oGq4d9TbyIA", "iclr_2021_oGq4d9TbyIA", "i8LnvyHrJNz", "9RCR_2WiHn", "aOHzCziZLfg", "QdOYYP9bvKo", "CkR7LZe55K1", "iclr_2021_oGq4d9TbyIA" ]
iclr_2021_L5b6jUonKFB
Deep Continuous Networks
CNNs and computational models of biological vision share some fundamental principles, which, combined with recent developments in deep learning, have opened up new avenues of research in neuroscience. However, in contrast to biological models, conventional CNN architectures are based on spatio-temporally discrete repre...
withdrawn-rejected-submissions
This paper received 1 weak accept, 1 accept, and 1 weak reject. All reviewers questioned the motivation for continuous space/time with respect to biological vision. Obviously, discrete approximations used in machine vision are approximations, but it is not clear from the paper or the authors’ response that this severel...
val
[ "cz3g-4wZF5_", "dQI1TWT5-f2", "2lskEF1iXJ9", "3vWNVkNB4Ib", "GJ55dcWnaGI", "LVBTeNRObM", "tKqs-NhjGIJ", "Xm3HM6xLEVi", "2Loul8EcgJx", "lmyVJS-gP6", "I-3W9JMYRCO", "AbjIHxP2p0i" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ "## Review\n\n\n### Summary\n\nThe authors define continuous deep networks by expressing 2D-convolutional filters as a linear combination of Gaussian function and its derivatives. By combining this description with the previously proposed neural ODE framework they obtain a spatio-temporally continuous description o...
[ 7, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_L5b6jUonKFB", "3vWNVkNB4Ib", "iclr_2021_L5b6jUonKFB", "LVBTeNRObM", "I-3W9JMYRCO", "2lskEF1iXJ9", "Xm3HM6xLEVi", "cz3g-4wZF5_", "AbjIHxP2p0i", "2Loul8EcgJx", "iclr_2021_L5b6jUonKFB", "iclr_2021_L5b6jUonKFB" ]
iclr_2021_4cC0HFuVd2d
Decoy-enhanced Saliency Maps
Saliency methods can make deep neural network predictions more interpretable by identifying a set of critical features in an input sample, such as pixels that contribute most strongly to a prediction made by an image classifier. Unfortunately, recent evidence suggests that many saliency methods poorly perform, especial...
withdrawn-rejected-submissions
This paper presents a novel approach to producing saliency maps for interpreting deep neural networks. In general, this paper seems quite close to borderline, albeit on the positive side, with some low-confidence reviews. The reviewers felt that the proposed approach could be useful to the community and they seemed ...
train
[ "YBqr78gskMs", "OcUUftT7Pkk", "60d_PkVDOr", "SAqC8c29S7", "uzLLV_TXaMP", "opWPZPAYqq", "5Y74jjah4Lb", "XcIm_En0q5X", "X9Zp6duCqbN", "rhRJFHJTDKQ", "h459oV-g-eF" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the prompt response. We really appreciate the constructive feedback and the reviewer’s kindness. In the following, we respond to the reviewer’s new comments. \n1. Regarding the way to compute the average saliency map, our understanding is as follows. Given a raw saliency map, this method ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "OcUUftT7Pkk", "SAqC8c29S7", "iclr_2021_4cC0HFuVd2d", "opWPZPAYqq", "opWPZPAYqq", "XcIm_En0q5X", "rhRJFHJTDKQ", "60d_PkVDOr", "h459oV-g-eF", "iclr_2021_4cC0HFuVd2d", "iclr_2021_4cC0HFuVd2d" ]
iclr_2021_-csYGiUuGlt
Convergent Adaptive Gradient Methods in Decentralized Optimization
Adaptive gradient methods including Adam, AdaGrad, and their variants have been very successful for training deep learning models, such as neural networks, in the past few years. Meanwhile, given the need for distributed training procedures, distributed optimization algorithms are at the center of attention. With the g...
withdrawn-rejected-submissions
The reviewers have the following concerns: 1) There is a lack of experimental results. The experiment on MNIST with a small CNN architecture is definitely not sufficient to verify the efficiency of the proposed method. Moreover, the advantage of the proposed method is not very clear due to the choices of the parameters. ...
train
[ "IGouo4EDGdn", "3WMdlWfRkn", "kCt9hDCJSA8", "qr0Ug3000Vi", "T_PvU0XFILL", "tWSkgeu4Obz", "3nAe2NZod8B", "zgTX4dit_I", "-3BOnT7vr87", "NyHY5TPXoMl", "4UWvZ77pRy", "6Qe46iGaYy4", "v9ur0b0S9q-", "Tc_4hwIna-9", "nid7RhHNq1I" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Based on the reviewer's comments, we revised the discussions in our paper. Specifically,\n1. We added discussions on the convergence rate for AMSGrad and AdaGrad below Corollary 2.1 (original Theorem 2). More intuitions are provided.\n2. We added discussion on example settings for homogeneous and heterogeneous dat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 4, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 1, 5 ]
[ "-3BOnT7vr87", "iclr_2021_-csYGiUuGlt", "3nAe2NZod8B", "NyHY5TPXoMl", "4UWvZ77pRy", "Tc_4hwIna-9", "6Qe46iGaYy4", "v9ur0b0S9q-", "zgTX4dit_I", "nid7RhHNq1I", "iclr_2021_-csYGiUuGlt", "iclr_2021_-csYGiUuGlt", "iclr_2021_-csYGiUuGlt", "iclr_2021_-csYGiUuGlt", "iclr_2021_-csYGiUuGlt" ]
iclr_2021_IUYthV32lbK
On the Certified Robustness for Ensemble Models and Beyond
Recent studies show that deep neural networks (DNN) are vulnerable to adversarial examples, which aim to mislead DNNs to make arbitrarily incorrect predictions. To defend against such attacks, both empirical and theoretical defense approaches have been proposed for a single ML model. In this work, we aim to explore and...
withdrawn-rejected-submissions
I thank the authors and reviewers for the lively discussions. Although reviewers agreed the work is interesting, there are some concerns about the significance of the results and experiments. None of the reviewers were strongly supportive of the paper, while the majority of them suggest that the paper needs a bit more work ...
test
[ "K_lRYr-LCSc", "cnlThuv0MkS", "Zg-Ja5KWfA", "wW8ljYGuNtD", "UgsHogporhr", "E_dB-DgzUq9", "kXvGh1ZvwHf", "l0hCj-4mN8c", "HSTNJLryDgq", "_j9yh67YqWE", "MDk3EVlTjFH", "ey8Xm-mqccR", "4NLO22ZbHwT", "OACR6gJiHnD" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper offers a novel idea regarding improving robustness of ensemble models with a rigorous mathematical background. A comparison between two types of ensemble models (Weighted Ensembles and Max-Margin Ensembles) and to single models offers very good insight into the theoretical dynamics of certified robustnes...
[ 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2021_IUYthV32lbK", "UgsHogporhr", "iclr_2021_IUYthV32lbK", "iclr_2021_IUYthV32lbK", "l0hCj-4mN8c", "4NLO22ZbHwT", "4NLO22ZbHwT", "wW8ljYGuNtD", "wW8ljYGuNtD", "K_lRYr-LCSc", "K_lRYr-LCSc", "OACR6gJiHnD", "iclr_2021_IUYthV32lbK", "iclr_2021_IUYthV32lbK" ]
iclr_2021_2K5WDVL2KI
Information Condensing Active Learning
We introduce Information Condensing Active Learning (ICAL), a batch mode model agnostic Active Learning (AL) method targeted at Deep Bayesian Active Learning that focuses on acquiring labels for points which have as much information as possible about the still unacquired points. ICAL uses the Hilbert Schmidt Independen...
withdrawn-rejected-submissions
The paper introduces a model-agnostic heuristic for batch active learning. There was an agreement among the reviewers that it's a good approach to try and report about, but the paper was ultimately rejected after calibration. There were two concerns raised in the reviews, and the authors are encouraged to address the...
train
[ "rdhq-GK1bCO", "g9WzWx3LKLA", "cm2QIPBzg1Q", "69-99qruRSD", "mfpO2LaKwh", "G_wwImBjyP", "qekblM-ikC6", "i7v4GXWqKzj", "dq2jTsGMyoH", "FYjUZfwPYCE", "PSdeG0k2Al", "GAg7mIUaB7S", "kHQ5Y8tPiTz" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper introduces a model agnostic active learning technique that maximizes the dependency between a batch and the rest of the unlabeled pool. The dependence is measured via the Hilbert Schmidt Independence Criterion and this paper introduces some approximations/optimizations to speed up the method.\n\nIn term...
[ 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_2K5WDVL2KI", "cm2QIPBzg1Q", "69-99qruRSD", "G_wwImBjyP", "iclr_2021_2K5WDVL2KI", "PSdeG0k2Al", "GAg7mIUaB7S", "rdhq-GK1bCO", "mfpO2LaKwh", "kHQ5Y8tPiTz", "dq2jTsGMyoH", "iclr_2021_2K5WDVL2KI", "iclr_2021_2K5WDVL2KI" ]
iclr_2021_jz7tDvX6XYR
Speeding up Deep Learning Training by Sharing Weights and Then Unsharing
It has been widely observed that increasing deep learning model sizes often leads to significant performance improvements on a variety of natural language processing and computer vision tasks. In the meantime, however, computational costs and training time would dramatically increase when models get larger. In this pap...
withdrawn-rejected-submissions
The paper proposed a new way for training models that stack the same basic block for multiple times -- share the weights first and then untie the weights. Ablation study shows that the proposed algorithm has marginal improvement over the baseline. The authors also provide some theoretical justifications to how the pro...
train
[ "zIPjKbBHel", "RZFaXOc-OX6", "5f2GRTFyTZC", "tSCtY_rkKZ4", "ezasgFbZHhE", "nYFjbxBhKm", "LJBtLMiqaed", "G5_5a3tA3Ax", "IwA4ocixC8", "Jnlf2gRtAfa", "KptY_YoMDld", "ZBu0w1eIxJO", "Nz_cfFfxJym", "yU4Ykzr57s1" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for addressing my concerns. I can understand that by just training for 0.5M steps, SWE can achieve better performance than the normal training pipeline. However, I'd like to also see whether SWE can still be better than BERT when trained for 1M steps. In fact, by looking at the MLM acc, the 0.5M + SWE still...
[ -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 3, 4, 3 ]
[ "5f2GRTFyTZC", "Nz_cfFfxJym", "yU4Ykzr57s1", "iclr_2021_jz7tDvX6XYR", "ZBu0w1eIxJO", "IwA4ocixC8", "Jnlf2gRtAfa", "iclr_2021_jz7tDvX6XYR", "LJBtLMiqaed", "KptY_YoMDld", "G5_5a3tA3Ax", "iclr_2021_jz7tDvX6XYR", "iclr_2021_jz7tDvX6XYR", "iclr_2021_jz7tDvX6XYR" ]
iclr_2021_OLrVttqVt2
Model-Targeted Poisoning Attacks with Provable Convergence
In a poisoning attack, an adversary with control over a small fraction of the training data attempts to select that data in a way that induces a model that misbehaves in a particular way desired by the adversary, such as misclassifying certain inputs. We propose an efficient poisoning attack that can target a desired m...
withdrawn-rejected-submissions
The paper establishes an interesting relationship between poisoning and online learning. Instead of framing the poisoning problem as a bi-level optimization problem as what is done conventionally, the paper proposes reducing the poisoning attack design to an online learning problem in which the adversary decides on a p...
train
[ "3jdG2U-LkFK", "rBE6nktV_NK", "OuHoD9RGIBF", "h62O_xF-4uW", "yB5d254psC", "rP36boY1t7o", "lqugCocoWvn", "6syN9kB7Xyi", "8MkMIM-28PQ", "EmcgivRCQR4", "ktj6jzDFs32" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "> 2. Insufficient evaluation. The evaluation does not compare with subpopulation attacks. I understand you study model-targeted attacks. Since the evaluation is for subpopulation attacks. It is still interesting to know the comparison results.\n\nAs mentioned in Section 5 (the experiment section), to compare with ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 4 ]
[ "8MkMIM-28PQ", "iclr_2021_OLrVttqVt2", "6syN9kB7Xyi", "ktj6jzDFs32", "EmcgivRCQR4", "iclr_2021_OLrVttqVt2", "8MkMIM-28PQ", "iclr_2021_OLrVttqVt2", "iclr_2021_OLrVttqVt2", "iclr_2021_OLrVttqVt2", "iclr_2021_OLrVttqVt2" ]
iclr_2021_y4-e1K23GLC
A law of robustness for two-layers neural networks
We initiate the study of the inherent tradeoffs between the size of a neural network and its robustness, as measured by its Lipschitz constant. We make a precise conjecture that, for any Lipschitz activation function and for most datasets, any two-layers neural network with k neurons that perfectly fit the data must ha...
withdrawn-rejected-submissions
The paper studies the Lipschitz properties of neural networks — in particular, two layer neural networks that interpolate generic datasets. It conjectures a “size robustness tradeoff”: in this setting, the number of neurons required to interpolate with an O(1)-Lipschitz function is proportional to the number of data po...
train
[ "rD1DfgezTxB", "gPQ0rjb5DGK", "V6yZ9nbi8Mg", "wWGdyTdyizU", "0uUOfOn5lWa", "4NbmJhztkw3", "VweP8F-832k", "X2-7JhX1Cqc", "3RQ7AJDwCx-" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "SUMMARY\n#######\n\nThe present paper proposes a study of the tradeoff between the number of neurons $k$ and the Lipschitz constant $L$ of a 2-layers neural network $f$ that fits a training dataset, i.e. $f(x_i) = y_i$ for all $(x_i, y_i)$ in the training set.\n\nTo that end, authors consider \"generic datasets\" ...
[ 5, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ 4, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_y4-e1K23GLC", "iclr_2021_y4-e1K23GLC", "VweP8F-832k", "rD1DfgezTxB", "3RQ7AJDwCx-", "X2-7JhX1Cqc", "iclr_2021_y4-e1K23GLC", "iclr_2021_y4-e1K23GLC", "iclr_2021_y4-e1K23GLC" ]
iclr_2021_jQSBcVURlpW
Learning Algebraic Representation for Abstract Spatial-Temporal Reasoning
Is intelligence realized by connectionist or classicist? While connectionist approaches have achieved superhuman performance, there has been growing evidence that such task-specific superiority is particularly fragile in systematic generalization. This observation lies in the central debate (Fodor et al., 1988; Fodor &...
withdrawn-rejected-submissions
This paper was reviewed by 4 experts in the field. The reviewers raised concerns about a lack of novelty, unconvincing experiments, and the presentation of this paper. While the paper clearly has merit, the decision is not to recommend acceptance. The authors are encouraged to consider the reviewers' comments when revi...
test
[ "B6Ovhp6gZjT", "1eiBFIYEcR3", "GWAWjhce9O1", "ztMKEwjUmeH", "5vO8faqBSY3", "xuByeeQMosJ", "Q8zp1zWS9C6", "nl6zTUk3y2G", "Z8GIw34AoSu", "wJnh5H9k8Z", "TH7PBUGUOmu", "XJq4_kMFABe" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your reply.\n\nWe'll include the two points in another revision of the work. We also hope that the reviewer can clarify which point he/she is unclear about, so that we can do our best to address it.\n\nFrom a theoretical point of view, the task of RPM was initially proposed as a challenge on few-shot a...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 1, 4 ]
[ "1eiBFIYEcR3", "5vO8faqBSY3", "ztMKEwjUmeH", "xuByeeQMosJ", "Z8GIw34AoSu", "wJnh5H9k8Z", "XJq4_kMFABe", "TH7PBUGUOmu", "iclr_2021_jQSBcVURlpW", "iclr_2021_jQSBcVURlpW", "iclr_2021_jQSBcVURlpW", "iclr_2021_jQSBcVURlpW" ]
iclr_2021_DFIoGDZejIB
Benefits of Assistance over Reward Learning
Much recent work has focused on how an agent can learn what to do from human feedback, leading to two major paradigms. The first paradigm is reward learning, in which the agent learns a reward model through human feedback that is provided externally from the environment. The second is assistance, in which the human is ...
withdrawn-rejected-submissions
This is a well-written paper, outlining a class of assistive algorithms. Being more or less a survey paper, it could do a better job of discussing 'inverse reinforcement learning' and 'collaborative inverse reinforcement learning'. It could also be slightly more general: for example the human decision function need no...
train
[ "IsJJ_J7rcBM", "DcvDLwyij8r", "xABc0UhVmV", "0hIITL-hCcP", "BF2Wh9JwPo8", "5Y2WxFLJXiD", "ID70IcgsrK5", "aGhj1JcB6fQ", "j_i3ygSW-_s", "V3xSYZiKD_B", "uiSiTzCZh2A", "CRjS32NeivL", "2DKWdwzOiW", "x5UQWpCuEKo", "C6OkjcHi7Sk", "Uf7gewHICRD", "1KBK_bezeQD", "W0YBiptg5Ko", "0Xrlurh789H...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "o...
[ "### Overview\n\nThe paper addresses the problem of learning from human feedback. It provides an analysis of reward learning---where human feedback is used to extract a task description in the form of a reward---and assistance---where the learning agent and human co-exist in the environment and both perform actions...
[ 5, 6, 4, 5, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 3, 2, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_DFIoGDZejIB", "iclr_2021_DFIoGDZejIB", "iclr_2021_DFIoGDZejIB", "iclr_2021_DFIoGDZejIB", "iclr_2021_DFIoGDZejIB", "aGhj1JcB6fQ", "V3xSYZiKD_B", "2DKWdwzOiW", "kThpyTpuNAz", "uiSiTzCZh2A", "2DKWdwzOiW", "0Xrlurh789H", "W0YBiptg5Ko", "1KBK_bezeQD", "Uf7gewHICRD", "tQoL9m2B7O",...
iclr_2021_7nfCtKep-v
EXPLORING VULNERABILITIES OF BERT-BASED APIS
Natural language processing (NLP) tasks, ranging from text classification to text generation, have been revolutionised by pretrained BERT models. This allows corporations to easily build powerful APIs by encapsulating fine-tuned BERT models. These BERT-based APIs are often designed to not only provide...
withdrawn-rejected-submissions
The paper presents novel model stealing attacks against a BERT API. The attacks are split into two phases. In the first phase, the black-box BERT model is recovered by submitting specially crafted data. In the second phase, the inferred model can be used to identify sensitive attributes or to generate adversarial ex...
train
[ "O0P8nszU5nM", "wTlm7q0cIjz", "qyjUjPuDwaR", "CtTQ9Zc1LBM", "c2ADSxyUnh6", "L2VkQCyxvae", "8yvDNotBnJ-", "gElkekg5E6M", "SjbI9E4Q-kH", "8J2a5GXhPt7", "HKv8cpznnwg", "MGQhlOwQvmw", "u5Jacc5eZOq", "9JMsuQ5pNvG" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper is studying the vulnerabilities of modern BERT-based classifiers, which a service provider is hosting using a black-box inference API. Consistent with prior work [2], the authors succeed in extracting high-performing copies of the APIs by training models using the outputs of the API to queries...
[ 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 5, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_7nfCtKep-v", "c2ADSxyUnh6", "iclr_2021_7nfCtKep-v", "gElkekg5E6M", "L2VkQCyxvae", "8yvDNotBnJ-", "MGQhlOwQvmw", "qyjUjPuDwaR", "9JMsuQ5pNvG", "u5Jacc5eZOq", "MGQhlOwQvmw", "O0P8nszU5nM", "iclr_2021_7nfCtKep-v", "iclr_2021_7nfCtKep-v" ]
iclr_2021_0Hj3tFCSjUd
Energy-based View of Retrosynthesis
Retrosynthesis—the process of identifying a set of reactants to synthesize a target molecule—is of vital importance to material design and drug discovery. Existing machine learning approaches based on language models and graph neural networks have achieved encouraging results. However, the inner connections of these mo...
withdrawn-rejected-submissions
Before the discussion phase nearly all reviewers had doubts about the comparison of the current work with state-of-the-art works (notably Yan et al., 2020, RetroXpert, and GraphRETRO). The authors then compared with these works and emphasized that these works rely on hand-crafted features. They argue that the fairest c...
test
[ "jHp9SfIkHw", "Iu2Ojnh8p6z", "9HHX_D2k10c", "tbAjwNlv2f", "ziuy6FSvI3F", "bqRvrAv0VSy", "U356aK-iAO", "kQPWk5VVwar", "Qhukq2uBvct", "ef8GaySb3Wj", "nIVlCorUKTW", "FDC34zfDlvL", "74mJtnpyS1e", "_99sSejhs7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Summary of the paper\nThis paper proposes an energy-based model (EBM) for retrosynthesis. The best model (dual model) leverages the duality of retrosynthesis and reaction prediction. The EBM contains three factors: prior on reactants $p(X)$, forward reaction probability $p(y | X)$ and backward posterior $P(X|y)...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_0Hj3tFCSjUd", "tbAjwNlv2f", "bqRvrAv0VSy", "74mJtnpyS1e", "_99sSejhs7", "nIVlCorUKTW", "Qhukq2uBvct", "jHp9SfIkHw", "nIVlCorUKTW", "FDC34zfDlvL", "iclr_2021_0Hj3tFCSjUd", "iclr_2021_0Hj3tFCSjUd", "iclr_2021_0Hj3tFCSjUd", "iclr_2021_0Hj3tFCSjUd" ]
iclr_2021_uSYfytRBh-f
Efficiently Troubleshooting Image Segmentation Models with Human-In-The-Loop
Image segmentation lays the foundation for many high-stakes vision applications such as autonomous driving and medical image analysis. It is, therefore, of great importance to not only improve the accuracy of segmentation models on well-established benchmarks, but also enhance their robustness in the real world so as t...
withdrawn-rejected-submissions
This paper studies how to efficiently expose failures of "top-performing" segmentation models in the real world and how to leverage such counterexamples to rectify the models. The key idea is to discover the most "controversial" samples from massive online unlabeled images. The approach is sound, well grounded, and quite l...
train
[ "2h8alHDeOBV", "l2XgIhDpi_m", "ZiE5NCctkCf", "OOwFdYXanI_", "IWi1xfprIE", "0lq2IGbNbAa", "iPsdCs25_q", "PX0m4K1t5PX", "znqqZGfHJgB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work used a variety of existing segmentation algorithms to discover the most \"controversial\" samples from massive online unlabeled images. Those representative controversial samples were believed to have the best chance to confuse the algorithm being trained and to expose its weakness. They are rated by annotat...
[ 8, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_uSYfytRBh-f", "2h8alHDeOBV", "PX0m4K1t5PX", "PX0m4K1t5PX", "PX0m4K1t5PX", "znqqZGfHJgB", "znqqZGfHJgB", "iclr_2021_uSYfytRBh-f", "iclr_2021_uSYfytRBh-f" ]
iclr_2021_j6rILItz4yr
ALFA: Adversarial Feature Augmentation for Enhanced Image Recognition
Adversarial training is an effective method to combat adversarial attacks in order to create robust neural networks. By using an auxiliary batch normalization on adversarial examples, it has been shown recently to possess great potential in improving the generalization ability of neural networks for image recognition a...
withdrawn-rejected-submissions
Adversarial training is usually done in the image space by directly optimizing the pixels. This paper suggests adversarial training over intermediate feature spaces in the neural network. The idea is very simple. The authors have done extensive experiments to justify its performance. But the performance gain though...
val
[ "7edAIG3AVbB", "9aDWy1ZcXLH", "U2f66LvvHlD", "uu_Mss_50-I", "juokQ9-cSqu", "9t1aRg_nb3h", "P2N5LQJQ87", "EvZCOga-9Ch", "vCBdCkmqcFS", "ezRXIm7LtWH", "VOFwhNjFUPE", "29_xoejI29a", "xF3HLaclrj", "-MfX8wf5IiJ", "Ks4EQiY6wA", "ZuCHtYBX-fU", "b2WCXNhZ_y2", "RJosmcZcvGU", "OuzLawnPc7A"...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_...
[ "Overview of paper: this work tackles the task of adversarial augmentation for better generalization. Instead of augmenting the pixel space, which is expensive and potentially harder, they augment the intermediate feature representation. As the choice of the particular layer for application of the perturbations ...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_j6rILItz4yr", "uu_Mss_50-I", "uu_Mss_50-I", "zLzFz1c15KN", "ZuCHtYBX-fU", "P2N5LQJQ87", "-MfX8wf5IiJ", "iclr_2021_j6rILItz4yr", "29_xoejI29a", "29_xoejI29a", "1HtSNxGra9I", "Ks4EQiY6wA", "7edAIG3AVbB", "RJosmcZcvGU", "7edAIG3AVbB", "Q-xKdKUPrZ", "Q-xKdKUPrZ", "VOFwhNjFUP...
iclr_2021_lcNa5mQ-CSb
Score-based Causal Discovery from Heterogeneous Data
Causal discovery has witnessed significant progress over the past decades. Most algorithms in causal discovery consider a single domain with a fixed distribution. However, it is commonplace to encounter heterogeneous data (data from different domains with distribution shifts). Applying existing methods on such heteroge...
withdrawn-rejected-submissions
This submission tackles an important problem and presents interesting ideas. I am confident that the research will lead to good publications. However, in the particular situation here, AnonReviewer2 had serious concerns that I share. The authors made a great effort to clarify the situation, but the current sit...
train
[ "rtHp13sc2JU", "2cg1ycCPLtV", "biExnJ72bO", "CEsuMNQynID", "AkBoyY7Cbxk", "j73JQNkMQZy", "ibBO2ftZHbA", "tg0fdLpdWd", "i6ZJVw0YNIR", "XK9SVvkfCe", "Icu6mXZoV3V", "FcdQcUHhHkZ", "3njY0I1L9pk", "TO6pqnJEM6c", "7jGBt5Gh-4f" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have uploaded a revised version of the paper, following the suggestions and comments from the reviewers. The changes include:\n\n1. In Section 1, we add some literature about mixtures of Bayesian networks, following R#1's suggestions.\n2. In Section 1, we add a brief introduction to \"causal sufficiency\", follo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "iclr_2021_lcNa5mQ-CSb", "biExnJ72bO", "AkBoyY7Cbxk", "j73JQNkMQZy", "ibBO2ftZHbA", "XK9SVvkfCe", "XK9SVvkfCe", "FcdQcUHhHkZ", "3njY0I1L9pk", "TO6pqnJEM6c", "7jGBt5Gh-4f", "iclr_2021_lcNa5mQ-CSb", "iclr_2021_lcNa5mQ-CSb", "iclr_2021_lcNa5mQ-CSb", "iclr_2021_lcNa5mQ-CSb" ]
iclr_2021_hE3JWimujG
Cortico-cerebellar networks as decoupled neural interfaces
The brain solves the credit assignment problem remarkably well. For credit to be correctly assigned across multiple cortical areas a given area should, in principle, wait for others to finish their computation. How the brain deals with this locking problem has remained unclear. Deep learning methods suffer from similar...
withdrawn-rejected-submissions
Reviewers were split on this paper, with one arguing that it is an intriguing and significant paper for both neuroscience and deep learning, whereas others argued that it fails to answer some key questions and stops short of offering testable predictions or novel findings. In particular, Reviewer 2 questioned the limited expe...
train
[ "ytvZcRDf2O", "WV-XA_uDIx6", "qjaFwJO59dP", "tDwoQaEfU-c", "vZJ6DlckvCd", "Cm6240NYdnU", "n0MTBxs_G7" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the positive, constructive and detailed feedback. We believe we have now addressed the points raised. In particular, we have added three new figures, a new section (5) and extended the discussion to highlight predictions made by the model. \n\n1. Predictions and comparison with experimental...
[ -1, -1, -1, -1, 3, 5, 7 ]
[ -1, -1, -1, -1, 3, 5, 3 ]
[ "vZJ6DlckvCd", "Cm6240NYdnU", "n0MTBxs_G7", "iclr_2021_hE3JWimujG", "iclr_2021_hE3JWimujG", "iclr_2021_hE3JWimujG", "iclr_2021_hE3JWimujG" ]
iclr_2021_qOCdZn3lQIJ
Compressing gradients in distributed SGD by exploiting their temporal correlation
We propose SignXOR, a novel compression scheme that exploits temporal correlation of gradients for the purpose of gradient compression. Sign-based schemes such as Scaled-sign and SignSGD (Bernstein et al., 2018; Karimireddy et al., 2019) compress gradients by storing only the sign of gradient entries. These methods, ho...
withdrawn-rejected-submissions
The paper introduces a new scheme for compressing gradients in distributed learning which is argued to exploit temporal correlation. The paper received very detailed reviews and generated a lot of discussions (thank you to the reviewers for the amazing job). Many reviewers acknowledge that this is interesting work, a...
train
[ "TMjsibF1-jd", "pwA4p1NN3m", "Wsz6baVzku4", "TuPpavaZ35N", "ZPtlEPNtqf8", "a_-xbxgwSLR", "m5wgKkTqXIs", "f9OtwXCS4xD", "akMTIT5jK_L", "_smLCYbI9TS", "WRy_Jj_jPOA", "SWhue6RMEjr", "-suSt4gLtKa", "5TDJGRCc2qz" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Following the reviewer's latest response, we have incorporated into our paper the reasoning for the connection between lossy compression and temporal correlation. Please see the updated text (in blue) on page 5.", "I am increasing my rating based on your clarification on the source of the gains. \n\nI would encoura...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 2, 6, 4 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "pwA4p1NN3m", "m5wgKkTqXIs", "iclr_2021_qOCdZn3lQIJ", "_smLCYbI9TS", "WRy_Jj_jPOA", "Wsz6baVzku4", "Wsz6baVzku4", "SWhue6RMEjr", "SWhue6RMEjr", "5TDJGRCc2qz", "-suSt4gLtKa", "iclr_2021_qOCdZn3lQIJ", "iclr_2021_qOCdZn3lQIJ", "iclr_2021_qOCdZn3lQIJ" ]
iclr_2021_BVPowUU1cR
Assisting the Adversary to Improve GAN Training
Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to t...
withdrawn-rejected-submissions
The paper received low ratings and the reviewers pointed out a number of issues. The authors' short response failed to address these concerns.
train
[ "YDqZJM49gxP", "9dtCbl5koWR", "n6b_wJ90gyF", "6GAIJ81Ipsg", "-tMUN34gS6", "vySURmuERV7", "dZdPaNTbNxZ", "vUFWPEcVtf9", "l83gld8C01" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "- We thank you for referring us to this prior work, and refer to the general comments above as well as the revised paper for a discussion and our repositioning.\n\n- Regarding \"well known that the gradient vanishing problem may appear if one train the discriminator to optimality\": This is not true for WGAN a...
[ -1, -1, -1, -1, -1, 4, 4, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, 4, 5, 2 ]
[ "vySURmuERV7", "dZdPaNTbNxZ", "vUFWPEcVtf9", "l83gld8C01", "iclr_2021_BVPowUU1cR", "iclr_2021_BVPowUU1cR", "iclr_2021_BVPowUU1cR", "iclr_2021_BVPowUU1cR", "iclr_2021_BVPowUU1cR" ]
iclr_2021_xEpUl1um6V
Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics
With the recent expanding attention of machine learning researchers and practitioners to fairness, there is a void of a common framework to analyze and compare the capabilities of proposed models in deep representation learning. In this paper, we evaluate different fairness methods trained with deep neural networks on ...
withdrawn-rejected-submissions
The paper studies benchmarking of bias mitigation methods. The authors propose a synthetic dataset of images (akin to colored MNIST) that enables a controlled setup over different types of correlations between a binary sensitive attribute, dataset features, and a binary outcome label. The authors have evaluated 2K models...
train
[ "uRtjhmOmFu", "h_e9AK7Zsq", "2kclxKBNne6", "TfNGwf1uoqb", "KEXi4d9D9fo", "rbegbK2m7QH", "xtE5j2c7eix", "lc260k0mQ1", "E6Owj_hDhYQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "#Summary:\n\nThis paper presents an interesting benchmark study on three bias-mitigation algorithms: \n- LAFTR (Madras et al., 2018), an adversarial method for learning fair and transferable representations;\n- CFAIR (Zhao et al., 2020), conditional learning of fair representations;\n- FFVAE (Creager et al., 2019)...
[ 4, 5, -1, -1, -1, -1, -1, 5, 4 ]
[ 5, 2, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_xEpUl1um6V", "iclr_2021_xEpUl1um6V", "E6Owj_hDhYQ", "iclr_2021_xEpUl1um6V", "uRtjhmOmFu", "lc260k0mQ1", "h_e9AK7Zsq", "iclr_2021_xEpUl1um6V", "iclr_2021_xEpUl1um6V" ]
iclr_2021_vttv9ADGuWF
Certified robustness against physically-realizable patch attack via randomized cropping
This paper studies a certifiable defense against adversarial patch attacks on image classification. Our approach classifies random crops from the original image independently and the original image is classified as the vote over these crops. This process minimizes changes to the training process, as only the crop clas...
withdrawn-rejected-submissions
The paper provides a simple prediction procedure to defend against (rectangular) patch attacks, and also a method to obtain some random estimates of the certified robustness of the method. The simplicity of the method is certainly appreciated. On the other hand, there are a number of issues preventing the acceptance of...
val
[ "fxO6iTYgR2", "LMoxIS39q5k", "a6Fvd-HxG4Z", "g74BP57QAVF", "PXvrmhG5MZ_", "CJYhMpM-tix", "2UfUt8xrZtn", "GTFigNVUns" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The method can basically be summarized as majority voting over crops of an image. Moreover, a new certification of the proposed method is introduced; unlike the conventional adversarial robustness certification for perturbations within an $\\ell_p$ ball, the method uses simple geometry and a probability problem ...
[ 5, 5, 4, 5, -1, -1, -1, -1 ]
[ 4, 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2021_vttv9ADGuWF", "iclr_2021_vttv9ADGuWF", "iclr_2021_vttv9ADGuWF", "iclr_2021_vttv9ADGuWF", "g74BP57QAVF", "a6Fvd-HxG4Z", "fxO6iTYgR2", "LMoxIS39q5k" ]