Schema (field name, type, value statistics):

paper_id            string, length 19–21
paper_title         string, length 8–170
paper_abstract      string, length 8–5.01k
paper_acceptance    string, 18 classes
meta_review         string, length 29–10k
label               string, 3 classes
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
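Each record below corresponds to one ICLR submission, with the six `review_*` lists holding the discussion thread. As a hedged sketch (field names come from the schema fields above; the parallel-list invariant — all six lists index-aligned — is inferred from the records, not stated by the source), a record can be modeled and checked in plain Python:

```python
# Sketch: model one record of this dump as a plain dict and check the
# inferred invariant that the six review_* lists are parallel (same
# length, index-aligned). Example values are copied from the first
# record in this dump; the full review texts are elided here.

def validate_record(rec):
    """Raise if any review_* list is not index-aligned with review_ids."""
    n = len(rec["review_ids"])
    for key in ("review_writers", "review_contents", "review_ratings",
                "review_confidences", "review_reply_tos"):
        if len(rec[key]) != n:
            raise ValueError(f"{key} is not parallel to review_ids")

record = {
    "paper_id": "iclr_2021_bhngY7lHu_",
    "paper_title": "Adaptive N-step Bootstrapping with Off-policy Data",
    "paper_acceptance": "withdrawn-rejected-submissions",
    "label": "train",
    "review_ids": ["5csEMD9ujb", "pAhghkOm-_K", "244u53IS73s", "U3QFlQieiEb"],
    "review_writers": ["official_reviewer"] * 4,
    "review_contents": ["..."] * 4,          # full texts elided
    "review_ratings": [4, 4, 3, 5],
    "review_confidences": [5, 4, 4, 4],
    "review_reply_tos": ["iclr_2021_bhngY7lHu_"] * 4,
}

validate_record(record)  # passes: all six lists have length 4
```

This is only a validation sketch for the structure shown here, not an official loader for the dataset.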
iclr_2021_bhngY7lHu_
Adaptive N-step Bootstrapping with Off-policy Data
The definition of the update target is a crucial design choice in reinforcement learning. Due to the low computation cost and empirical high performance, n-step returns with off-policy data is a widely used update target to bootstrap from scratch. A critical issue of applying n-step returns is to identify...
withdrawn-rejected-submissions
This paper studies n-step returns in off-policy RL and introduces a novel algorithm which adapts the return’s horizon n in function of a notion of policy’s age. Overall, the reviewers found that the paper presents interesting observations and promising experimental results. However, they also raised concerns in their i...
train
[ "5csEMD9ujb", "pAhghkOm-_K", "244u53IS73s", "U3QFlQieiEb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an approach to adapting the n parameter in n-step returns according to the off-policyness of the sampled transition. The key novelty to approximate this off-policyness using the age of the policy that generated the data, and demonstrating that there is a fairly reliable relationship between the...
[ 4, 4, 3, 5 ]
[ 5, 4, 4, 4 ]
[ "iclr_2021_bhngY7lHu_", "iclr_2021_bhngY7lHu_", "iclr_2021_bhngY7lHu_", "iclr_2021_bhngY7lHu_" ]
iclr_2021_qU-eouoIyAy
Hyperrealistic neural decoding: Reconstruction of face stimuli from fMRI measurements via the GAN latent space
We introduce a new framework for hyperrealistic reconstruction of perceived naturalistic stimuli from brain recordings. To this end, we embrace the use of generative adversarial networks (GANs) at the earliest step of our neural decoding pipeline by acquiring functional magnetic resonance imaging data as subjects perce...
withdrawn-rejected-submissions
The approach proposed here have raised major concerns from multiple reviewers especially concerning the novelty and the experimental validation procedure.
train
[ "a4bOFmQ8WM4", "up7nqqwF0o", "gp0g02uMlZA", "i5hriLVTKNE", "XqINH1hRp3-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes to reconstruct images of faces from fMRI measurements, using GANs. The authors collected a new dataset, showing static faces generated by a GAN model to human subjects, and recording their brain BOLD responses with MRI. Then, they learned a model to reconstruct the stimulus based on the brain re...
[ 4, 5, 7, 5, 2 ]
[ 3, 4, 5, 4, 5 ]
[ "iclr_2021_qU-eouoIyAy", "iclr_2021_qU-eouoIyAy", "iclr_2021_qU-eouoIyAy", "iclr_2021_qU-eouoIyAy", "iclr_2021_qU-eouoIyAy" ]
iclr_2021_n5go16HF_B
Adversarial Data Generation of Multi-category Marked Temporal Point Processes with Sparse, Incomplete, and Small Training Samples
Asynchronous stochastic discrete event based processes are commonplace in application domains such as social science, homeland security, and health informatics. Modeling complex interactions of such event data via marked temporal point processes (MTPPs) provides the ability of detection and prediction of specific inte...
withdrawn-rejected-submissions
This paper proposes a new generation technique for multi-category marked temporal point processes. The paper was reviewed by three expert reviewers who expressed concerns for limited novel contributions, theoretical justification, and empirical evidence. The authors are encouraged to continue research, taking into con...
train
[ "HeGtEvLLnnA", "cIfQreIhy_T", "FWXH1UqsSln", "5Ugsi6Eaam-", "zo94RIWspfJ", "8zY0h0fAmY", "3qnCg2mbSZF", "A7ouY0w7wMb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Summary:\nThe authors propose a method for multi-category marked temporal point processes (MTPPs) generation with sparse, incomplete, and small training dataset. They apply Adversarial Autoencoder (AAE) and feature mapping techniques, which include a transformation between the categories and timestamps of marked p...
[ 5, -1, -1, 5, -1, -1, -1, 3 ]
[ 4, -1, -1, 4, -1, -1, -1, 4 ]
[ "iclr_2021_n5go16HF_B", "zo94RIWspfJ", "8zY0h0fAmY", "iclr_2021_n5go16HF_B", "A7ouY0w7wMb", "5Ugsi6Eaam-", "HeGtEvLLnnA", "iclr_2021_n5go16HF_B" ]
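In threads that mix reviews with replies, as in the record above, `review_ratings` and `review_confidences` appear to use -1 as a sentinel for entries that are not top-level reviews (author responses and follow-up comments, whose `review_reply_tos` value points at another comment id rather than the `paper_id`). This convention is inferred from the records, not documented by the source. A minimal filtering sketch, with values copied from the record above:

```python
# Sketch: separate top-level reviews from replies using the inferred
# convention that a top-level review's reply_to equals the paper_id,
# while replies carry the sentinel rating/confidence of -1.

def top_level_reviews(rec):
    """Yield (rating, confidence) for top-level review entries only."""
    for rating, conf, reply_to in zip(rec["review_ratings"],
                                      rec["review_confidences"],
                                      rec["review_reply_tos"]):
        if reply_to == rec["paper_id"]:
            yield rating, conf

# The record above: 8 thread entries, 3 of them top-level reviews.
rec = {
    "paper_id": "iclr_2021_n5go16HF_B",
    "review_ratings": [5, -1, -1, 5, -1, -1, -1, 3],
    "review_confidences": [4, -1, -1, 4, -1, -1, -1, 4],
    "review_reply_tos": ["iclr_2021_n5go16HF_B", "zo94RIWspfJ", "8zY0h0fAmY",
                         "iclr_2021_n5go16HF_B", "A7ouY0w7wMb", "5Ugsi6Eaam-",
                         "HeGtEvLLnnA", "iclr_2021_n5go16HF_B"],
}

ratings = [r for r, _ in top_level_reviews(rec)]  # [5, 5, 3]
mean_rating = sum(ratings) / len(ratings)         # (5 + 5 + 3) / 3
```

Filtering on `reply_to == paper_id` is safer than filtering on `rating != -1`, since it does not depend on the sentinel convention holding for every record.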
iclr_2021_PcBVjfeLODY
Constraining Latent Space to Improve Deep Self-Supervised e-Commerce Products Embeddings for Downstream Tasks
The representation of products in a e-commerce marketplace is a key aspect to be exploited when trying to improve the user experience on the site. A well known example of the importance of a good product representation are tasks such as product search or product recommendation. There is however a multitude of lesser k...
withdrawn-rejected-submissions
All reviews are somewhat below the acceptance threshold. The main concerns are in terms of lack of novelty, and that some of the paper's main claims are unsupported. Many of the criticisms are quite focused on specific details, but these seem significant enough to have been deal-breakers for this submission.
train
[ "2ugYaoPUl5M", "L5KzOMIHrAy", "CMNKGz4Y-V7", "lmDI7SZTJx0" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed a way of product representation learning via BYOL framework. Background introduced, model explained, experiments conducted. I don't think this paper is good enough for acceptance. See detailed comments:\n\n1. The writing. The model part is not clearly written, and neither is the experiments set...
[ 3, 3, 5, 4 ]
[ 4, 4, 3, 4 ]
[ "iclr_2021_PcBVjfeLODY", "iclr_2021_PcBVjfeLODY", "iclr_2021_PcBVjfeLODY", "iclr_2021_PcBVjfeLODY" ]
iclr_2021_5Spjp0zDYt
Failure Modes of Variational Autoencoders and Their Effects on Downstream Tasks
Variational Auto-encoders (VAEs) are deep generative latent variable models that are widely used for a number of downstream tasks. While it has been demonstrated that VAE training can suffer from a number of pathologies, existing literature lacks characterizations of exactly when these pathologies occur and how they im...
withdrawn-rejected-submissions
This paper investigates various pathologies that occur when training VAE models. There was quite a bit of discussion (including "private" discussion between the reviewers) about the theory presented. Particular concerns included: For Theorem 1, while the required conditions formalise the setting in which the learned li...
val
[ "B3A2cuPoz2i", "gM5zKauTA7j", "_LCQFE6rblS", "gir7j-6342", "yA5Yko5pX7I", "m7qy9RXUHjA", "0JY2kCjeM2Z", "ZG0v_mwy3Pz", "K8lAE4Njpwz", "kof5q9Qhxx0" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper looks to investigate when VAEs fail to learn the maximum marginal likelihood (MML) model and some of the implications this can have for downstream tasks. In particular, it introduces a theorem (Theorem 1) that provides assumptions under which MML will not be found, and then performs experiments to asse...
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, 5 ]
[ 5, 3, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_5Spjp0zDYt", "iclr_2021_5Spjp0zDYt", "iclr_2021_5Spjp0zDYt", "0JY2kCjeM2Z", "ZG0v_mwy3Pz", "B3A2cuPoz2i", "kof5q9Qhxx0", "_LCQFE6rblS", "gM5zKauTA7j", "iclr_2021_5Spjp0zDYt" ]
iclr_2021_poH5qibNFZ
Neighbourhood Distillation: On the benefits of non end-to-end distillation
End-to-end training with back propagation is the standard method for training deep neural networks. However, as networks become deeper and bigger, end-to-end training becomes more challenging: highly non-convex models gets stuck easily in local optima, gradients signals are prone to vanish or explode during backpropaga...
withdrawn-rejected-submissions
The paper proposes a layer-wise or block-wise distillation scheme, Neighbourhood Distillation, that aims to reduce the training time and to improve parallelism when distilling large teacher networks. By breaking down the end-to-end distillation objective into blocks, the proposed method enables faster distillation when...
train
[ "v1JujUOk3go", "qfK_cm_HDiO", "m9-QMuHHkrf", "e65ePnScK6N", "LxZ5GQDHa0m", "rw7ktYvPKYH" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for review. Please find below some answers to the concerns that were raised in the review.\n\n*Missing a relevant paper. [1] proposes a similar blockwise knowledge distillation method. The authors should cite and explain the differences between ND and [1].*\n\n* Progressive Blockwise KD is indeed a relev...
[ -1, -1, -1, 5, 4, 5 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "e65ePnScK6N", "LxZ5GQDHa0m", "rw7ktYvPKYH", "iclr_2021_poH5qibNFZ", "iclr_2021_poH5qibNFZ", "iclr_2021_poH5qibNFZ" ]
iclr_2021_FcfH5Pskt2G
Clearing the Path for Truly Semantic Representation Learning
The performance of β-Variational-Autoencoders (β-VAEs) and their variants on learning semantically meaningful, disentangled representations is unparalleled. On the other hand, there are theoretical arguments suggesting impossibility of unsupervised disentanglement. In this work, we show that small perturbations of exis...
withdrawn-rejected-submissions
Since the authors have decided to withdraw this submission, it has been rejected from the conference.
train
[ "wCLZeVRluYq", "WQVTWtyRHqa", "TI-wfUHYa6G", "Zu2fawTgjLv", "AwE_0Dn3m1N" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their effort. Given the broad range of feedback and the substantial nature of the raised concerns, we decided to not proceed with this submission.", "\nThis work aims to demonstrate that VAE-based architectures can take advantage of inherent correlation between data to produce represen...
[ -1, 5, 5, 3, 4 ]
[ -1, 2, 4, 2, 5 ]
[ "iclr_2021_FcfH5Pskt2G", "iclr_2021_FcfH5Pskt2G", "iclr_2021_FcfH5Pskt2G", "iclr_2021_FcfH5Pskt2G", "iclr_2021_FcfH5Pskt2G" ]
iclr_2021_sfy1DGc54-M
Towards Robustness against Unsuspicious Adversarial Examples
Despite the remarkable success of deep neural networks, significant concerns have emerged about their robustness to adversarial perturbations to inputs. While most attacks aim to ensure that these are imperceptible, physical perturbation attacks typically aim for being unsuspicious, even if perceptible. However, there ...
withdrawn-rejected-submissions
The pursued here goal to explore what a broader and more nuanced notion of "imperceptible" perturbation is quite intriguing and could be a basis of really impactful investigations. However, as pointed out in the reviews and comments, the current treatment of this topic suffers from significant presentation and framing ...
train
[ "cd6FJira2E", "s6F3cyu449", "SPq3H3-k1Bv", "FHtjVfmEJs", "yGcVtO7efCs", "c7AzRDb79Pw", "layhzsSMJ45", "K4q91bEpHkY", "XnUj9bo5Bwy" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "I like the idea to check whether it might be possible that noise can be injected selectively in areas where it is likely to make the adversarial example less suspicious and thus it might be possible to inject more noise in those directions. But the challenge is that this might increase the saliency of those areas....
[ 4, 6, 4, 3, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2021_sfy1DGc54-M", "iclr_2021_sfy1DGc54-M", "iclr_2021_sfy1DGc54-M", "iclr_2021_sfy1DGc54-M", "SPq3H3-k1Bv", "s6F3cyu449", "FHtjVfmEJs", "cd6FJira2E", "iclr_2021_sfy1DGc54-M" ]
iclr_2021_vNw0Gzw8oki
Physics Informed Deep Kernel Learning
Deep kernel learning is a promising combination of deep neural networks and nonparametric function estimation. However, as a data-driven approach, the performance of deep kernel learning can still be restricted by scarce or insufficient data, especially in extrapolation tasks. To address these limitations, we propose ...
withdrawn-rejected-submissions
The paper presents a framework for incorporating physics knowledge (through, potentially incomplete, differential equations) into the deep kernel learning approach of Wilson et al. The reviewers found the paper addresses an important problem and presents good results. However, one of the main issues raised by R1 is th...
test
[ "gTo_Ff5nbw", "87kmRaPlrqO", "S3Hd1AHSZMy", "C34qxM6EokB", "dxDITktHdfS", "UIQd5u1M0Fb", "TzrcYNmWlUg", "yQLi7LN1aZa", "-8sjp6BR8oP", "BwoSl5-pBMe", "_22_-tiJlSm", "wZ3ClHdcNM5", "TJLX2wDtgVI", "taOqeVrm7s", "FUf02GrxUg1", "0W4eVJrGqdM", "vuQKw3jioQv", "eK3tye_1mf" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Post-discussion update: The authors have clarified their work considerably, and I believe the work is probably correct. However, the paper still suffers from poor presentation and poorly-motivated or justified modelling choices. The current version of the paper has not been updated, and all below issues thus stand...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_vNw0Gzw8oki", "S3Hd1AHSZMy", "dxDITktHdfS", "UIQd5u1M0Fb", "BwoSl5-pBMe", "TzrcYNmWlUg", "yQLi7LN1aZa", "-8sjp6BR8oP", "_22_-tiJlSm", "taOqeVrm7s", "wZ3ClHdcNM5", "gTo_Ff5nbw", "0W4eVJrGqdM", "vuQKw3jioQv", "eK3tye_1mf", "iclr_2021_vNw0Gzw8oki", "iclr_2021_vNw0Gzw8oki", ...
iclr_2021_iMKvxHlrZb3
Scalable Graph Neural Networks for Heterogeneous Graphs
Graph neural networks (GNNs) are a popular class of parametric model for learning over graph-structured data. Recent work has argued that GNNs primarily use the graph for feature smoothing, and have shown competitive results on benchmark tasks by simply operating on graph-smoothed node features, rather than using end-t...
withdrawn-rejected-submissions
This paper proposed an extension of the SIGN model as an efficient and scalable solution to handle prediction problems on heterogeneous graphs with multiple edge types. The approach is quite simple: (1) sample subsets of edge types, then construct graphs with these subsets of edge types and (2) compute node features o...
test
[ "1tWiuOT9FQi", "IeoLVfoXnob", "zXybr9FEmON", "7b3DDZ5lxR", "4dilifakSE5", "ES6usP2sRTx", "xFG5YV_yBm1", "Z6ZuH-1lEV5", "ka7kO2gUK8y", "WJVB599S8lG", "3PImdhRId5K" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper aims to propose a new GNN for heterogeneous graphs, which is scalable to large-scale graphs. The proposed idea is to leverage an existing model called SIGN, which simplifies GCN by dropping the non-linear transformation from intermediate layers, and extend it to heterogeneous graphs. The results on seve...
[ 5, -1, 5, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 5, -1, 5, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_iMKvxHlrZb3", "4dilifakSE5", "iclr_2021_iMKvxHlrZb3", "ES6usP2sRTx", "zXybr9FEmON", "ka7kO2gUK8y", "1tWiuOT9FQi", "WJVB599S8lG", "3PImdhRId5K", "iclr_2021_iMKvxHlrZb3", "iclr_2021_iMKvxHlrZb3" ]
iclr_2021_ES9cpVTyLL
Weak and Strong Gradient Directions: Explaining Memorization, Generalization, and Hardness of Examples at Scale
Coherent Gradients (CGH) [Chatterjee, ICLR 20] is a recently proposed hypothesis to explain why over-parameterized neural networks trained with gradient descent generalize well even though they have sufficient capacity to memorize the training set. The key insight of CGH is that, since the overall gradient for a single...
withdrawn-rejected-submissions
The reviews are concerned about the novelty/incremental nature of the paper and partially also about the conclusions drawn from the experiments. The authors did not take the chance to write a response.
test
[ "YmK787NCNf", "lVKsJlRbDuk", "izgCF_V64LO", "skIHYzhYMGi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Pros\n\nI appreciate an experimental study to delve deeper into the dynamics of CGH. The RM3 algorithm is very interesting and scalable and can perhaps be adapted to be used more regularly for optimization in the presence of noise.\n\n### Cons\n\nIn short my main issue is as follows - The entire idea of the pa...
[ 4, 4, 4, 5 ]
[ 5, 3, 4, 5 ]
[ "iclr_2021_ES9cpVTyLL", "iclr_2021_ES9cpVTyLL", "iclr_2021_ES9cpVTyLL", "iclr_2021_ES9cpVTyLL" ]
iclr_2021_w_haMPbUgWb
Rewriter-Evaluator Framework for Neural Machine Translation
Encoder-decoder architecture has been widely used in neural machine translation (NMT). A few methods have been proposed to improve it with multiple passes of decoding. However, their full potential is limited by a lack of appropriate termination policy. To address this issue, we present a novel framework, Rewriter-Eval...
withdrawn-rejected-submissions
This paper builds upon recent iterative refinement approaches NMT with an evaluator model that controls the termination of the translation process, yielding a “rewriter-evaluator framework” for multi-pass decoding. Their approach is an alternative to the policy network used in Geng et al (EMNLP 2018). The main delta wr...
train
[ "RQLX--vfQg_", "JzvhHJQPYtT", "-FqEh5dlRL_", "I0WSD9GRjEm", "XhSm3CDTUbx", "eDu4Xc9jge1", "Sur1qJMgYwG", "VWhTpON9BXW", "C8JLEBwGU8-", "cuMtEbU_GWG", "nJRQbw0hI3Q" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your valuable feedbacks. For reproducibility, we will release the source code once the paper is accepted.\n\nComment-1: The Zh-En training data is not publicly available.\n\nAnswer-1: The use of NIST Zh->En dataset and WMT’15 En->De dataset follows the experiment settings of prior works [1,2,3] on multi...
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "cuMtEbU_GWG", "-FqEh5dlRL_", "C8JLEBwGU8-", "XhSm3CDTUbx", "eDu4Xc9jge1", "nJRQbw0hI3Q", "VWhTpON9BXW", "iclr_2021_w_haMPbUgWb", "iclr_2021_w_haMPbUgWb", "iclr_2021_w_haMPbUgWb", "iclr_2021_w_haMPbUgWb" ]
iclr_2021_RwQZd8znR10
Intrinsically Guided Exploration in Meta Reinforcement Learning
Deep reinforcement learning algorithms generally require large amounts of data to solve a single task. Meta reinforcement learning (meta-RL) agents learn to adapt to novel unseen tasks with high sample efficiency by extracting useful prior knowledge from previous tasks. Despite recent progress, efficient exploration in...
withdrawn-rejected-submissions
The paper proposes a novel off-policy meta-RL algorithm able to achieve efficient exploration in meta-training able to perform a fast task identification. Although the reviewers agree that this paper has merits (relevant topic, interesting idea, nice experimental analysis), they have raised several concerns about the c...
train
[ "wDxSVmaHS_d", "LWOnlG9lDrp", "4Lm7Gl7D2X0", "mJekm_KlSv", "S2rIJCh6ukF", "2PrN5PBb77l", "jTUhUUEh-sy", "smOymkAP5hh", "6qGVJcl4UA7", "hWv8sjaOCJx" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "*SUMMARY*\n\nThe paper presents a method for efficient task identification to improve adaptation in a meta RL setting. The approach is based on learning an exploration policy to quickly discriminate the task at hand, so that to leverage a task-specific policy for exploitation. To do so, it employs an intrinsic rew...
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ 3, 3, 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_RwQZd8znR10", "iclr_2021_RwQZd8znR10", "iclr_2021_RwQZd8znR10", "LWOnlG9lDrp", "hWv8sjaOCJx", "4Lm7Gl7D2X0", "mJekm_KlSv", "wDxSVmaHS_d", "iclr_2021_RwQZd8znR10", "iclr_2021_RwQZd8znR10" ]
iclr_2021_SkUfhuFsvK-
FASG: Feature Aggregation Self-training GCN for Semi-supervised Node Classification
Recently, Graph Convolutioal Networks (GCNs) have achieved significant success in many graph-based learning tasks, especially for node classification, due to its excellent ability in representation learning. Nevertheless, it remains challenging for GCN models to obtain satisfying prediction on graphs where few nodes ar...
withdrawn-rejected-submissions
This paper presents a self-training idea for GCN models to help improve the node classification. The reviewers agreed that the technical contribution of the proposed approach is limited and the performance improvement seems marginal.
train
[ "D6U1I1zHCr", "A7zZTNVsMP0", "Au0mWE9chC7", "WSxIgbI7FH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a self-training algorithm based on GCN to improve the semi-supervised node classification on graphs. The key idea is to add new nodes with high confidence as supervision to enlarge the labeled nodes. Although the experimental results show the proposed method outperforms or performs similarly to...
[ 3, 4, 4, 4 ]
[ 5, 4, 4, 5 ]
[ "iclr_2021_SkUfhuFsvK-", "iclr_2021_SkUfhuFsvK-", "iclr_2021_SkUfhuFsvK-", "iclr_2021_SkUfhuFsvK-" ]
iclr_2021_PcUprce4TM2
CAFE: Catastrophic Data Leakage in Federated Learning
Private training data can be leaked through the gradient sharing mechanism deployed in machine learning systems, such as federated learning (FL). Increasing batch size is often viewed as a promising defense strategy against data leakage. In this paper, we revisit this defense premise and propose an advanced data ...
withdrawn-rejected-submissions
This paper focuses attacks on federated learning. The reviewers had the following concerns: - The assumption of knowledge of batch indices is unrealistic in an HFL setting - The setup only works when doing a single epoch (I believe the authors claim that it is applicable in more general settings, but evidence to that e...
train
[ "3dtTBbwvq9R", "SdGMGmJz-sm", "1Aqwz0Qq8I3", "lVqxoER1xKA", "738eqluJYS-", "-b7PWDT8np", "lLnPaIM72P3", "17fnXB-rVHO", "BYFnLabnNEs" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the careful review and constructive feedback.\n\n•In VFL systems, since each local worker contains part and incomplete feature space of the data. To successfully train the VFL model, it’s important to make sure the data features from each local worker are aligned according to the data in...
[ -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ -1, -1, -1, -1, -1, 4, 3, 5, 2 ]
[ "-b7PWDT8np", "lLnPaIM72P3", "17fnXB-rVHO", "BYFnLabnNEs", "iclr_2021_PcUprce4TM2", "iclr_2021_PcUprce4TM2", "iclr_2021_PcUprce4TM2", "iclr_2021_PcUprce4TM2", "iclr_2021_PcUprce4TM2" ]
iclr_2021_K3qa-sMHpQX
ForceNet: A Graph Neural Network for Large-Scale Quantum Chemistry Simulation
Machine Learning (ML) has a potential to dramatically accelerate large-scale physics-based simulations. However, practical models for real large-scale and complex problems remain out of reach. Here we present ForceNet, a model for accurate and fast quantum chemistry simulations to accelerate catalyst discovery for rene...
withdrawn-rejected-submissions
The model presented here may be of use to others in running quantum chemistry simulations, and it may well lead to new advances, but the authors did not sufficiently address the key concerns around the model not being energy conserving and rotation covariant. The approach proposed could be learning such physical rules,...
train
[ "Z2ewseZK0HR", "tfJ3DbkpKg-", "aupkvzzh0_g", "m25axpHFrRz", "TqPlpXSNhwK", "48yU1insGMb", "c3-FpyPD-Lu", "KxBPhdn-j08", "gxlJtUhc46" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper propose a neural network force field that predicts atomic forces directly. This has the benefit of not requiring to differentiate an energy model and may be more flexible. The approach is well motivated and the paper is well structured and written. A strong point of the paper is the extensive discussion ...
[ 6, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ 4, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "iclr_2021_K3qa-sMHpQX", "c3-FpyPD-Lu", "iclr_2021_K3qa-sMHpQX", "gxlJtUhc46", "Z2ewseZK0HR", "KxBPhdn-j08", "iclr_2021_K3qa-sMHpQX", "iclr_2021_K3qa-sMHpQX", "iclr_2021_K3qa-sMHpQX" ]
iclr_2021_KBWK5Y92BRh
Neighborhood-Aware Neural Architecture Search
Existing neural architecture search (NAS) methods often return an architecture with good search performance but generalizes poorly to the test setting. To achieve better generalization, we propose a novel neighborhood-aware NAS formulation to identify flat-minima architectures in the search space, with the assumption t...
withdrawn-rejected-submissions
This paper proposes a new NAS methods that when doing architecture search, returns flat minima using based on a notion of distance defined for two cells (Eq. (2)). Authors then evaluation the effectiveness of the proposed methods against prior work on several benchmarks. As authors have discussed in the paper, the ide...
train
[ "RydFGH04m0f", "aSWQy9FKJcC", "ca2WTy9SOqH", "uDQnwfSz5ts", "eyG137lfvFc", "vUsCoDVACs", "y3Q-rboTOn", "tdO3MKS69ph", "cXLzcqsESLS" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "** Summary \nThe authors proposed neighborhood-aware neural architecture search, where during the evaluation phase during search, the neighborhood of an architecture is considered. Specifically, when an architecture $\\alpha$ is picked, its neighbors $\\mathcal{N}(\\alpha)$ all contribute to the performance valida...
[ 6, 6, -1, -1, -1, -1, -1, 4, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_KBWK5Y92BRh", "iclr_2021_KBWK5Y92BRh", "vUsCoDVACs", "RydFGH04m0f", "aSWQy9FKJcC", "tdO3MKS69ph", "cXLzcqsESLS", "iclr_2021_KBWK5Y92BRh", "iclr_2021_KBWK5Y92BRh" ]
iclr_2021_HQoCa9WODc0
Suppressing Outlier Reconstruction in Autoencoders for Out-of-Distribution Detection
While only trained to reconstruct training data, autoencoders may produce high-quality reconstructions of inputs that are well outside the training data distribution. This phenomenon, which we refer to as outlier reconstruction, has a detrimental effect on the use of autoencoders for outlier detection, as an autoencod...
withdrawn-rejected-submissions
The paper proposes to use reconstruction error of autoencoder as the energy function and normalize the resulting density for detecting anomalous/OOD examples. Reviewers have raised several concerns with the paper, including, lack of insights into why the AE energy is better for OOD detection than other energy function ...
train
[ "KaqeYIwZwRm", "vSEWhT6IXCS", "SWoyaklTp2h", "QzIWxLsPRn4", "lPoWl77-HN" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Authors and Reviewers,\n\nWe found that the current paper missed some important references about pioneering works that are related to energy-based generative models parameterized with deep net energy.\n\nThe first paper that proposes to train an energy-based model parameterized by modern deep neural network a...
[ -1, 4, 5, 5, 4 ]
[ -1, 5, 4, 4, 4 ]
[ "iclr_2021_HQoCa9WODc0", "iclr_2021_HQoCa9WODc0", "iclr_2021_HQoCa9WODc0", "iclr_2021_HQoCa9WODc0", "iclr_2021_HQoCa9WODc0" ]
iclr_2021_g4szfsQUdy3
Implicit Regularization Effects of Unbiased Random Label Noises with SGD
Random label noises (or observational noises) widely exist in practical machinelearning settings. we analyze the learning dynamics of stochastic gradient descent(SGD) over the quadratic loss with unbiased label noises, and investigate a newnoise term of dynamics, which is dynamized and influenced by mini-batch sam-plin...
withdrawn-rejected-submissions
The reviewers pointed out that the claims made in this submission have already appeared (in even stronger forms) before, to which the authors seem to agree. Therefore, this submission is not ready for publication in its current form.
train
[ "Y-e9tlZiJOC", "Re_R5E0QhC8", "Anyitg_gxCh", "busy-cBhoVj", "yUbGoEo1oV7" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Many thanks for the reviews. We acknowledge that the high-level conclusions made in this manuscript already exist in the earlier work (Blanc et al. 2020), though we adopt a different model to characterize SGD and empirically evaluate the findings using deep neural networks with benchmark datasets. We regret that w...
[ -1, 3, 4, 2, 3 ]
[ -1, 3, 4, 5, 4 ]
[ "iclr_2021_g4szfsQUdy3", "iclr_2021_g4szfsQUdy3", "iclr_2021_g4szfsQUdy3", "iclr_2021_g4szfsQUdy3", "iclr_2021_g4szfsQUdy3" ]
iclr_2021_nXSDybDWV3
Einstein VI: General and Integrated Stein Variational Inference in NumPyro
Stein Variational Inference is a technique for approximate Bayesian inferencethat is recently gaining popularity since it combines the scalability of traditionalVariational Inference (VI) with the flexibility of non-parametric particle basedinference methods. While there has been considerable progress in developmentof...
withdrawn-rejected-submissions
All reviewers have carefully reviewed and discussed this paper. They are in consensus that this manuscript merits a strong revision. I encourage the authors to take these experts' thoughts into consideration in revising their manuscript.
train
[ "4MtuUJ64x25", "IHRTiiNQZzT", "nJ-HCWwsiYe", "fLFt8zQ1qOg", "WRmJT033YOQ", "yZWRjRbKCgB", "j0Keb4w45G5", "xFK60pknP5S", "YM69dKGw_Xv", "yw1cGGf9hkU", "9Z5s9QlxIBq" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear authors of paper 1105,\n\nThank you for your response. I agree with AnonReviewer2 that the paper needs comparison to other methods. In addition, it is important that the paper highlights the importance of having a dedicated library for Stein inference through examples / a case study. Currently, there is not e...
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "fLFt8zQ1qOg", "yZWRjRbKCgB", "xFK60pknP5S", "YM69dKGw_Xv", "yw1cGGf9hkU", "9Z5s9QlxIBq", "xFK60pknP5S", "iclr_2021_nXSDybDWV3", "iclr_2021_nXSDybDWV3", "iclr_2021_nXSDybDWV3", "iclr_2021_nXSDybDWV3" ]
iclr_2021_MDX3F0qAfm3
Can We Use Gradient Norm as a Measure of Generalization Error for Model Selection in Practice?
The recent theoretical investigation (Li et al., 2020) on the upper bound of generalization error of deep neural networks (DNNs) demonstrates the potential of using the gradient norm as a measure that complements validation accuracy for model selection in practice. In this work, we carry out empirical studies using se...
withdrawn-rejected-submissions
Dear authors, Thank you for your submission. The reviewers all appreciated the direction of research and the message that GN can be a bad measure of generalization. That said, they all shared concerns regarding the strength of the conclusions that can be drawn from your work. I encourage you to address their comments...
train
[ "h5pb0oyBc6", "zd9Sh8CeT2", "oNkFQj_Ycvx", "dRGwjlLK88-", "TWspKlLaIo4", "Pt2yViZ1E3n", "10DuQsYl0Je", "zJ73WJcLLym", "YuUvrJcsl_C", "6nfM2UmeeF7" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper studies the gradient norm as a measure of generalization in deep learning. The authors first an approximation to the gradient norm (GN) that is the norm of the gradients for only fully connected layers (AGN). Then they empirically evaluate the correlation between AGN and GN as well as GN and t...
[ 4, 4, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_MDX3F0qAfm3", "iclr_2021_MDX3F0qAfm3", "dRGwjlLK88-", "Pt2yViZ1E3n", "YuUvrJcsl_C", "zd9Sh8CeT2", "6nfM2UmeeF7", "h5pb0oyBc6", "iclr_2021_MDX3F0qAfm3", "iclr_2021_MDX3F0qAfm3" ]
iclr_2021_YPm0fzy_z6R
Signed Graph Diffusion Network
Given a signed social graph, how can we learn appropriate node representations to infer the signs of missing edges? Signed social graphs have received considerable attention to model trust relationships. Learning node representations is crucial to effectively analyze graph data, and various techniques such ...
withdrawn-rejected-submissions
The paper addresses an interesting problem of clustering/link prediction/representation learning of signed graphs, where edge weights are allowed to take either positive or negative values. The paper proposed an end to end pipeline targeted at link sign prediction and the feature diffusion step. The reviewers think the...
train
[ "E-I4nPI1t16", "PB161zFtuD", "DXw2L6hTspj", "DyQAVOLFoGU", "-DrNlWUCLi8", "g5KzPHdPDng", "VPFq0MXCWge", "R1fRpbLksm2" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We deeply appreciate your dedicated review of our paper. As the reviewer mentioned, our model is inspired by two existing methods: SRWR [1] and APPNP [2]. As an unsupervised approach, SRWR computes trustworthiness scores between nodes in signed graphs. The paper of APPNP addressed the over-smoothing issue by util...
[ -1, -1, -1, -1, 4, 6, 4, 7 ]
[ -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "-DrNlWUCLi8", "g5KzPHdPDng", "VPFq0MXCWge", "R1fRpbLksm2", "iclr_2021_YPm0fzy_z6R", "iclr_2021_YPm0fzy_z6R", "iclr_2021_YPm0fzy_z6R", "iclr_2021_YPm0fzy_z6R" ]
iclr_2021_UAAJMiVjTY_
Abductive Knowledge Induction from Raw Data
For many reasoning-heavy tasks, it is challenging to find an appropriate end-to-end differentiable approximation to domain-specific inference mechanisms. Neural-Symbolic (NeSy) AI divides the end-to-end pipeline into neural perception and symbolic reasoning, which can directly exploit general domain knowledge such as a...
withdrawn-rejected-submissions
The paper addresses the difficult problem of combining ILP in a meta-interpretive framework with noisy inputs from a neural system. The essential idea is to use MIL to "efficiently" search for constraints on the neural outputs (eg z1 + z2 + z3 = 7, or z2< z3) as well as logic programs, with a score related to program...
val
[ "Za7kj-HW5Lb", "Hd84txFYpce", "DBEX4geIFF2", "9CR7iFWsCBK", "6e_mJJvwI1a", "ZrgE83Mvb0q", "kdy-1I8Z66j", "nY6lW3kz9F6", "fgc_q8m7IHt", "0XD_iigxXJ5", "ZloEDUa71Y", "iOF3pnR1Rg", "WGgOeUWU-t2", "5CPJwzn4rM5", "C4imHvOcWH", "EvjRWfDQjo8" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the clarifications. My biggest concern about this paper remains to be missing comparison with other related works.\n\nIn general, combining rule learning and pattern recognition tools, such as deep networks have been broadly studied. The authors have presented a hybrid approach of deep nets + ILP. Th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "ZrgE83Mvb0q", "6e_mJJvwI1a", "9CR7iFWsCBK", "kdy-1I8Z66j", "nY6lW3kz9F6", "C4imHvOcWH", "WGgOeUWU-t2", "WGgOeUWU-t2", "5CPJwzn4rM5", "EvjRWfDQjo8", "iOF3pnR1Rg", "iclr_2021_UAAJMiVjTY_", "iclr_2021_UAAJMiVjTY_", "iclr_2021_UAAJMiVjTY_", "iclr_2021_UAAJMiVjTY_", "iclr_2021_UAAJMiVjTY_"...
iclr_2021_imnG4Ap9dAd
News-Driven Stock Prediction Using Noisy Equity State Representation
News-driven stock prediction investigates the correlation between news events and stock price movements. Previous work has considered effective ways for representing news events and their sequences, but rarely exploited the representation of underlying equity states. We address this issue by making use of a...
withdrawn-rejected-submissions
In this paper, the authors propose a model for integrating news representations for stock predictions. While the research direction has good value in real applications, it seems that this particular paper has not done a sufficiently good job in pushing the frontier of this direction. The reviewers have raised quite a f...
train
[ "4UXwQZdrT3r", "cfdWYXainHb", "XoOODuWQsWc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this paper, the authors propose a model for integrating news representations for stock predictions. The authors claim that their proposed model outperforms the previous baselines. \n\nHere are my comments: \n\n1. Presentation needs improvement. Some statements are difficult to follow, or more explanations are n...
[ 5, 5, 6 ]
[ 4, 4, 4 ]
[ "iclr_2021_imnG4Ap9dAd", "iclr_2021_imnG4Ap9dAd", "iclr_2021_imnG4Ap9dAd" ]
iclr_2021_xCm8kiWRiBT
Adversarial Attacks on Binary Image Recognition Systems
We initiate the study of adversarial attacks on models for binary (i.e. black and white) image classification. Although there has been a great deal of work on attacking models for colored and grayscale images, little is known about attacks on models for binary images. Models trained to classify binary images are used i...
withdrawn-rejected-submissions
This paper was referred to the ICLR 2021 Ethics Review Committee based on concerns about a potential violation of the ICLR 2021 Code of Ethics (https://iclr.cc/public/CodeOfEthics) raised by reviewers. The paper was carefully reviewed by two committee members, who provided a binding decision. The decision is "Significa...
train
[ "0R-iEJVin4b", "e_Ib5HiR4g7", "W_Zz0P__9Cc", "RXFkyG06U4", "FAMnjwtZPuS", "KI6euUN2-K", "5pfEZ1-sJj", "VKbkCFx1_UA", "3B2lt5tJyn" ]
[ "public", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper was referred to the Ethics Panel because of the comment by AnonReviewer4: “The authors make the case that this is a realistic attack on a high-value target. I believe it is worth opening a discussion on whether this raises ethical issues for publication.” The primary ethical issue is whether the authors...
[ -1, -1, -1, -1, -1, 5, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 2, 3 ]
[ "iclr_2021_xCm8kiWRiBT", "KI6euUN2-K", "5pfEZ1-sJj", "VKbkCFx1_UA", "3B2lt5tJyn", "iclr_2021_xCm8kiWRiBT", "iclr_2021_xCm8kiWRiBT", "iclr_2021_xCm8kiWRiBT", "iclr_2021_xCm8kiWRiBT" ]
iclr_2021_jDIWFyftpQh
Discriminative Cross-Modal Data Augmentation for Medical Imaging Applications
While deep learning methods have shown great success in medical image analysis, they require a number of medical images to train. Due to data privacy concerns and unavailability of medical annotators, it is oftentimes very difficult to obtain a lot of labeled medical images for model training. In this paper, we study c...
withdrawn-rejected-submissions
The paper proposes a method for data augmentation by cross-modal data generation. While the reviewers agree that the paper addresses a relevant and important problem in medical imaging, they also agree on that the paper has limited novelty over the state of the art. Also the setup of experimental validation to comparis...
train
[ "z0okLeUou9m", "gaFPxfHMdzJ", "8PxXXzrE58r", "mkW1xWf2UJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose an algorithm to enlarge the training set for image classification problems in certain medical applications where training data of the target modality is scarce. They do so by training an unpaired image-to-image translator network and an image classifier end-to-end in order to utilize labeled im...
[ 5, 4, 5, 6 ]
[ 4, 4, 4, 5 ]
[ "iclr_2021_jDIWFyftpQh", "iclr_2021_jDIWFyftpQh", "iclr_2021_jDIWFyftpQh", "iclr_2021_jDIWFyftpQh" ]
iclr_2021_2nm0fGwWBMr
PanRep: Universal node embeddings for heterogeneous graphs
Learning unsupervised node embeddings facilitates several downstream tasks such as node classification and link prediction. A node embedding is universal if it is designed to be used by and benefit various downstream tasks. This work introduces PanRep, a graph neural network (GNN) model, for unsupervised learning of un...
withdrawn-rejected-submissions
Although the paper is clearly written overall and well motivated, reviewers raised several crucial concerns and, unfortunately, the authors did not respond to reviews. During the discussion, reviewers agreed that this submission is not ready for publication. In particular, empirical evaluation is not thorough as i...
test
[ "J_uYjcV5s8r", "e1x_PnwPUzX", "DUqs_VHqULL", "Qq5ubFiM5KA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper proposes a universal and unsupervised GNN-based representation learning (node embedding pretraining) model named PanRep for heterogeneous graphs, which benefits a variety of downstream tasks such as node classification and link prediction. More specifically, employing an encoder similar to R-GC...
[ 6, 5, 4, 5 ]
[ 3, 4, 5, 4 ]
[ "iclr_2021_2nm0fGwWBMr", "iclr_2021_2nm0fGwWBMr", "iclr_2021_2nm0fGwWBMr", "iclr_2021_2nm0fGwWBMr" ]
iclr_2021_pHkBwAaZ3UK
Learning Discrete Adaptive Receptive Fields for Graph Convolutional Networks
Different nodes in a graph neighborhood generally yield different importance. In previous work of Graph Convolutional Networks (GCNs), such differences are typically modeled with attention mechanisms. However, as we prove in our paper, soft attention weights suffer from over-smoothness in large neighborhoods. To addres...
withdrawn-rejected-submissions
Four knowledgeable referees lean towards rejection because of the missing detailed complexity analysis [R1,R2,R3], the choice of rather small datasets which hinders the rigorous evaluation of GNN models [R3,R4], missing state-of-the-art comparisons [R2] and ablations [R4]. The rebuttal addressed some of the concerns ra...
train
[ "weuJhfgNhZu", "ljfq6983Leo", "i0fZNDFBcA", "d9ibwMFch_", "kGMEUfSLpLM", "XPH0SrRYvYs", "wNPe-lqT6b", "5ZKkTW4kN6J", "FNacOVbfK14", "a6dpzdlBsHg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "To address the over-smoothness in neighborhood caused by soft-attention in GNN, this paper presents an idea of adaptive receptive fields (ARFs), which can choose contexts on different hops from the central node, so as to efficiently explore dependencies with longer distances. The construction of ARFs follows a re...
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, 5 ]
[ 5, 4, 2, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_pHkBwAaZ3UK", "iclr_2021_pHkBwAaZ3UK", "iclr_2021_pHkBwAaZ3UK", "XPH0SrRYvYs", "wNPe-lqT6b", "ljfq6983Leo", "a6dpzdlBsHg", "i0fZNDFBcA", "weuJhfgNhZu", "iclr_2021_pHkBwAaZ3UK" ]
iclr_2021_VwU1lyi5nzb
MULTI-SPAN QUESTION ANSWERING USING SPAN-IMAGE NETWORK
Question-answering (QA) models aim to find an answer given a question and con- text. Language models like BERT are used to associate question and context to find an answer span. Prior art on QA focuses on finding the best answer. There is a need for multi-span QA models to output the top-K likely answers to questions s...
withdrawn-rejected-submissions
This paper is not ready for publication at ICLR, as agreed unanimously by the reviewers. There are three main reasons for that: 1. Novelty: it is mentioned in the paper that “To the best of our knowledge, a multi-span QA architecture has not been proposed", which is certainly incorrect. See the multiple references ...
train
[ "IAU0cnsAFB1", "fEzTo8YBIkR", "rO6dl5aZwid", "S3YsLI8JsaC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "#### Summary\nThe paper proposes a novel method for predicting multiple answer spans in question-answering (QA) tasks. When the Span-Image technique is applied to a base BERT model, the authors show performance gains on a single-span dataset (SQuAD) and substantial improvements on a multi-span dataset (an internal...
[ 5, 4, 1, 3 ]
[ 3, 4, 5, 4 ]
[ "iclr_2021_VwU1lyi5nzb", "iclr_2021_VwU1lyi5nzb", "iclr_2021_VwU1lyi5nzb", "iclr_2021_VwU1lyi5nzb" ]
iclr_2021_6htjOqus6C3
DynamicVAE: Decoupling Reconstruction Error and Disentangled Representation Learning
This paper challenges the common assumption that the weight β, in β-VAE, should be larger than 1 in order to effectively disentangle latent factors. We demonstrate that β-VAE, with β<1, can not only attain good disentanglement but also significantly improve reconstruction accuracy via dynamic control. The paper \textit...
withdrawn-rejected-submissions
The paper is in general well written and easy to follow, and the considered approach of controlling beta is sensible. However, all reviewers identify shortcomings in the empirical analysis of the proposed method (missing comparison with stronger baselines, convergence issues of the considered baselines, considered data...
train
[ "ly54xXa1OJ1", "For3EgHTeU", "621aQGPY-pm", "xEMTvZTbbyN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In $\\beta$-VAE, one challenge is to choose the hyper-parameter $\\beta$ that controls the trade-off between the reconstruction quality and the disentanglement. This paper proposes a method called DynamicVAE. Rather than using a fixed hyperparameter $\\beta$, the method leverages a modified incremental Proportiona...
[ 4, 4, 4, 4 ]
[ 4, 5, 4, 5 ]
[ "iclr_2021_6htjOqus6C3", "iclr_2021_6htjOqus6C3", "iclr_2021_6htjOqus6C3", "iclr_2021_6htjOqus6C3" ]
iclr_2021_OEgDatKuz2O
EMTL: A Generative Domain Adaptation Approach
We propose an unsupervised domain adaptation approach based on generative models. We show that when the source probability density function can be learned, one-step Expectation–Maximization iteration plus an additional marginal density function constraint will produce a proper mediator probability density function to b...
withdrawn-rejected-submissions
This work proposes an EM type of approach for domain adaptation under covariate shift. The approach is well motivated and developed, and experimentally evaluated on synthetic data. Pro: - The EM type of framework is simple and natural, and a promising direction for DA, which should be explored and analyzed further. Con: - ...
train
[ "tQzg5ouEnKX", "w7zf12ZF9vZ", "97XgRYd7qC", "hSmZ-PufiD", "72d16qiwr4", "qUS3g9msTbZ", "NE3nA3OJYn5", "9-2cEET7_I", "EggB1lZX0Jt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author" ]
[ "In this paper, the authors propose generative domain adaptation approach called EMTL. The key idea is to model a mediator distribution which can approximate the true target joint distribution. Specifically, the authors apply an E-M strategy to infer the model parameters. Experimental studies are done on both synth...
[ 3, 4, 5, 3, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2021_OEgDatKuz2O", "iclr_2021_OEgDatKuz2O", "iclr_2021_OEgDatKuz2O", "iclr_2021_OEgDatKuz2O", "w7zf12ZF9vZ", "tQzg5ouEnKX", "EggB1lZX0Jt", "97XgRYd7qC", "hSmZ-PufiD" ]
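In the ratings and confidences rows throughout this dump, `-1` is a sentinel: it appears exactly where the corresponding `review_writers` entry is an author or public comment rather than an official review, as in the ratings row of the record directly above. A hedged sketch of summarising ratings while skipping that sentinel — the helper name `mean_rating` is ours, not part of the dataset:

```python
# Sketch: -1 in review_ratings / review_confidences lines up with
# non-reviewer entries (author or public comments), so summary statistics
# should skip it. The sample row is the ratings line of the record above.
review_ratings = [3, 4, 5, 3, -1, -1, -1, -1, -1]

def mean_rating(ratings, sentinel=-1):
    """Average the official reviewer ratings, ignoring sentinel entries."""
    valid = [r for r in ratings if r != sentinel]
    return sum(valid) / len(valid) if valid else None

avg = mean_rating(review_ratings)  # (3 + 4 + 5 + 3) / 4 = 3.75
```

Returning `None` for an all-sentinel row avoids a division by zero on records whose visible comments are entirely author responses.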
iclr_2021_heqv8eIweMY
Non-Local Graph Neural Networks
Modern graph neural networks (GNNs) learn node embeddings through multilayer local aggregation and achieve great success in applications on assortative graphs. However, tasks on disassortative graphs usually require non-local aggregation. In addition, we find that local aggregation is even harmful for some disassortati...
withdrawn-rejected-submissions
This paper is right at the borderline: the reviewers agree it is well written, proposing a simple but interesting idea. However, there was a feeling among the reviewers (especially reviewer 1) that the paper could be strengthened considerably with a better discussion/some theory on the sufficiency of the calibration ve...
train
[ "Pg48FelWiR", "KwtALQwC5T", "Ac2fjZ6yVeu", "SO4JL_41V6C", "X1ffzkK4j3b", "vRfiv7dsDp", "23ZUSip9ZD", "Yg1vPVUcTSc", "rmLCjOsGhIV", "h_P6oa5Tmsp", "Ny4Qm9Xc8lz", "DYACM6AlcNn", "WVCNrKCO2X", "NU4cggIX4DC" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper targets on addressing the node embedding problem in disassortative graphs. A non-local aggregation framework is proposed, since local aggregation may be harmful for some disassortative graphs. To address the high computational cost in the recent Geom-GCN model that has an attention-like step to compute ...
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7 ]
[ 5, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_heqv8eIweMY", "iclr_2021_heqv8eIweMY", "SO4JL_41V6C", "Ny4Qm9Xc8lz", "vRfiv7dsDp", "DYACM6AlcNn", "iclr_2021_heqv8eIweMY", "WVCNrKCO2X", "NU4cggIX4DC", "KwtALQwC5T", "Pg48FelWiR", "WVCNrKCO2X", "iclr_2021_heqv8eIweMY", "iclr_2021_heqv8eIweMY" ]
iclr_2021_jcpcUjw7Kzz
Discrete Predictive Representation for Long-horizon Planning
Discrete representations have been key in enabling robots to plan at more abstract levels and solve temporally-extended tasks more efficiently for decades. However, they typically require expert specifications. On the other hand, deep reinforcement learning aims to learn to solve tasks end-to-end, but struggles with lo...
withdrawn-rejected-submissions
This paper explores a foundational problem in AI around learning abstractions that allow for easier planning. The work proposes a specific procedure for learning temporally abstract, discrete representations in which it becomes tractable to perform graph-based search. Evaluation is performed on two 2D tasks where a g...
train
[ "XF_kH80aGX", "dh5xg3bq8RK", "5RQDNq4_78", "YuMZm28qARt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper tackles the problem of long horizon visual planning, with the aim of of being able to plan actions to reach distant goals. This is a well studied problem, and like prior work this method considers the setting where the agent is given an offline dataset of interaction, which it learns from to be able to ...
[ 4, 4, 4, 4 ]
[ 4, 4, 4, 4 ]
[ "iclr_2021_jcpcUjw7Kzz", "iclr_2021_jcpcUjw7Kzz", "iclr_2021_jcpcUjw7Kzz", "iclr_2021_jcpcUjw7Kzz" ]
iclr_2021_D2TE6VTJG9
Predicting What You Already Know Helps: Provable Self-Supervised Learning
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks), that do not require labeled data, to learn semantic representations. These pretext tasks are created solely using the input features, such as predicting a missing image patch, recovering the color channels of an image fr...
withdrawn-rejected-submissions
This paper proposes a mathematical framework to theoretically understand and quantify the benefit of self-supervision on the downstream tasks. The theoretical analyses in this paper are concrete and the authors conducted experiments to support their claims. However, the current version still has the following weaknesse...
val
[ "tsXGhnJ2IOd", "YeFbdGJa5T", "p923wcwgTMb", "2t20zynyn6", "SunosyC8pYj", "6shuCmeyM0y", "85GZPorYHPI", "L4gTzLYQ5aj", "ec-G11UuHqq", "AaX1P6L7y4s", "-edXpxoq6R7", "i2IJns_QCwN", "fhhB5fpr1pt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "__post-rebuttal__\nThe responses have been persuasive enough. I am raising my score, with an expectation that the authors will make additional textual revisions based on their responses to make it clear in the abstract and introduction that (1) authors only consider the \"reconstruction-based\" SSL, instead of SSL...
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_D2TE6VTJG9", "iclr_2021_D2TE6VTJG9", "iclr_2021_D2TE6VTJG9", "SunosyC8pYj", "ec-G11UuHqq", "85GZPorYHPI", "fhhB5fpr1pt", "i2IJns_QCwN", "tsXGhnJ2IOd", "YeFbdGJa5T", "p923wcwgTMb", "iclr_2021_D2TE6VTJG9", "iclr_2021_D2TE6VTJG9" ]
iclr_2021_sgNhTKrZjaT
Guiding Representation Learning in Deep Generative Models with Policy Gradients
Variational Auto Encoders (VAEs) provide an efficient latent space representation of complex data distributions which is learned in an unsupervised fashion. Using such a representation as input to Reinforcement Learning (RL) approaches may reduce learning time, enable domain transfer or improve interpretability of ...
withdrawn-rejected-submissions
Due to uniformly unfavourable reviews and lack of author engagement in the discussion period, this paper is rejected.
test
[ "9CmBPt0tOlP", "YtueAiI_rUN", "RuR3tUMRBi", "xsSRr2EEso" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "My recommendation would be to reject this paper for reasons which I will outline below.\n\nPros:\nThe problem of learning good features for RL is an important one to tackle and has application in many important tasks with complex input spaces like vision. The authors explain why this is important and I encourage t...
[ 2, 4, 3, 1 ]
[ 4, 4, 5, 5 ]
[ "iclr_2021_sgNhTKrZjaT", "iclr_2021_sgNhTKrZjaT", "iclr_2021_sgNhTKrZjaT", "iclr_2021_sgNhTKrZjaT" ]
iclr_2021_MP0LhG4YiiC
Analogical Reasoning for Visually Grounded Compositional Generalization
Children acquire language subconsciously by observing the surrounding world and listening to descriptions. They can discover the meaning of words even without explicit language knowledge, and generalize to novel compositions effortlessly. In this paper, we bring this ability to AI, by studying the task of multimodal co...
withdrawn-rejected-submissions
The authors propose a new dataset and compositional task based on the EPIC Kitchens dataset. The goal is to test novel compositions and to build a transformer based network specifically for this inference (by analogy). Specifically, the analogy here references the use of nearest neighbors in the dataset. There are a ...
train
[ "de5e4EpbFDb", "sMT80wTXXGm", "o8OjQcqAh8M", "mfMamrzNmjF", "I638oXZ5BpZ", "2b-zv0-CUmK", "dbuE53JUOc", "Jv-jcJ8Cbp-", "nrPu34PrXdM", "QlbFmvUC7zQ", "9U5Xx29qJj", "9DmuLMNZETj" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate your comments in recognizing several of the novel ideas and thorough experiments. We address all of your concerns and questions below.\n\n* **C1- Notations:** We clarified and revised the mentioned notations and variables in Equation 2 and 8.\n * **C1.1 - Equation 2 Analogical Attention**\n * Abo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "9U5Xx29qJj", "I638oXZ5BpZ", "2b-zv0-CUmK", "QlbFmvUC7zQ", "de5e4EpbFDb", "mfMamrzNmjF", "mfMamrzNmjF", "de5e4EpbFDb", "9DmuLMNZETj", "iclr_2021_MP0LhG4YiiC", "iclr_2021_MP0LhG4YiiC", "iclr_2021_MP0LhG4YiiC" ]
iclr_2021_D62nJAdpijt
Trojans and Adversarial Examples: A Lethal Combination
In this work, we naturally unify adversarial examples and Trojan backdoors into a new stealthy attack, that is activated only when 1) adversarial perturbation is injected into the input examples and 2) a Trojan backdoor is used to poison the training process simultaneously. Different from traditional attacks, we levera...
withdrawn-rejected-submissions
The paper presents a new attack combining trojans (backdoor attacks) with adversarial examples. The new attack is triggered only if both a trojan and the respective adversarial perturbation are present. Experimental evaluation demonstrates that neither adversarial training (as a defense against adversarial examples) no...
train
[ "AreXZ8TlB6j", "jJvY7hgvpRM", "EWJe1k8X_ee", "LXI2jBHl3Hw", "DcVPGNuEH2S", "SVvrCTQfJam", "aFC9U1a7C9S", "58ZvjgIHDEw", "Pr1tq3egk1", "jTul1JBcNKX", "HtSy1c4JGcN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents a very strong combined attack method, where infected\ntraining examples are crafted such the the trojan backdoor becomes very\ndifficult to detect. I feel their approach to be relevant, informative\nand presents a significant advance.\n\nThe paper focuses on the mechanisms of cleverly disguisi...
[ 7, 5, 6, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ 4, 4, 5, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_D62nJAdpijt", "iclr_2021_D62nJAdpijt", "iclr_2021_D62nJAdpijt", "aFC9U1a7C9S", "SVvrCTQfJam", "Pr1tq3egk1", "EWJe1k8X_ee", "HtSy1c4JGcN", "AreXZ8TlB6j", "jJvY7hgvpRM", "iclr_2021_D62nJAdpijt" ]
iclr_2021_KVTkzgz3g8O
TraDE: A Simple Self-Attention-Based Density Estimator
We present TraDE, a self-attention-based architecture for auto-regressive density estimation with continuous and discrete valued data. Our model is trained using a penalized maximum likelihood objective, which ensures that samples from the density estimate resemble the training data distribution. The use of self-atte...
withdrawn-rejected-submissions
This work explores an auto-regressive density estimator based on transformer networks. The model is trained via MLE with an additional MMD regularization term. Various experiments are performed on small benchmarks and show good results on density estimation. It is great to see that such a simple model is indeed very e...
train
[ "7aZjpKHkh5Y", "cRacBAJ0jn4", "ivIjPAQ66dI", "ML4Eha6oCPG", "s3OOBZPsc-", "szzzvBb3C2d", "ztwBdPebNYP", "ApOA1mut5yo" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your feedback. Please see the main comment above that addresses common concerns.\n\n>There is not much novelty. Simply borrowing the Transformer architecture does not seem sufficient for an academic venue\n\nSee the common response to all reviewers above.\n\n> The additional evaluation tasks are not ...
[ -1, -1, -1, -1, -1, 3, 4, 5 ]
[ -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "ztwBdPebNYP", "szzzvBb3C2d", "7aZjpKHkh5Y", "ApOA1mut5yo", "iclr_2021_KVTkzgz3g8O", "iclr_2021_KVTkzgz3g8O", "iclr_2021_KVTkzgz3g8O", "iclr_2021_KVTkzgz3g8O" ]
iclr_2021_XbJiphOWXiU
Empirically Verifying Hypotheses Using Reinforcement Learning
This paper formulates hypothesis verification as an RL problem. Specifically, we aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which can help predict whether the hypothesis is true or false. Existing RL algorithms fail to solve this task, even ...
withdrawn-rejected-submissions
All the reviewers unanimously agree that the paper should be rejected. The main concern is well summarized by R1's comment "While the problem is interesting, I found the paper difficult to read as the task is ill-defined in section 3 where many notation definitions are missing and some notations are reused i...
train
[ "Up891neK2oM", "RxFdP9Covtx", "Cpmyeqq1Qmh", "CnzsB9TlOF", "a2kn9FoUxsp", "kP6j4Nk8qxN", "KYb4TOU3i3t", "FBUuUOUWiiY" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their time and feedback", "We thank the reviewer for their time and feedback", "We thank the reviewer for their time and feedback", "We thank the reviewer for their time and feedback", "\nThis paper presents the problem formulation for a hypothesis testing within a reinforcement l...
[ -1, -1, -1, -1, 3, 3, 5, 4 ]
[ -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "a2kn9FoUxsp", "kP6j4Nk8qxN", "KYb4TOU3i3t", "FBUuUOUWiiY", "iclr_2021_XbJiphOWXiU", "iclr_2021_XbJiphOWXiU", "iclr_2021_XbJiphOWXiU", "iclr_2021_XbJiphOWXiU" ]
iclr_2021_g75kUi1jAc_
WAFFLe: Weight Anonymized Factorization for Federated Learning
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices. In light of this need, federated learning has emerged as a popular training paradigm. However, many federated learning approaches trade transmitting dat...
withdrawn-rejected-submissions
This paper proposes an anonymization method for federated learning based on the Indian buffet process. The reviewers found the idea interesting, but raised the following main concerns (please see the reviews for more details): * Motivation and terminology needs clarification * Better comparison with secure aggregation ...
train
[ "DbndHdUpixA", "EgfGL_AVNJa", "ojZfWGI9b7w", "r-u06YVsri", "vF5GkyKu9F8", "c-t_fLBGheC" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the insightful comments. Before we address them individually, we’d like to encourage taking a step back and viewing our work holistically. We’ve proposed a completely new approach for federated learning, which allows for a principled Bayesian nonparametric approach to per-client personalization while...
[ -1, -1, -1, 5, 4, 6 ]
[ -1, -1, -1, 5, 4, 3 ]
[ "vF5GkyKu9F8", "r-u06YVsri", "c-t_fLBGheC", "iclr_2021_g75kUi1jAc_", "iclr_2021_g75kUi1jAc_", "iclr_2021_g75kUi1jAc_" ]
iclr_2021_E_U8Zvx7zrf
Delay-Tolerant Local SGD for Efficient Distributed Training
The heavy communication for model synchronization is a major bottleneck for scaling up the distributed deep neural network training to many workers. Moreover, model synchronization can suffer from long delays in scenarios such as federated learning and geo-distributed training. Thus, it is crucial that the distributed ...
withdrawn-rejected-submissions
No discussion or answers to the reviewers' concerns were offered by the authors. Given this, the consensus remains the same as the initial review status, and the AC's meta-review cannot provide any additional information. This leads to rejection.
train
[ "5u9L-cafLA8", "4agwuSfFuW", "ubgXw8cWvf", "XqkxGFgVSeF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a method, called OLCO3, to reduce the communication cost in distributed learning. Experiments on real datasets are used for evaluation. The paper is well written. \n\nThe main idea of OLCO3 is to combine many existing communication reduction methods, including pipelining, gradient compression a...
[ 5, 4, 5, 5 ]
[ 4, 4, 3, 4 ]
[ "iclr_2021_E_U8Zvx7zrf", "iclr_2021_E_U8Zvx7zrf", "iclr_2021_E_U8Zvx7zrf", "iclr_2021_E_U8Zvx7zrf" ]
iclr_2021_bBDlTR5eDIX
Predicting Video with VQVAE
In recent years, the task of video prediction---forecasting future video given past video frames---has attracted attention in the research community. In this paper we propose a novel approach to this problem with Vector Quantized Variational AutoEncoders (VQ-VAE). With VQ-VAE we compress high-resolution videos into a h...
withdrawn-rejected-submissions
While this paper was perceived as being fairly well written, the level of novelty and the evaluation were seen as weak by many reviewers. The aggregate opinion across reviewers is just too low to warrant an acceptance rating from the AC. The AC recommends rejection.
train
[ "ZybVqcjppc1", "W7cuK7g3jf9", "iA05aAcREQ", "gLZMyzn8P6X", "Wn2MHch6WQ", "VX1QinzTYW2", "KFv3xXht4fh", "cpD7XVfkk_7", "LJV2gzcG6fl", "GkkdxxEJ6E4" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThe paper proposes a modification of hierarchical VQ-VAE for video prediction task. To model temporal dependency, the encoder and pixelCNN in original VQ-VAE are extended with 3D convolutions. The proposed method is evaluated on a large-scale video dataset, Kinetics-600 dataset.\n \nPros\n- The paper is...
[ 4, 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_bBDlTR5eDIX", "iclr_2021_bBDlTR5eDIX", "GkkdxxEJ6E4", "LJV2gzcG6fl", "ZybVqcjppc1", "W7cuK7g3jf9", "cpD7XVfkk_7", "iclr_2021_bBDlTR5eDIX", "iclr_2021_bBDlTR5eDIX", "iclr_2021_bBDlTR5eDIX" ]
iclr_2021_kOA6rtPxyL
A Lazy Approach to Long-Horizon Gradient-Based Meta-Learning
Gradient-based meta-learning relates task-specific models to a meta-model by gradients. By this design, an algorithm first optimizes the task-specific models by an inner loop and then backpropagates meta-gradients through the loop to update the meta-model. The number of inner-loop optimization steps has to be small (e....
withdrawn-rejected-submissions
This paper presents a variant of MAML or Reptile, where the meta-update along the long trajectory of the inner-loop optimization is bypassed to reduce the computational overhead appeared in MAML. The main idea is to use the look-ahed optimizer with careful tuning of relevant hyperparameters, which is done by a teacher-...
train
[ "ql_XFK1EwJY", "yBLAzVMHoCd", "5gHpCMnnzWd", "IrCGJczn5j", "ly2f_YYq7R", "jVB_JazlDEn", "HkmeVtPZQAQ", "FG-L_hemg0l" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the detailed review and constructive feedback! Please see our responses to each of the “Cons” bullet points as follows. \n\n* The intuition is to use lookahead to build a bridge between a task-specific model's initialization and its endpoints after a long-horizon exploration. Unlike the original look...
[ -1, -1, -1, -1, 5, 7, 5, 4 ]
[ -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "jVB_JazlDEn", "ly2f_YYq7R", "HkmeVtPZQAQ", "FG-L_hemg0l", "iclr_2021_kOA6rtPxyL", "iclr_2021_kOA6rtPxyL", "iclr_2021_kOA6rtPxyL", "iclr_2021_kOA6rtPxyL" ]
iclr_2021_Kao09W-oe8
Channel-Directed Gradients for Optimization of Convolutional Neural Networks
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error. The method requires only simple processing of existing stochastic gradients, can be used in conjunction with any optimizer, and has only a linear overhea...
withdrawn-rejected-submissions
The paper begins with an observation in standard trained CNNs that the correlations in the output channels are high. Building upon this the paper proposes a new "optimizer" which modifies the gradients to encourage corelations among output channels. They provide a theoretical foundation for the method, by deriving the ...
test
[ "UZ549vCIvqK", "IgvAvss33hn", "9bBbW3x9XK4", "yjNShMNRgqs", "m-A7hLDAJbJ", "gmS3g34bhC0", "J_i12EcF-W-", "oRNFLFKL_Tp", "RbdG5X-I87E", "y3Ogy-_SdV", "1W_YMuIIrko" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an optimization method for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error. The method computes the gradient of the loss function with respect to output-channel directed re-weighted $H^0$ or Sobolev metrics. \n\nTh...
[ 6, 5, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ 1, 5, -1, -1, -1, -1, -1, -1, 3, 3, 2 ]
[ "iclr_2021_Kao09W-oe8", "iclr_2021_Kao09W-oe8", "iclr_2021_Kao09W-oe8", "UZ549vCIvqK", "1W_YMuIIrko", "RbdG5X-I87E", "y3Ogy-_SdV", "IgvAvss33hn", "iclr_2021_Kao09W-oe8", "iclr_2021_Kao09W-oe8", "iclr_2021_Kao09W-oe8" ]
iclr_2021_JNtw9rUJnV
Real-Time AutoML
We present a new zero-shot approach to automated machine learning (AutoML) that predicts a high-quality model for a supervised learning task and dataset in real-time without fitting a single model. In contrast, most AutoML systems require tens or hundreds of model evaluations. Hence our approach accelerates AutoML by o...
withdrawn-rejected-submissions
The paper presents an algorithm for real-time auto-ML based on zero-shot learning, which matches an ML pipeline to a dataset via the meta-features of the pipeline and the dataset. It aims to address an important problem, and the idea of the proposed solution seems interesting. However, there are several issues with th...
train
[ "UyVjf_XOk", "egPsFurA02", "LyBF1WL702t", "5McocLMV5b_", "C58D6vPrEGA", "bbjTf5SLlqM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their time and are working on an improved system and paper.\n\n1. Transformers: this work is the first to use Transformers in AutoML, as a follow-up to the workshop paper on AutoML language embeddings (2019) and symposium paper on Zero-Shot AutoML (2020). \n\n2. Privileged AutoML: Curren...
[ -1, 4, 4, 2, 4, 4 ]
[ -1, 5, 4, 5, 4, 4 ]
[ "iclr_2021_JNtw9rUJnV", "iclr_2021_JNtw9rUJnV", "iclr_2021_JNtw9rUJnV", "iclr_2021_JNtw9rUJnV", "iclr_2021_JNtw9rUJnV", "iclr_2021_JNtw9rUJnV" ]
iclr_2021_6X_32jLUaDg
Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection
Self-driving cars must detect other vehicles and pedestrians in 3D to plan safe routes and avoid collisions. State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies, causing them to fail in new environments---a serious problem...
withdrawn-rejected-submissions
This paper proposed an unsupervised domain adaptation method for 3D lidar-based object detection. Four reviewers provided detailed reviews: 3 rated “Marginally above acceptance threshold”, and 1 rated “Ok but not good enough - rejection”. The reviewers appreciated simple yet effective idea, the well motivated method, t...
test
[ "UwKX6NnsE8T", "J2YL_uieuC", "UjP9YsAfOS7", "LyhDGmBEBaz", "ZC5ikd9EReJ", "K8hGAJeQDPE", "4Elz5dYAIDi", "m-o-DYrT-M", "O3f3o1jhA9" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The topic of adapting 3d object detectors to new domains is important. The paper clearly motivates the problem, clearly presents the methods and shows detailed experiments. I really enjoyed reading the paper. \n\nMy main concern is that the two components of the method (self-training with pseudo labels and genera...
[ 6, 6, -1, -1, -1, -1, -1, 6, 4 ]
[ 5, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_6X_32jLUaDg", "iclr_2021_6X_32jLUaDg", "iclr_2021_6X_32jLUaDg", "m-o-DYrT-M", "UwKX6NnsE8T", "O3f3o1jhA9", "J2YL_uieuC", "iclr_2021_6X_32jLUaDg", "iclr_2021_6X_32jLUaDg" ]
iclr_2021_QTgP9nKmMPM
Decoupled Greedy Learning of Graph Neural Networks
Graph Neural Networks (GNNs) become very popular for graph-related applications due to their superior performance. However, they have been shown to be computationally expensive in large scale settings, because their produced node embeddings have to be computed recursively, which scales exponentially with the number of ...
withdrawn-rejected-submissions
In this paper, the authors propose a new layer-by-layer training approach for GNN in particular for a large graph. The proposed approach can be easily parallelizable and scale well to a large graph. Reviewers are concerned about the novelty of the approach and the lack of theoretical analysis, and it is not well addres...
train
[ "y1hj9xGPkUp", "pLdv6m3K2L9", "Zv1RL6jC2ok", "ZiaE-nnbwtm", "afneHE8l3Dl", "-tcydzrV9IA" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments. We address your questions and concerns in the following:\n\n1. First of all, we'd like to reclaim our key contributions and novelty. In our work, we aim at designing algorithms to parallelize layer-wise training on GNNs to further enhance its efficiency, which would be extremely critic...
[ -1, -1, -1, 6, 4, 4 ]
[ -1, -1, -1, 5, 4, 4 ]
[ "afneHE8l3Dl", "-tcydzrV9IA", "ZiaE-nnbwtm", "iclr_2021_QTgP9nKmMPM", "iclr_2021_QTgP9nKmMPM", "iclr_2021_QTgP9nKmMPM" ]
iclr_2021_TWDczblpqE
Semi-Supervised Audio Representation Learning for Modeling Beehive Strengths
Honey bees are critical to our ecosystem and food security as a pollinator, contributing 35% of our global agriculture yield. In spite of their importance, beekeeping is exclusively dependent on human labor and experience-derived heuristics, while requiring frequent human checkups to ensure the colony is healthy, which...
withdrawn-rejected-submissions
All Reviewers and myself believe that ICLR may not be the right venue for this paper. Hence, my recommendation is to REJECT it. As a brief summary, I highlight below some pros and cons that arose during the review and meta-review processes. Pros: - Important domain, but out of scope of ICLR. - Collection of sensor and...
train
[ "CLn6C1kWcfn", "vg5PVIvNHWS", "O4AB40ZN5d1", "mc9t_77OIKM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We wanted to thank all reviewers for reading our paper and their comments, especially related to presentation. We acknowledge it’s difficult to assess works at the boundary of two very different fields, and it was also very difficult when we made the decision to submit this to ICLR.\n\nIn responding to relevance: ...
[ -1, 4, 3, 5 ]
[ -1, 5, 4, 3 ]
[ "iclr_2021_TWDczblpqE", "iclr_2021_TWDczblpqE", "iclr_2021_TWDczblpqE", "iclr_2021_TWDczblpqE" ]
iclr_2021_SeFiP8YAJy
Better Together: Resnet-50 accuracy with 13× fewer parameters and at 3× speed
Recent research on compressing deep neural networks has focused on reducing the number of parameters. Smaller networks are easier to export and deploy on edge-devices. We introduce Adjoined networks as a training approach that can regularize and compress any CNN-based neural architecture. Our one-shot learning paradigm...
withdrawn-rejected-submissions
The authors proposed to train a large network and a small network simultaneously with a new loss function. The parameters are shared between the two networks, and the loss also incorporates the KL-divergence between the outputs of the two models. In this way, the authors claim that one can train a small network with si...
train
[ "Sq-o8X2LRmJ", "Tmt1ho1_rkh", "k0nNoa8cY_G", "5nkrIRsEltx", "j9znrBbMtTc", "oioiEvLqx4m", "wDUbpxnubD0", "KckNCqCCjBJ", "HBtu9YFg0Qa", "rIorrIDzV7U" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors present a very interesting idea of training a large network and a small network simultaneously with an interesting new loss function. The authors show that this can lead to a much smaller network with good accuracy (compared to the original network); thus this may be a good technique for sparsification...
[ 4, -1, -1, -1, -1, -1, 6, 4, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "iclr_2021_SeFiP8YAJy", "wDUbpxnubD0", "KckNCqCCjBJ", "HBtu9YFg0Qa", "rIorrIDzV7U", "Sq-o8X2LRmJ", "iclr_2021_SeFiP8YAJy", "iclr_2021_SeFiP8YAJy", "iclr_2021_SeFiP8YAJy", "iclr_2021_SeFiP8YAJy" ]
iclr_2021_8EGmvcCVrmZ
Deep Learning is Singular, and That's Good
In singular models, the optimal set of parameters forms an analytic set with singularities and classical statistical inference cannot be applied to such models. This is significant for deep learning as neural networks are singular and thus ``dividing" by the determinant of the Hessian or employing the Laplace approxima...
withdrawn-rejected-submissions
The paper proposes to introduce ideas from singular theory to deep learning. All reviewers agree that the work is not yet ready for publication. The key issue seems to boil down to the fact that the paper does not propose nor verify any clearly motivated scientific hypothesis. Relatedly, the work includes many too broa...
test
[ "8mkwAlESXcs", "anxo7ZRHSN", "5iKlER1amIs", "vksM4G2kO_A", "XBy1AjYYhaA", "abyT4sGG61Q", "CgWmojHjqXf", "ioMbxZGoJrB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are glad to hear that Reviewer 2 enjoyed the pedagogical aspect of our submission. \n\nRegarding the comments on the effective number of parameters. There are no assumptions on the model here: the calculation is meant only to exhibit that in regular models and a slightly larger class (what we term the minimally...
[ -1, -1, -1, -1, 4, 4, 4, 5 ]
[ -1, -1, -1, -1, 1, 3, 5, 3 ]
[ "abyT4sGG61Q", "XBy1AjYYhaA", "CgWmojHjqXf", "ioMbxZGoJrB", "iclr_2021_8EGmvcCVrmZ", "iclr_2021_8EGmvcCVrmZ", "iclr_2021_8EGmvcCVrmZ", "iclr_2021_8EGmvcCVrmZ" ]
iclr_2021_ry8_g12nVD
SEMANTIC APPROACH TO AGENT ROUTING USING A HYBRID ATTRIBUTE-BASED RECOMMENDER SYSTEM
Traditionally contact centers route an issue to an agent based on ticket load or skill of the agent. When a ticket comes into the system, it is either manually analyzed and pushed to an agent or automatically routed to an agent based on some business rules. A Customer Relationship Management (CRM) system often has pred...
withdrawn-rejected-submissions
Does not seem to be a complete submission (only one page); all reviewers agree on rejecting.
train
[ "UoYuz-hpxxt", "sdzlugc2grb", "gppyPgU9-vg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "* First of all, the submission is not following the style.\n\n* The paper only has one page.\n\n* The proposed method is not novel, and it is not described clearly. \n\n* Lack of experiments and comparisons. Similarly, a one-page paper will have a very hard time to deliver enough story. There is only one table in ...
[ 2, 2, 3 ]
[ 5, 5, 5 ]
[ "iclr_2021_ry8_g12nVD", "iclr_2021_ry8_g12nVD", "iclr_2021_ry8_g12nVD" ]
iclr_2021_q_kZm9eHIeD
Entropic Risk-Sensitive Reinforcement Learning: A Meta Regret Framework with Function Approximation
We study risk-sensitive reinforcement learning with the entropic risk measure and function approximation. We consider the finite-horizon episodic MDP setting, and propose a meta algorithm based on value iteration. We then derive two algorithms for linear and general function approximation, namely RSVI.L and RSVI.G, res...
withdrawn-rejected-submissions
The paper considers risk-sensitive RL by exploiting entropic risk. The major contribution of this paper is providing the theoretical guarantees for the proposed risk-sensitive value iteration with function approximation. The major concern of this paper is the similarity to the existing work in (Fei et al., 2020)....
train
[ "mUG_ACLpZ03", "_IEdnbqvrai", "tYZGEkcptvX", "vTEgPFFptw0", "qBU_lRRob1b", "eZsEJMdyE1Y", "IomtRgA96OG", "-XQs8jst0KD" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your positive feedback. We would like to illustrate the following new insights of our work compared to existing works.\n\n*) Linear function approximation: please see our response to AnonReviewer1.\n\n*) General function approximation: our Algorithm 3 is indeed inspired by [Russo and Van Roy (2014...
[ -1, -1, -1, -1, 6, 5, 4, 5 ]
[ -1, -1, -1, -1, 3, 3, 4, 2 ]
[ "qBU_lRRob1b", "IomtRgA96OG", "eZsEJMdyE1Y", "-XQs8jst0KD", "iclr_2021_q_kZm9eHIeD", "iclr_2021_q_kZm9eHIeD", "iclr_2021_q_kZm9eHIeD", "iclr_2021_q_kZm9eHIeD" ]
iclr_2021_L4n9FPoQL1
Classify and Generate Reciprocally: Simultaneous Positive-Unlabelled Learning and Conditional Generation with Extra Data
The scarcity of class-labeled data is a ubiquitous bottleneck in a wide range of machine learning problems. While abundant unlabeled data normally exist and provide a potential solution, it is extremely challenging to exploit them. In this paper, we address this problem by leveraging Positive-Unlabeled~(PU) classificat...
withdrawn-rejected-submissions
The paper studies the problem of leveraging Positive-Unlabeled~(PU) classification and conditional generation with extra unlabeled data simultaneously in one learning framework. Some major review concerns on the weaknesses include limited novel technical contributions, poor presentation and weak experimental results (e...
train
[ "TBfp1faS0M", "iGdwlLmpBhP", "S5DEkNYsj7v", "wGNWKS1ud0t", "SrTVWPnBVKd", "YUfm2Oq0UuW" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper proposed the combination of two techniques for improved learning with unlabelled data: 1) Positive-Unlabelled (PU) classifier, and 2) class-conditional GAN (cGAN). The idea is that the PU classifier can help produce more accurate pseudo labels for training of a cGAN, and with the improved cGA...
[ 5, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, 4, 4 ]
[ "iclr_2021_L4n9FPoQL1", "SrTVWPnBVKd", "TBfp1faS0M", "YUfm2Oq0UuW", "iclr_2021_L4n9FPoQL1", "iclr_2021_L4n9FPoQL1" ]
iclr_2021_UFJOP5w0kV
BiGCN: A Bi-directional Low-Pass Filtering Graph Neural Network
Graph convolutional networks have achieved great success on graph-structured data. Many graph convolutional networks can be regarded as low-pass filters for graph signals. In this paper, we propose a new model, BiGCN, which represents a graph neural network as a bi-directional low-pass filter. Specifically, we not only...
withdrawn-rejected-submissions
From the positive side the problem addressed by the paper could be of potential interest in the case there is noise in the features associated to each node of the graph. The paper is mostly well written and clear. The proposed approach is based on solid mathematical grounds. On the other hand there are concerns about:...
train
[ "ggvIlfI6FxW", "vVNWlZ9iCPU", "dy7QOPOmIC0", "dXEcglU_UC4", "gGSNR0ao6ss", "mSc0lRXx5kb", "lPdDxyUlEAI", "kTyHp3k0HE" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the helpful and constructive comments. We respond to each question as follows: \n\n**Q1.The convergence of this algorithm with Taylor approximations is not provided in this paper.**\n\nANS: In this paper, using ADMM to solve the convex optimization problem (4) is just a motivation to desi...
[ -1, -1, -1, -1, 5, 4, 6, 5 ]
[ -1, -1, -1, -1, 3, 5, 3, 3 ]
[ "gGSNR0ao6ss", "mSc0lRXx5kb", "lPdDxyUlEAI", "kTyHp3k0HE", "iclr_2021_UFJOP5w0kV", "iclr_2021_UFJOP5w0kV", "iclr_2021_UFJOP5w0kV", "iclr_2021_UFJOP5w0kV" ]
iclr_2021_npOuXc85I5k
Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness
Adversarial robustness, mainly including sensitivity-based robustness and spatial robustness, plays an integral part in the robust generalization. In this paper, we endeavor to design strategies to achieve comprehensive adversarial robustness. To hit this target, firstly we investigate the less-studied spatial robustne...
withdrawn-rejected-submissions
This paper aims to address the robustness issues by considering natural accuracy, sensitivity-based robustness and spatial robustness at the same. However, the reviewers pointed out that many things, like the expriment, the presentation, the algorithm, are not clear. In addition, the technique part is weak and below th...
train
[ "sHJwXmsJlLl", "O7_DeZRFlWw", "R71NVA18wTo", "aHUIzYGTJ1L", "tfi71eNGLNc", "zffltdlUk0x" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Here is our clarification.\n\n$\\textbf{1. Experiments (1). }$ \nThe different tendency of the two spatial robustness is clear, even though legends may overlap with part of curves. We will improve the presentation of Figure 3 in the revised version.\n\n$\\textbf{2. Experiments (2). }$ \nAs stated in Section 3.1 ...
[ -1, -1, -1, 5, 3, 6 ]
[ -1, -1, -1, 3, 4, 4 ]
[ "aHUIzYGTJ1L", "tfi71eNGLNc", "zffltdlUk0x", "iclr_2021_npOuXc85I5k", "iclr_2021_npOuXc85I5k", "iclr_2021_npOuXc85I5k" ]
iclr_2021_4YzI0KpRQtZ
Streaming Probabilistic Deep Tensor Factorization
Despite the success of existing tensor factorization methods, most of them conduct a multilinear decomposition, and rarely exploit powerful modeling frameworks, like deep neural networks, to capture a variety of complicated interactions in data. More important, for highly expressive, deep factorization, we lack an effe...
withdrawn-rejected-submissions
The paper proposes a Bayesian neural network model for tensor factorization, with particular focus on streaming data. The key contribution is the streaming posterior inference of the deep TF models. The combinations of online tensor factorization, Bayesian NN with sparsity priors, posterior inference is new and intere...
train
[ "2b7CWUjFnpx", "lewXU7bGuDM", "V7AlN2kkDGj", "rDzkavGn3qj", "JYZhiPDZiob", "852_h7FPJPK", "4pd_FmZOHdR", "F7kOQSi1Hb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "##########################\n\nAfter author feedback:\nThanks for the detailed feedback from the authors. Most of my concerns have been addressed and I will keep my scores unchanged. Please add the additional information in the feedback to the final version.\n\n##########################\n\nSummary:\n\nThis paper p...
[ 6, 5, 6, -1, -1, -1, -1, 5 ]
[ 3, 3, 2, -1, -1, -1, -1, 4 ]
[ "iclr_2021_4YzI0KpRQtZ", "iclr_2021_4YzI0KpRQtZ", "iclr_2021_4YzI0KpRQtZ", "V7AlN2kkDGj", "F7kOQSi1Hb", "2b7CWUjFnpx", "lewXU7bGuDM", "iclr_2021_4YzI0KpRQtZ" ]
iclr_2021_fkhl7lb3aw
ROGA: Random Over-sampling Based on Genetic Algorithm
When using machine learning to solve practical tasks, we often face the problem of class imbalance. Unbalanced classes will cause the model to generate preferences during the learning process, thereby ignoring classes with fewer samples. The oversampling algorithm achieves the purpose of balancing the difference in qua...
withdrawn-rejected-submissions
This paper proposes to address the class imbalance problem by defining an over-sampling strategy based on a genetic algorithm. It brings potentially interesting ideas. The reviewers agree on the fact that the experiments are limited, some methodological aspects require some clarifications and the writing needs to be improved....
train
[ "DIqx0CBv6u", "zsu2zpJ22ub", "BX_fFQenzjj", "5lAOmgLxN6n" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors tackle the problem of class imbalance in supervised learning. They propose an oversampling algorithm that is based on a genetic algorithm. Samples are iteratively generated and filtered to maintain informative samples. The authors demonstrate the efficacy of the approach using several examples.\n\nAlth...
[ 3, 5, 3, 4 ]
[ 5, 4, 4, 3 ]
[ "iclr_2021_fkhl7lb3aw", "iclr_2021_fkhl7lb3aw", "iclr_2021_fkhl7lb3aw", "iclr_2021_fkhl7lb3aw" ]
iclr_2021_85d8bg9RvDT
Deep Retrieval: An End-to-End Structure Model for Large-Scale Recommendations
One of the core problems in large-scale recommendations is to retrieve top relevant candidates accurately and efficiently, preferably in sub-linear time. Previous approaches are mostly based on a two-step procedure: first learn an inner-product model and then use maximum inner product search (MIPS) algorithms to search...
withdrawn-rejected-submissions
The introduced method is novel and interesting. However, as pointed in the reviews the paper misses several important references. The authors should extend their discussion on related work by methods from both recommender systems and extreme classification. Besides the papers listed by the reviewers, the introduced me...
train
[ "km7j-UZkjHj", "6AGcmYEdEMq", "MiMY80LxWL1", "Y9SPZBTcrDA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "##########################################################################\nSummary: \n\nThis paper presents an end-to-end deep retrieval method for recommendation. The model encodes all candidates into a discrete latent space, and learns the latent space parameters alongside the other neural network parameters. R...
[ 3, 5, 4, 4 ]
[ 5, 3, 3, 4 ]
[ "iclr_2021_85d8bg9RvDT", "iclr_2021_85d8bg9RvDT", "iclr_2021_85d8bg9RvDT", "iclr_2021_85d8bg9RvDT" ]
iclr_2021_V3o2w-jDeT5
Multi-Source Unsupervised Hyperparameter Optimization
How can we conduct efficient hyperparameter optimization for a completely new task? In this work, we consider a novel setting, where we search for the optimal hyperparameters for a target task of interest using only unlabeled target task and ‘somewhat relevant’ source task datasets. In this setting, it is essential to ...
withdrawn-rejected-submissions
The paper has been actively discussed in the light of the authors’ response. Even though the paper was, overall, found quite clear with a solid theoretical support, the reviewers listed several concerns that remained unsolved after the rebuttal, e.g., * The proposed approach may not be properly scoped/positioned and e...
train
[ "y4DJCew5DUk", "yqvvuDh_sSP", "uaML968sftJ", "TuNSXahtDaS", "XcXuPEn6Ehf", "2sU5ugoh31P", "r_te5WZ5yZF", "1KbbrJv3JMz", "bT9_nAK8G2I", "ytpXKyoxcSs", "hXwEejlzBsM" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Summary**\nIn the situation where a given objective is computed with samples from a distribution, e.g. loss on validation data in hyperparameter optimization, this paper proposes a method to construct a surrogate objective using objectives computed on sets of samples each of which is from a different distributio...
[ 6, -1, -1, -1, -1, -1, -1, -1, 5, 6, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_V3o2w-jDeT5", "r_te5WZ5yZF", "TuNSXahtDaS", "1KbbrJv3JMz", "bT9_nAK8G2I", "y4DJCew5DUk", "ytpXKyoxcSs", "hXwEejlzBsM", "iclr_2021_V3o2w-jDeT5", "iclr_2021_V3o2w-jDeT5", "iclr_2021_V3o2w-jDeT5" ]
iclr_2021_xPw-dr5t1RH
KETG: A Knowledge Enhanced Text Generation Framework
Embedding logical knowledge information into text generation is a challenging NLP task. In this paper, we propose a knowledge enhanced text generation (KETG) framework, which incorporates both the knowledge and associated text corpus to address logicality and diversity in text generation. Specifically, we validate our ...
withdrawn-rejected-submissions
While the paper studies an interesting and important problem, namely the language generation, it is poorly written, which makes it difficult to judge its value. The reviewers also expressed concern over the scope of the evaluation and the lack of comparison to SOTA.
train
[ "1XlQLytH7ne", "HreLAgXYAc", "9u8w8Uzy57", "IwZb-tyRyKo" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is very poorly written. This is in part from English as a second language I suspect, but even assuming some mentorship on the writing, there are too many flaws to consider accepting this paper. The modelling work is so poorly described I can't really comment on it. The majority of my review will just att...
[ 2, 3, 2, 2 ]
[ 4, 4, 5, 4 ]
[ "iclr_2021_xPw-dr5t1RH", "iclr_2021_xPw-dr5t1RH", "iclr_2021_xPw-dr5t1RH", "iclr_2021_xPw-dr5t1RH" ]
iclr_2021_3nSU-sDEOG9
Empirical Sufficiency Featuring Reward Delay Calibration
Appropriate credit assignment for delay rewards is a fundamental challenge in various deep reinforcement learning tasks. To tackle this problem, we introduce a delay reward calibration paradigm inspired from a classification perspective. We hypothesize that when an agent's behavior satisfies an equivalent sufficient co...
withdrawn-rejected-submissions
This work tackles sparse or delayed reward problem in reinforcement learning. The key idea is to build a classifier to detect states that will lead to high rewards in the future and provide a bonus to those states. All the reviewers liked the idea but had issues with the execution of empirical results. The approach is ...
train
[ "p_rl4EMhhrr", "I03T1Lnb8mH", "7mlbAd5KO2D", "lansuKfruF", "3qgZ6P23otS", "OTWW0fFNwKQ", "9yrgrtP7zCM", "7FDivoS_zs1" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the comments. We would try to improve the experiments and theoretical explanation.\n\nQ1. The approach presented in this paper seems to be ad-hoc.\n\nExisting evaluation methodologies rely on the Bellman equation in-depth, however dynamic programming is not always the best way for all situations, e.g. A...
[ -1, -1, -1, -1, 5, 4, 4, 4 ]
[ -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "7FDivoS_zs1", "3qgZ6P23otS", "OTWW0fFNwKQ", "9yrgrtP7zCM", "iclr_2021_3nSU-sDEOG9", "iclr_2021_3nSU-sDEOG9", "iclr_2021_3nSU-sDEOG9", "iclr_2021_3nSU-sDEOG9" ]
iclr_2021_fGiKxvF-eub
Oblivious Sketching-based Central Path Method for Solving Linear Programming Problems
In this work, we propose a sketching-based central path method for solving linear programmings, whose running time matches the state of art results [Cohen, Lee, Song STOC 19; Lee, Song, Zhang COLT 19]. Our method opens up the iterations of the central path method and deploys an "iterate and sketch" approach towards the...
withdrawn-rejected-submissions
The reviewers found it hard to understand the motivation of using both oblivious sketching and maintaining feasibility throughout the course of the algorithm, given that the ultimate running times matched those of existing work. Because there wasn't a concrete improvement over prior work, the worry is what the impact o...
train
[ "PXkP9m_SETP", "mNhgOVzR5ZR", "nbnZWlIwVdM", "d5CDgLyvkmb", "zPuQB0cYPm9", "zFDzFUgKJeG", "AovnThoworM", "U8onvvhzfkS", "e-tz6KX24X8", "NYAOdBTPbT1" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper studies the problem of solving LP and more generally convex programming via sketching based approaches. In particular, the running time of proposed algorithm in this paper matches the running time of the best known algorithms [Cohen et al(19b) and Lee et al(19)]. However, this paper provides some further...
[ 7, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_fGiKxvF-eub", "d5CDgLyvkmb", "U8onvvhzfkS", "zFDzFUgKJeG", "NYAOdBTPbT1", "e-tz6KX24X8", "PXkP9m_SETP", "iclr_2021_fGiKxvF-eub", "iclr_2021_fGiKxvF-eub", "iclr_2021_fGiKxvF-eub" ]
iclr_2021_kKwFlM32HV5
Natural World Distribution via Adaptive Confusion Energy Regularization
We introduce a novel and adaptive batch-wise regularization based on the proposed Batch Confusion Norm (BCN) to flexibly address the natural world distribution which usually involves fine-grained and long-tailed properties at the same time. The Fine-Grained Visual Classification (FGVC) problem is notably characterized ...
withdrawn-rejected-submissions
The authors address the problem of fine-grained image classification. They propose a batch based regularizer, called the batch confusion norm (BCN), to encourage less over confident predictions. They also tackle the problem of class imbalance during training by adaptively weighting the BCN loss at the class level to ta...
train
[ "hdf6afKr37C", "sZSAJK-70iM", "pjmhng2rBNg", "YdlHNuI9-pm", "7O5zRWYL6rD", "GkhzrehohLy", "evX4kdIRf8r", "NUSZc-uBoOC" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[Q1]. Although the proposed method is reasonable, some specific model designs are not quite clear. 1) Regarding Eq. (2), the reason why it is necessary to optimize the ranking should be further explained and its motivation needs to be stated. 2) Regarding Eq. (5), what is the intuition behind the adaptive matrix (i.e., ($log_{\\...
[ -1, -1, -1, -1, 4, 5, 4, 5 ]
[ -1, -1, -1, -1, 3, 4, 5, 5 ]
[ "NUSZc-uBoOC", "evX4kdIRf8r", "7O5zRWYL6rD", "GkhzrehohLy", "iclr_2021_kKwFlM32HV5", "iclr_2021_kKwFlM32HV5", "iclr_2021_kKwFlM32HV5", "iclr_2021_kKwFlM32HV5" ]
iclr_2021_QpU7n-6l0n
On the Consistency Loss for Leveraging Augmented Data to Learn Robust and Invariant Representations
Data augmentation is one of the most popular techniques for improving the robustness of neural networks. In addition to directly training the model with original samples and augmented samples, a torrent of methods regularizing the distance between embeddings/representations of the original samples and their augmented c...
withdrawn-rejected-submissions
The reviewers brought up many important concerns about this paper. On the positive side, the understanding of data augmentation is an important topic in deep learning, having good theoretical results is interesting here, and the experiments seem to do an okay job of backing up the theory. On the negative side, present...
train
[ "OEyxlcBOMg", "5Hw9RkW9YJW", "etZcY3Bc7GD", "bilYNdPlFpF", "IaoRcxhxsu-", "2hyBUKIwKnp", "TMprEtIwUnZ", "n4rdc02iBpk" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the response and the openness to upgrading the evaluation. We are glad that it seems the only disagreement is about the generalization ability on extrapolations. Below, we will respond point to point. \n\n* Unfortunately, the first point made by the reviewer is not complete. We are not sure what [1, ...
[ -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "5Hw9RkW9YJW", "IaoRcxhxsu-", "2hyBUKIwKnp", "n4rdc02iBpk", "TMprEtIwUnZ", "iclr_2021_QpU7n-6l0n", "iclr_2021_QpU7n-6l0n", "iclr_2021_QpU7n-6l0n" ]
iclr_2021_o21sjfFaU1
Learning Robust Models by Countering Spurious Correlations
Machine learning has demonstrated remarkable prediction accuracy over i.i.d data, but the accuracy often drops when tested with data from another distribution. One reason behind this accuracy drop is the reliance of models on the features that are only associated with the label in the training distribution, but not the...
withdrawn-rejected-submissions
Reviewers raised concerns about the paper's clarity (interchangeable use of subtly different terms, notation, typos), and how realistic/practical certain assumptions are. The authors are encouraged to incorporate the reviewers' detailed comments for a future submission.
train
[ "1S2A1U_Gfuo", "lVMafYBqZF", "a5j2J3Td1g", "_O-Ua6b7_e", "-GLfXdj_e6u" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers, \n\nThank you very much for the assessments and constructive comments, we appreciate the recognition of the importance of the problem this paper focuses on. We are also grateful for the constructive comments, especially the requirements for further clarifications of certain points. We will continue...
[ -1, 3, 5, 6, 4 ]
[ -1, 4, 3, 4, 2 ]
[ "iclr_2021_o21sjfFaU1", "iclr_2021_o21sjfFaU1", "iclr_2021_o21sjfFaU1", "iclr_2021_o21sjfFaU1", "iclr_2021_o21sjfFaU1" ]
iclr_2021_G70Z8ds32C9
Deep Networks from the Principle of Rate Reduction
This work attempts to interpret modern deep (convolutional) networks from the principles of rate reduction and (shift) invariant classification. We show that the basic iterative gradient ascent scheme for maximizing the rate reduction of learned features naturally leads to a deep network, one iteration per layer. The a...
withdrawn-rejected-submissions
This paper received borderline scores, which makes for a difficult recommendation. Unfortunately, two of the reviews were too short and thus were of limited use in forming a recommendation. That includes the high-scoring one, which did not adequately substantiate its score. There is much to admire in this submission. ...
val
[ "PreuNpBJaBq", "lfZzHF5AlB", "vI3d_epy1Ts", "WFqoD2hbr8-", "1m8Zve13L1a", "J6sSRSh8jXE", "YSNHVO_4pog", "YJQKvU6raoi", "-yjS_GOveTW", "bkvQYMvLovN", "JDD9Aw8R5r-", "QXKDL_JOG1W", "6HSxwFexCRm", "s8-yFQbmGr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In their submission the authors discuss an alternative learning rule to backpropagation called “maximal coding rate reduction” that was introduced recently in Yu et al. 2020. As far as I could tell, as a non-expert, maximizing the coding rate reduction objective encourages inputs from different classes to be maxima...
[ 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 6, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "iclr_2021_G70Z8ds32C9", "iclr_2021_G70Z8ds32C9", "QXKDL_JOG1W", "1m8Zve13L1a", "YJQKvU6raoi", "lfZzHF5AlB", "lfZzHF5AlB", "PreuNpBJaBq", "PreuNpBJaBq", "s8-yFQbmGr", "6HSxwFexCRm", "iclr_2021_G70Z8ds32C9", "iclr_2021_G70Z8ds32C9", "iclr_2021_G70Z8ds32C9" ]
iclr_2021_Y4SOA2qsYJS
MCM-aware Twin-least-square GAN for Hyperspectral Anomaly Detection
Hyperspectral anomaly detection under high-dimensional data and interference of deteriorated bands without any prior information has been challenging and attracted close attention in the exploration of the unknown in real scenarios. However, some emerging methods based on generative adversarial network (GAN) suffer fro...
withdrawn-rejected-submissions
In this paper, the authors propose a MCM-aware twin-least-squares GAN (MTGAN) model for hyperspectral anomaly detection. The proposed method is somewhat novel, and the efficacy of the proposed method is validated through experiments. However, the clarity of the paper is low, and the explanation of some formulas is not ...
train
[ "3icTXc16FuK", "n8THYslpQwq", "J6H5XdZKupR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes an MCM-aware twin-least-squares GAN (MTGAN) model for hyperspectral anomaly detection. MTGAN introduces the MCM-aware strategy to construct multi-scale priors with precise second-order statistics, imposes the twin-least-square loss on the GAN, and enforces a new anomaly rejection loss to establish a pur...
[ 4, 5, 5 ]
[ 4, 4, 4 ]
[ "iclr_2021_Y4SOA2qsYJS", "iclr_2021_Y4SOA2qsYJS", "iclr_2021_Y4SOA2qsYJS" ]
iclr_2021_d_Ue2glvcY8
Structure Controllable Text Generation
Controlling the presented forms (or structures) of generated text is as important as controlling the generated contents during neural text generation. It helps to reduce the uncertainty and improve the interpretability of generated text. However, the structures and contents are entangled together and realized simultan...
withdrawn-rejected-submissions
This paper proposes a controllable text generation model conditioned on desired structures, converting a text into structure information such as part of speech (POS) and participial construction (PC). It proposes a “Structure Aware Transformer” (SAT) to generate text and claims better PPL and BLEU compared with GPT-2....
train
[ "27Qv8GUONiK", "W3-NBJteV86", "T4qv5ktf3g1", "vlVBbgRebL_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Review of Paper: Structure Controllable Text Generation\n\nSummary:\nThis paper proposes a structure-aware Transformer (SAT) by incorporating multiple types of multi-granularity structure information to control the text generation. Their method can extract structure information given sequence template by auxiliary...
[ 3, 2, 2, 5 ]
[ 4, 5, 5, 4 ]
[ "iclr_2021_d_Ue2glvcY8", "iclr_2021_d_Ue2glvcY8", "iclr_2021_d_Ue2glvcY8", "iclr_2021_d_Ue2glvcY8" ]
iclr_2021_YZrQKLHFhv3
MixSize: Training Convnets With Mixed Image Sizes for Improved Accuracy, Speed and Scale Resiliency
Convolutional neural networks (CNNs) are commonly trained using a fixed spatial image size predetermined for a given model. Although trained on images of a specific size, it is well established that CNNs can be used to evaluate a wide range of image sizes at test time, by adjusting the size of intermediate feature maps...
withdrawn-rejected-submissions
This work proposes to train networks with mixed image sizes to allow for faster inference and also for robustness. The reviewers found the paper was well-written and appreciated that the code was available for reproducibility. However, the paper does not sufficiently compare to related methods. The authors should resub...
train
[ "oemu0FTfFJI", "lLmXcktx4v1", "TlYOaEBPyL7", "gkrcN7B02wF", "TaobUiUxsx", "RrGx8PRuQdB", "UMffaNsMSru", "IJ6U4CE3Lbz", "IAIDXwcGzpR", "Z4y7Y839OxC" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the clarification. \nWe tested GS on the other methods (baseline and D+), and it did not improve accuracy. We therefore stated that it was beneficial only for the B+ regime (where it improved over B+ without gradient smoothing and over the baseline).\n \nRegarding robustness at training --- in all sampling regi...
[ -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 4, 5 ]
[ "lLmXcktx4v1", "TlYOaEBPyL7", "UMffaNsMSru", "IJ6U4CE3Lbz", "IAIDXwcGzpR", "Z4y7Y839OxC", "iclr_2021_YZrQKLHFhv3", "iclr_2021_YZrQKLHFhv3", "iclr_2021_YZrQKLHFhv3", "iclr_2021_YZrQKLHFhv3" ]
iclr_2021_IqVB8e0DlUd
Fair Differential Privacy Can Mitigate the Disparate Impact on Model Accuracy
The techniques based on the theory of differential privacy (DP) have become a standard building block in the machine learning community. DP training mechanisms offer strong guarantees that an adversary cannot determine with high confidence anything about the training data based on analyzing the released model, let alone any deta...
withdrawn-rejected-submissions
This paper proposes an algorithm to address the disparate effect that DP has on the accuracy of minority/low-frequency sub-populations. Unfortunately the work does not actually guarantee or analyze the resulting privacy guarantees. In particular it may provide much worse privacy (or no privacy at all) to the minority s...
train
[ "oTz7ozwBCNV", "Nhu-eSTjz1f", "lAEIzcdDqUa", "wdPFkee2Xv", "Wt3o3ukr1Ei", "ISGQUuNFRGu", "HVWLTPCESfx", "iujbUSo9BCB", "o0r1UE35eVk" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We greatly appreciate the reviewer’s detailed review and suggestions to improve the paper.\n\nPlease refer to [Privacy Guarantee] in our response to all reviewers.\n\n[Lipschitz Constant of Logistic Regression]: Thank you for providing valuable information about the Lipschitz constant of the logistic regression. W...
[ -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "iujbUSo9BCB", "ISGQUuNFRGu", "o0r1UE35eVk", "HVWLTPCESfx", "iclr_2021_IqVB8e0DlUd", "iclr_2021_IqVB8e0DlUd", "iclr_2021_IqVB8e0DlUd", "iclr_2021_IqVB8e0DlUd", "iclr_2021_IqVB8e0DlUd" ]
iclr_2021_Ek7qrYhJMbn
Central Server Free Federated Learning over Single-sided Trust Social Networks
Federated learning has become increasingly important for modern machine learning, especially for data privacy-sensitive scenarios. Existing federated learning mostly adopts the central server-based architecture or centralized architecture. However, in many social network scenarios, centralized federated learning is not...
withdrawn-rejected-submissions
The paper studies federated learning in what they call a 'single-sided trust' scenario, i.e. there is no dedicated server and the trust relationship is asymmetric. This paper was a trickier case to decide on, and more borderline, in our opinion, than the reviewers' scores suggest, primarily because the reviewers' re...
train
[ "lpnhkqS9Y7f", "txJeSrhiMo3", "qYvSe1q74p", "t0oqyx8lphx", "HyTwYnviBGx", "WzkJbdksWC9", "FOztKqcRefG", "-m4sn6UcL0j", "0P760t2vx84" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Given the reviews of other reviewers and given that my comments (3, 4, and 5) have not been addressed, unfortunately I have lowered my score. BTW, regarding the comment 5, I meant the second paper (i.e., Personalized Federated Learning with Moreau Envelopes). I am not sure if the first paper the authors cited has ...
[ -1, 4, -1, -1, -1, -1, 4, 5, 8 ]
[ -1, 3, -1, -1, -1, -1, 4, 3, 5 ]
[ "qYvSe1q74p", "iclr_2021_Ek7qrYhJMbn", "txJeSrhiMo3", "-m4sn6UcL0j", "0P760t2vx84", "FOztKqcRefG", "iclr_2021_Ek7qrYhJMbn", "iclr_2021_Ek7qrYhJMbn", "iclr_2021_Ek7qrYhJMbn" ]
iclr_2021_CxGPf2BPVA
Regularization Shortcomings for Continual Learning
In most machine learning algorithms, training data is assumed to be independent and identically distributed (iid). When this is not the case, the performance of the algorithms is challenged, leading to the famous phenomenon of \textit{catastrophic forgetting}. Algorithms dealing with it are gathered in the \tex...
withdrawn-rejected-submissions
All four reviewers recommend rejecting the paper. However there is agreement that this is an interesting line of research, and the AC agrees. Reviewers provided extensive and well educated feedback. The authors did not respond to the raised concerns.
train
[ "CupEGj8TnTp", "bgFomwYNFhD", "X2_aLMp5Le8", "Pjn-Nq1S91V" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "TL;DR: The paper presents an intuition as to why continual learning with regularization will be unable to discern subsequent tasks. While I believe the intuitions to be correct, the formal language used to present this intuition is not sufficiently rigorous to warrant publication.\n\nThe paper sets out to build on ...
[ 4, 5, 3, 5 ]
[ 4, 2, 5, 4 ]
[ "iclr_2021_CxGPf2BPVA", "iclr_2021_CxGPf2BPVA", "iclr_2021_CxGPf2BPVA", "iclr_2021_CxGPf2BPVA" ]
iclr_2021_0naHZ3gZSzo
Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm
Modern machine learning algorithms usually involve tuning multiple (from one to thousands) hyperparameters which play a pivotal role in terms of model generalizability. Globally choosing appropriate values of hyperparameters is extremely computationally challenging. Black-box optimization and gradient-based algorit...
withdrawn-rejected-submissions
The paper has been actively discussed in the light of the authors’ response. Following a strong consensus across the reviewers, the paper is recommended for rejection. Even though the paper was, overall, found quite clear, theoretically sound and tackling a relevant problem for the ICLR community, they listed several c...
train
[ "W31NCGvMldd", "bYB3_ekcUm-", "zalpVd49fBl", "-CZiI2QShOO", "UVu0Y6ngfr1", "pkDr8vfGXOI", "PDKMJU4zP27", "zDInk0BO4zu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "EDIT: **post rebuttal.** I'd like to thank the authors for their response. As scalability to high-dimensional hyperparameter spaces is presented as a key advantage of the method, direct comparisons to high-dimensional BO techniques would be needed. The fact that using trust regions could benefit HOZOG, or that HOZOG...
[ 4, 5, 4, -1, -1, -1, -1, 3 ]
[ 4, 4, 4, -1, -1, -1, -1, 5 ]
[ "iclr_2021_0naHZ3gZSzo", "iclr_2021_0naHZ3gZSzo", "iclr_2021_0naHZ3gZSzo", "zDInk0BO4zu", "bYB3_ekcUm-", "zalpVd49fBl", "W31NCGvMldd", "iclr_2021_0naHZ3gZSzo" ]
iclr_2021_iUTHidd-ylL
Matrix Data Deep Decoder - Geometric Learning for Structured Data Completion
In this work, we present a fully convolutional end to end method to reconstruct corrupted sparse matrices of Non-Euclidean data. The classic example for such matrices is recommender systems matrices where the rows/columns represent items/users and the entries are ratings. The method we present is inspired by the surpri...
withdrawn-rejected-submissions
The focus of this paper is to analyze an end to end network to reconstruct matrices originating from non-Euclidean data which are corrupted. The authors present an untrained network for this task. In the review period the reviewers raised a variety of concerns including concerns about novelty of the paper with respect ...
train
[ "VfcJ4UB9QSq", "2mfpuNwIXH8", "FHx8gFyYfdu", "_iqwmjgFff4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In this work, the authors addressed the problem of Matrix Completion on Non-Euclidean domains. They mainly adopt the method Matrix Data Deep Decoder, inspired by the Deep Decoder method for Image Completion. Here are my main concerns:\n\n1. The technical contributions of this paper are limited since it mainly ado...
[ 3, 3, 4, 3 ]
[ 4, 4, 4, 5 ]
[ "iclr_2021_iUTHidd-ylL", "iclr_2021_iUTHidd-ylL", "iclr_2021_iUTHidd-ylL", "iclr_2021_iUTHidd-ylL" ]
iclr_2021_PO0SuuafSX
3D Scene Compression through Entropy Penalized Neural Representation Functions
Some forms of novel visual media enable the viewer to explore a 3D scene from essentially arbitrary viewpoints, by interpolating between a discrete set of original views. Compared to 2D imagery, these types of applications require much larger amounts of storage space, which we seek to reduce. Existing approaches for co...
withdrawn-rejected-submissions
Description: The paper presents a method for encoding a compressed version of an implicit 3D scene, given images from arbitrary viewpoints. This is achieved via a function, learned with a NeRF model, that maps spatial coordinates to a radiance vector field and is optimized for high compressibility and low recons...
train
[ "csETVrO2VbE", "Eh7aqnqx5Gj", "iGH3nWvfMKy", "32K65143i_G", "NQYZe4VUN_", "Y23I8Thc4RB", "dp-fG2Va-95", "CYnYni84S-", "5cE0tr60YSZ", "_oyaIfM066f" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the rebuttal; however, I still have concerns.\n\n1. I do agree the baseline I proposed will be time-consuming on the decoder side; however, this is the only comparison I can think of that can demonstrate the effectiveness of the proposed method (for 3D scene compression). The author argues the training o...
[ -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 3, 5 ]
[ "NQYZe4VUN_", "iclr_2021_PO0SuuafSX", "dp-fG2Va-95", "CYnYni84S-", "_oyaIfM066f", "5cE0tr60YSZ", "iclr_2021_PO0SuuafSX", "iclr_2021_PO0SuuafSX", "iclr_2021_PO0SuuafSX", "iclr_2021_PO0SuuafSX" ]
iclr_2021_-gabSeMKO4H
Translation Memory Guided Neural Machine Translation
Many studies have proven that Translation Memory (TM) can help improve the translation quality of neural machine translation (NMT). Existing ways either employ an extra encoder to encode information from TM or concatenate the source sentence and TM sentences as the encoder's input. These previous methods don't model the semantic ...
withdrawn-rejected-submissions
This paper presents a way to use a translation memory (TM) to improve neural machine translation. Basically, the proposed model uses n-gram matching to retrieve a matching sentence (or pieces) and takes advantage of the useful parts using gated attention and a copying mechanism. Although the idea of leveraging TM in the context of ...
val
[ "8Dx8xypI6k", "FV8mgwcEn_", "Tu1a3WkxQ14", "nt7WTftPnWQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "** Summary **\n\n(1) The authors proposed a translation system with an external memory. Given a sentence $x$ to be translated, they first retrieve a $(tx, ty)$ sentence pair from the training set through ``SEGMENT-BASED TM RETRIEVAL’’ defined in Section 4.1, where $tx$ and $ty$ are from source and target languages...
[ 4, 2, 4, 4 ]
[ 5, 5, 4, 4 ]
[ "iclr_2021_-gabSeMKO4H", "iclr_2021_-gabSeMKO4H", "iclr_2021_-gabSeMKO4H", "iclr_2021_-gabSeMKO4H" ]
iclr_2021_4zr9e5xwZ9Y
Distributed Training of Graph Convolutional Networks using Subgraph Approximation
Modern machine learning techniques are successfully being adapted to data modeled as graphs. However, many real-world graphs are typically very large and do not fit in memory, often making the problem of training machine learning models on them intractable. Distributed training has been successfully employed to allevia...
withdrawn-rejected-submissions
The paper proposes a new distributed training method for graph convolutional networks, using subgraph approximation. The reviewers raised multiple concerns, such as novelty, the validity of experiments, and some technical issues. The authors did not respond to the reviewers' comments. The AC agreed with the reviewers tha...
val
[ "10x2ANP0wqh", "q5rvUkAVdN1", "EB0DGJiTryx", "S4XDagbDQfE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Graph Convolutional Networks (GCNs) have inspired state-of-the-art methods for learning representations on graphs. However, training GCNs for very large graphs remains an issue because of their memory and computation demands. Towards addressing this issue, the paper proposes a distributed GCN training scheme based on...
[ 5, 4, 5, 4 ]
[ 4, 5, 4, 4 ]
[ "iclr_2021_4zr9e5xwZ9Y", "iclr_2021_4zr9e5xwZ9Y", "iclr_2021_4zr9e5xwZ9Y", "iclr_2021_4zr9e5xwZ9Y" ]
iclr_2021_y2I4gyAGlCB
Imagine That! Leveraging Emergent Affordances for 3D Tool Synthesis
In this paper we explore the richness of information captured by the latent space of a vision-based generative model. The model combines unsupervised generative learning with a task-based performance predictor to learn and to exploit task-relevant object affordances given visual observations from a reaching task, invo...
withdrawn-rejected-submissions
This paper proposes a method for tool synthesis by jointly training a generative model over meshes and a task success predictor. Gradient-based planning is then used to find a latent space tool representation which maximizes task success, given a starting tool and an input scene. The results indicate that this method c...
train
[ "QnqwFETJPw", "AnA218efMpU", "yfS-VAfmelL", "eBaRxERB7QI", "K5UxaHGNQq", "wD0oamMkqKR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer" ]
[ "Summary\n-------\nBy combining a task-success classifier with the latent space of a tool-shape generative model, this paper shows that an activation-maximization approach can generate tool shapes which can succeed at particular tasks. \n\nPositives\n---------\nThe paper addresses the interesting topic of affordan...
[ 5, 4, 4, -1, -1, 4 ]
[ 3, 3, 5, -1, -1, 5 ]
[ "iclr_2021_y2I4gyAGlCB", "iclr_2021_y2I4gyAGlCB", "iclr_2021_y2I4gyAGlCB", "iclr_2021_y2I4gyAGlCB", "iclr_2021_y2I4gyAGlCB", "iclr_2021_y2I4gyAGlCB" ]
iclr_2021_PeT5p3ocagr
PGPS : Coupling Policy Gradient with Population-based Search
Gradient-based policy search algorithms (such as PPO, SAC or TD3) in deep reinforcement learning (DRL) have shown successful results on a range of challenging control tasks. However, they often suffer from flat or deceptive gradient problems. As an alternative to policy gradient methods, population-based evolutionary a...
withdrawn-rejected-submissions
This paper proposes a hybrid algorithm that combines RL and population-based search. The work is interesting and well-written. But, the contribution of the work is very limited, in comparison with the state-of-the-art.
train
[ "pUjTt7a49x8", "nVwyhJoRgD5", "ERJxryzAkO", "xlWfvP0uVF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This is a nicely written paper. The authors propose an algorithm that combines population-based search with a policy gradient approach to obtain optimal policies in reinforcement learning. The proposed combination leads to yet another algorithm aiming at exploitation (TD3) and exploration (CEM). The idea of using su...
[ 5, 5, 3, 5 ]
[ 4, 3, 4, 4 ]
[ "iclr_2021_PeT5p3ocagr", "iclr_2021_PeT5p3ocagr", "iclr_2021_PeT5p3ocagr", "iclr_2021_PeT5p3ocagr" ]
iclr_2021_fStMpzKkjMT
Why Does Decentralized Training Outperform Synchronous Training In The Large Batch Setting?
Distributed Deep Learning (DDL) is essential for large-scale Deep Learning (DL) training. Using a sufficiently large batch size is critical to achieving DDL runtime speedup. In a large batch setting, the learning rate must be increased to compensate for the reduced number of parameter updates. However, a large batch si...
withdrawn-rejected-submissions
This paper presents experimental studies on how DPSGD and SSGD converge in different tasks. Some concerns were raised regarding the clarity, some unjustified claims, baselines, etc., and were only partially addressed after the rebuttal and discussions. However, some critical concerns remain. The reviewers agreed that the paper wou...
train
[ "36tutOrLDXg", "nDEKaxmx99a", "iqBf4Tf8VHx", "a97HHIqK7NT", "dihZjgqE7U", "111vte6w4_t", "RnQCkJEo8bu", "wPUUpaUgqV4", "VgjV5yNNu57", "CLOdTfGe9ce" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear authors,\n\nThank you for responding to many of my questions. I want to re-emphasize that I think this paper provides an interesting take on gossip algorithms for deep learning.\n\nI can appreciate the response highlighting the size of the ASR task, but unfortunately, in the context of the vision tasks, t...
[ -1, -1, -1, -1, -1, -1, 5, 3, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "RnQCkJEo8bu", "iqBf4Tf8VHx", "VgjV5yNNu57", "wPUUpaUgqV4", "RnQCkJEo8bu", "CLOdTfGe9ce", "iclr_2021_fStMpzKkjMT", "iclr_2021_fStMpzKkjMT", "iclr_2021_fStMpzKkjMT", "iclr_2021_fStMpzKkjMT" ]
iclr_2021_KTEde38blNB
Intervention Generative Adversarial Nets
In this paper we propose a novel approach for stabilizing the training process of Generative Adversarial Networks as well as alleviating the mode collapse problem. The main idea is to incorporate a regularization term that we call intervention into the objective. We refer to the resulting generative model as Interventi...
withdrawn-rejected-submissions
Nominally, the scores on this paper were pretty split. In reality, I concur with the 2 and the 3. The 6 acknowledges being unfamiliar w/ the GAN literature, and I think the 7 is being too permissive about the baselines. The empirical evaluation here is simply not up to par for a major machine learning conference. ...
train
[ "P7HFrvRZOes", "5j3kymreqe", "m3Z4GQmwEDz", "_OwDUeqH4e" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Intervention Generative Adversarial Nets\n\nSummary\n\nThis paper proposes a GAN training procedure where images are encoded into latents, which are perturbed and passed through a generator/decoder, and the discriminator objective is augmented to encourage it to classify the perturbation. Results are presented for...
[ 3, 6, 2, 7 ]
[ 5, 3, 5, 4 ]
[ "iclr_2021_KTEde38blNB", "iclr_2021_KTEde38blNB", "iclr_2021_KTEde38blNB", "iclr_2021_KTEde38blNB" ]
iclr_2021_c5klJN-Bpq1
Generalizing Tree Models for Improving Prediction Accuracy
Can we generalize and improve the representation power of tree models? Tree models are often favored over deep neural networks due to their interpretable structures in problems where interpretability is required, such as in the classification of feature-based data where each feature is meaningful. However, most tre...
withdrawn-rejected-submissions
The paper provides a neural generalization of decision trees with the idea of maintaining interpretability. The approach falls a bit short on theoretical grounds. For example, the main theorem portraying interpretability isn't properly defined and some definitions appear implicitly in the proof. The view of decision tr...
val
[ "84bc8kSq6D", "VgDHR5KSLLX", "rgecyFQKuma", "wPMQAQYAN_z", "AxGKKuD8eaV", "FZqJzT0rL5g", "5kyhtA2cli", "kBk-XO3Eq_Y", "eQ46rauRZur" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update: I have read the author's response and decided to keep my review, confidence, and score. \n\n----\n\nSummary: this paper generalizes existing decision trees to some neural-style model. The most critical argument is that the model generalizes decision trees while maintaining interpretability. Since this is a ...
[ 3, 4, -1, -1, -1, -1, -1, 4, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_c5klJN-Bpq1", "iclr_2021_c5klJN-Bpq1", "5kyhtA2cli", "kBk-XO3Eq_Y", "eQ46rauRZur", "84bc8kSq6D", "VgDHR5KSLLX", "iclr_2021_c5klJN-Bpq1", "iclr_2021_c5klJN-Bpq1" ]
iclr_2021_68747kJ0qKt
On Dropout, Overfitting, and Interaction Effects in Deep Neural Networks
We examine Dropout through the perspective of interactions. Given N variables, there are O(N^2) possible pairwise interactions, O(N^3) possible 3-way interactions, i.e. O(N^k) possible interactions of k variables. Conversely, the probability of an interaction of k variables surviving Dropout at rate p is O((1−p)^k). In thi...
withdrawn-rejected-submissions
This paper analyzes dropout and shows it selectively regularizes against learning higher-order interactions. The paper received mixed reviews, with two in favor of rejection and one in favor of acceptance. Specifically, while all reviewers find the intuitions and ideas in the paper adequate/plausible, two reviewers did...
val
[ "3vcvT6pvDkp", "m8RirTNsAt9", "y2_XdTsBgTl", "Cv6IqfOJ51T", "-FiSGyCsDhH", "x_JqHM1Szi" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "*Summary*\nThe authors are analyzing to what extent dropout regularizes the training stage of deep networks, showing that high-order interactions are discouraged, this being a proxy for a better generalization capability once spurious co-adaptations are removed. In an extended mathematical analysis, the autho...
[ 7, 4, -1, -1, -1, 4 ]
[ 5, 4, -1, -1, -1, 3 ]
[ "iclr_2021_68747kJ0qKt", "iclr_2021_68747kJ0qKt", "x_JqHM1Szi", "3vcvT6pvDkp", "m8RirTNsAt9", "iclr_2021_68747kJ0qKt" ]
iclr_2021_xng0HoPDaFN
An Adversarial Attack via Feature Contributive Regions
Recently, to deal with the vulnerability of CNNs to adversarial examples, many advanced algorithms have been proposed. These algorithms focus on modifying global pixels directly with small perturbations, and some work involves modifying local pixels. However, the global attacks have the problem of perturbat...
withdrawn-rejected-submissions
This paper proposes an adversarial attack method based on feature contributive regions. In generating adversarial perturbations, Grad-CAM heatmaps are used as constraints for the perturbation. The overall idea is interesting and straightforward. However, as several reviewers pointed out, similar methods have been proposed i...
train
[ "OOYBwK9dtdA", "oDrVU-IShtR", "0BjbRGzEAXr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper focuses on the problem of generating sparse l2-adversarial examples in a white-box and surrogate/transfer setting. The authors consider “local attacks” – perturbing only a limited number of pixels while achieving a high attack success rate. The main contribution of this work is to define the region to pertu...
[ 3, 5, 3 ]
[ 4, 4, 2 ]
[ "iclr_2021_xng0HoPDaFN", "iclr_2021_xng0HoPDaFN", "iclr_2021_xng0HoPDaFN" ]
iclr_2021_5Dj8rVRg9Ui
Using Synthetic Data to Improve the Long-range Forecasting of Time Series Data
Effective long-range forecasting of time series data remains an unsolved and open problem. One possible approach is to use generative models to improve long-range forecasting, but the challenge then is how to generate high-quality synthetic data. In this paper, we propose a conditional Wasserstein GAN with Gradient and...
withdrawn-rejected-submissions
This paper tackles the problem of long-term time series forecasting. One challenge in long-term forecasting is that sufficient data are often not available. This paper proposes to use GANs to generate data that can be used to improve long-range forecasts. While reviewers agree that this paper presents an interesting...
train
[ "wp6gvEgRHdQ", "61IPJhHvzrb", "NvcDsDN-_ly", "U0WbY-mzK1_", "AR6v-EEjZ1k", "bREFXXhVQy", "C27dCZknLu3", "TnExDeHgBeU" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis paper proposes cWGAN-GEP for long-range forecasting of time-series data. cWGAN-GEP is a combination of data generation and data prediction, where, given some observations, the GAN iteratively generates short synthetic time-series data, and an LSTM subsequently makes a long-range prediction based on the ...
[ 5, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_5Dj8rVRg9Ui", "iclr_2021_5Dj8rVRg9Ui", "TnExDeHgBeU", "TnExDeHgBeU", "C27dCZknLu3", "wp6gvEgRHdQ", "iclr_2021_5Dj8rVRg9Ui", "iclr_2021_5Dj8rVRg9Ui" ]
iclr_2021_zWvMjL6o60V
Improving Mutual Information based Feature Selection by Boosting Unique Relevance
Mutual Information (MI) based feature selection makes use of MI to evaluate each feature and eventually shortlist a relevant feature subset, in order to address issues associated with high-dimensional datasets. Despite the effectiveness of MI in feature selection, we have noticed that many state-of-the-art algori...
withdrawn-rejected-submissions
During the discussion among reviewers, we have shared the concern that this work has a significant overlap with [Liu et al. 2018] and [Liu & Motani 2020]. Although the authors tried to address this concern in the author response, I also think that the differences are not sufficient. In particular, the reviewers pointed out t...
train
[ "IG_7fQst1PA", "KoUZC_6JT1N", "Y_IcN7xcV_x", "37eZNuyPfYn", "yUhTKw7_Onr", "UF-Dlp6y3E6", "FwyA7erQRjc", "1RkC4yl2rdw", "5rhO7x26fg0", "j4ERfUOeT7I" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update: The author response has not changed my opinion that there is insufficient new material in this paper vs the ISIT paper, and the presentation of the material from the ISIT paper does not note that this material was previously presented there. Without clarity in what is the novel material claimed in this pap...
[ 2, -1, -1, -1, -1, -1, -1, 4, 4, 8 ]
[ 5, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ "iclr_2021_zWvMjL6o60V", "1RkC4yl2rdw", "5rhO7x26fg0", "j4ERfUOeT7I", "IG_7fQst1PA", "IG_7fQst1PA", "iclr_2021_zWvMjL6o60V", "iclr_2021_zWvMjL6o60V", "iclr_2021_zWvMjL6o60V", "iclr_2021_zWvMjL6o60V" ]
iclr_2021_j7qEcn647RY
LINGUINE: LearnIng to pruNe on subGraph convolUtIon NEtworks
Graph Convolutional Network (GCN) has become one of the most successful methods for graph representation learning. Training and evaluating GCNs on large graphs is challenging since full-batch GCNs have high overhead in memory and computation. In recent years, research communities have been developing stochastic samplin...
withdrawn-rejected-submissions
This paper presents an approach for training GCNs by learning to select subgraphs to train on to improve efficiency when transferring the model to larger graphs. In the proposed method, a meta-model and a light-weight GCN are trained iteratively in turns. Results are presented on medium-to-large graph datasets such as ...
train
[ "BU6x-T1kTzq", "F3pKL3lJns4", "KlhE0w4FKn_", "9t4QKqUW3A6", "eU6w_9at4K4", "Ss4jsSQpUxN", "hMTKSnhN4tB", "B3zVzSzkM5M" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "First, we would like to thank the anonymous reviewer for their helpful comments. In this work, we used a purely empirical method to illustrate the nodes that were less informative vs the nodes that were more important in bootstrapping. The word redundant might be confusing to the reader. We are saying that redundant nodes ...
[ -1, -1, -1, -1, 3, 3, 4, 5 ]
[ -1, -1, -1, -1, 3, 4, 5, 3 ]
[ "eU6w_9at4K4", "Ss4jsSQpUxN", "hMTKSnhN4tB", "B3zVzSzkM5M", "iclr_2021_j7qEcn647RY", "iclr_2021_j7qEcn647RY", "iclr_2021_j7qEcn647RY", "iclr_2021_j7qEcn647RY" ]
iclr_2021_9CG8RW_p3Y
Fundamental Limits and Tradeoffs in Invariant Representation Learning
Many machine learning applications involve learning representations that achieve two competing goals: To maximize information or accuracy with respect to a target while simultaneously maximizing invariance or independence with respect to a subset of features. Typical examples include privacy-preserving learning, domain...
withdrawn-rejected-submissions
This paper studies an interesting information-theoretic trade-off between accuracy and invariance by posing it as a minimax problem. The results are of a theoretical nature. However, the implications of the results are not clear. Also, the model/assumptions the authors consider are not completely justified. Therefore, the pa...
train
[ "86ZrUC2M57", "B2v6NBJoWZS", "jQ8RWSN68cr", "34ghoXneS7J", "zt_2QC7aAj", "JQX0Jicolj8", "ThJ2hzsetDe", "YoWUBrmWS9", "hbmz8M2IBlf" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the updated comment. We acknowledge that the concepts and results in the manuscript are a bit abstract, but we also believe this level of abstraction from specific applications makes our results more widely applicable in broader domains, including but not limited to fairness, domain adaptation, priva...
[ -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 1, 3, 2 ]
[ "B2v6NBJoWZS", "zt_2QC7aAj", "ThJ2hzsetDe", "YoWUBrmWS9", "hbmz8M2IBlf", "iclr_2021_9CG8RW_p3Y", "iclr_2021_9CG8RW_p3Y", "iclr_2021_9CG8RW_p3Y", "iclr_2021_9CG8RW_p3Y" ]
iclr_2021_uhiF-dV99ir
Visualizing High-Dimensional Trajectories on the Loss-Landscape of ANNs
Training artificial neural networks requires the optimization of highly non-convex loss functions. Throughout the years, the scientific community has developed an extensive set of tools and architectures that render this optimization task tractable and a general intuition has been developed for choosing hyper parameter...
withdrawn-rejected-submissions
The reviewers and I agree that the paper is well motivated and that there are good comparisons to prior work. However, the scope of the paper is rather limited, and there were some doubts about the overall conclusions and whether the current results fully support them. As such, I cannot recommend the paper for publicat...
train
[ "wk0QiKyqkZ4", "SpwGVlWtFVn", "ECYDBLOq1pc", "6vMb01TaGAq", "RLHLYMYJlx5", "UnFjCpVKAXK", "U7FiNHxjUpi", "wKmxdqiRWDE" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for his/her valuable comments. It is true that more analysis settings (architectures and data sets) would help support our claims regarding generalization, and we will focus on that in the future. To our knowledge however, the generalization properties of flat vs. sharp minima are still large...
[ -1, -1, -1, -1, 6, 4, 5, 5 ]
[ -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "UnFjCpVKAXK", "RLHLYMYJlx5", "U7FiNHxjUpi", "wKmxdqiRWDE", "iclr_2021_uhiF-dV99ir", "iclr_2021_uhiF-dV99ir", "iclr_2021_uhiF-dV99ir", "iclr_2021_uhiF-dV99ir" ]
iclr_2021_vSttC0bV3Ji
Deep Convolution for Irregularly Sampled Temporal Point Clouds
We consider the problem of modeling the dynamics of continuous spatial-temporal processes represented by irregular samples through both space and time. Such processes occur in sensor networks, citizen science, multi-robot systems, and many others. We propose a new deep model that is able to directly learn and pr...
withdrawn-rejected-submissions
The paper proposes a new spatial-temporal point cloud convolution. However, the reviewers suggest that the paper could be improved with stronger baselines and better motivation.
train
[ "1dxdzOf3io", "AbvYeT7Um-G", "2nAEp5bbgbV", "4g026068nTA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes an extension of PointConv for spatial-temporal point cloud modeling. The model can be used for prediction or forecasting and is evaluated on Starcraft II and weather nowcasting.\n\n- The TemporalPointConv follows PointConv and the current work extends this by appending time. I think the paper is...
[ 5, 5, 5, 4 ]
[ 4, 4, 3, 3 ]
[ "iclr_2021_vSttC0bV3Ji", "iclr_2021_vSttC0bV3Ji", "iclr_2021_vSttC0bV3Ji", "iclr_2021_vSttC0bV3Ji" ]
iclr_2021_Wis-_MNpr4
DarKnight: A Data Privacy Scheme for Training and Inference of Deep Neural Networks
Protecting the privacy of input data is of growing importance as machine learning methods reach new application domains. In this paper, we provide a unified training and inference framework for large DNNs while protecting input privacy and computation integrity. Our approach called DarKnight uses a novel data bli...
withdrawn-rejected-submissions
While reviewers appreciated the simple approach of this work, the biggest concern reviewers had was with the security guarantee of the method. R4 argued that in a certain case recovering an original image x_1 amounted to guessing 2 coefficients. In the discussion phase the authors argued that security amounts to the ad...
train
[ "E2ldJ2x3n_", "fRNNmnCI8EX", "IZg65Ph54j", "rPziBqsAgC", "EkHZHpgZkO", "s8O9rUc9YPQ", "qjyNn5zx_u2", "7jzXNBn3F_Q", "EdMjkhCVkXD" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\nResults - The two main proposed benefits of the approach are increased speedups in inference as well as training over Slalom and just SGX. Looking at Figure 2 and Figure 4, the speed-ups do not appear to be consistently significant. I do appreciate that the authors show the results for MobileNetV2, for which ...
[ 4, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ 4, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_Wis-_MNpr4", "EdMjkhCVkXD", "E2ldJ2x3n_", "qjyNn5zx_u2", "7jzXNBn3F_Q", "iclr_2021_Wis-_MNpr4", "iclr_2021_Wis-_MNpr4", "iclr_2021_Wis-_MNpr4", "iclr_2021_Wis-_MNpr4" ]
iclr_2021_8q_ca26L1fz
Revisiting Graph Neural Networks for Link Prediction
Graph neural networks (GNNs) have achieved great success in recent years. Three most common applications include node classification, link prediction, and graph classification. While there is rich literature on node classification and graph classification, GNNs for link prediction is relatively less studied and less un...
withdrawn-rejected-submissions
The authors study the expressive power of Graph Neural Network architectures for the link prediction problem and provide theoretical justification for the strong performance of SEAL on link prediction benchmarks. However, the reviewers think the paper needs to improve in several aspects before it can be published: 1. ...
train
[ "8l3eiQKH9Sr", "a_0wGtMpICB", "Q6u99f_arbr", "AJKjE7iYfiS", "9qVTAc6Jvy", "UM0b73iIikX", "cqKzQBROcv2", "a8XZXV1wIil", "2VsGTaTt-KH", "DoQStqORcm", "zdzJsqaj26W", "XknJGbv94ym", "BAns9_EKeak", "yjAVW19WV5", "qBiz8ADSDIc", "9VnYmids38L", "zPdqbT9Ry7H" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "public", "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your detailed response. \n\nLi et al define the distance encoding quite generally, in a manner that is very similar to the presentation in this paper. Their Definition 3.1 defines a distance encoding as a permutation invariant labelling function with dependencies on S and A, which is very similar to the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "XknJGbv94ym", "AJKjE7iYfiS", "9qVTAc6Jvy", "Q6u99f_arbr", "DoQStqORcm", "cqKzQBROcv2", "2VsGTaTt-KH", "iclr_2021_8q_ca26L1fz", "a8XZXV1wIil", "yjAVW19WV5", "qBiz8ADSDIc", "9VnYmids38L", "zPdqbT9Ry7H", "iclr_2021_8q_ca26L1fz", "iclr_2021_8q_ca26L1fz", "iclr_2021_8q_ca26L1fz", "iclr_2...
iclr_2021_eYgI3cTPTq9
How to Avoid Being Eaten by a Grue: Structured Exploration Strategies for Textual Worlds
Text-based games are long puzzles or quests, characterized by a sequence of sparse and potentially deceptive rewards. They provide an ideal platform to develop agents that perceive and act upon the world using a combinatorially sized natural language state-action space. Standard Reinforcement Learning agents are poorly...
withdrawn-rejected-submissions
The paper describes very interesting work that advances the state of the art in Zork by going beyond an important state bottleneck. While there is an important engineering contribution, the reviewers raised important concerns regarding the novelty of the question-answering approach to construct knowledge graphs, the c...
train
[ "4yh6qzZprjZ", "XtdVT5J-nt3", "9uK27M5E07S", "oO6HgmmXtPP", "HRj8OWzRdwd", "G4M9b3fTEFc", "jiZq1PnGM2X", "xwwdz3c01xU" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for their encouraging and helpful comments.\n\n1. I am not convinced that the dependency graph of Zork can be divided into a linear set of \"level\" subdivisions the authors state before Equation 1. For example, how about branches? If there are parallel tasks that need to be performed each wi...
[ -1, -1, -1, -1, 4, 6, 7, 5 ]
[ -1, -1, -1, -1, 4, 2, 4, 4 ]
[ "jiZq1PnGM2X", "G4M9b3fTEFc", "HRj8OWzRdwd", "xwwdz3c01xU", "iclr_2021_eYgI3cTPTq9", "iclr_2021_eYgI3cTPTq9", "iclr_2021_eYgI3cTPTq9", "iclr_2021_eYgI3cTPTq9" ]
iclr_2021__qJXkf347k
Reinforcement Learning Based Asymmetrical DNN Modularization for Optimal Loading
Latency of DNN (Deep Neural Network) based prediction is the summation of model loading latency and inference latency. Model loading latency affects the first response from the applications, whereas inference latency affects the subsequent responses. As model loading latency is directly proportional to the model size, ...
withdrawn-rejected-submissions
The paper proposes a novel RL-based solution to the optimal partitioning of DNNs, which is interesting to readers. However, the paper is not well presented and hard to follow. The lack of comparisons against existing solutions and the inconsistencies in the writing pointed out by the reviewers largely weaken the submission. ...
train
[ "WfW_SLujjjF", "PkUSS0qBop5", "vAu7n2-MXiN", "Ek8bo1jrzQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors present a method to optimise the _loading time_ of a deep neural network on e.g. a mobile device. With ever more complicated and bigger networks (notwithstanding approaches to prune networks etc.), loading parameters can cause a noticeable initial delay before the first inference step. The authors prop...
[ 3, 2, 3, 4 ]
[ 4, 5, 3, 4 ]
[ "iclr_2021__qJXkf347k", "iclr_2021__qJXkf347k", "iclr_2021__qJXkf347k", "iclr_2021__qJXkf347k" ]
iclr_2021_NrN8XarA2Iz
Learning to Dynamically Select Between Reward Shaping Signals
Reinforcement learning (RL) algorithms often suffer from high sample complexity. Previous research has shown that the reliance on large amounts of experience can be mitigated through the presence of additional feedback. Automatic reward shaping is one approach to solving this problem, using automatic identificati...
withdrawn-rejected-submissions
This paper presents an approach to reward shaping in RL centred on the question of how to select between different shaping signals. As such this is an interesting research direction that could make important contributions in the area. Generally the reviewers felt that the paper is too preliminary in its current form. T...
val
[ "5Rrv-voHeO1", "rZogOl43Zd", "jKKdFnvRcFd", "427_VNtsblH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents an approach to select the best reward shaping potential signal out of multiple available shaping potentials. The main idea seems to be to select the shaping signal that minimizes the inverse of the difference of potentials between the next state and the current state. The experiments show the pr...
[ 5, 2, 4, 4 ]
[ 3, 5, 4, 4 ]
[ "iclr_2021_NrN8XarA2Iz", "iclr_2021_NrN8XarA2Iz", "iclr_2021_NrN8XarA2Iz", "iclr_2021_NrN8XarA2Iz" ]
iclr_2021_Whq-nTgCbNR
Anomaly detection in dynamical systems from measured time series
The paper addresses the problem of anomaly detection in nonlinear processes represented by measured time series. The anomaly detection problem is usually formulated as finding outlier data points relative to the usual signal, such as unexpected spikes, drops, or trend changes. In nonlinear dynamical systems, there are...
withdrawn-rejected-submissions
The paper focuses on anomaly detection in dynamical systems from time series measurements. The originality of the contribution is to detect anomalies not by detecting OOD observations but from identified parameters or statistics of the dynamical system. They use “polynomial neural networks”. All the re...
train
[ "Qmw6yXybBrW", "Gl0TTKlnMJE", "M9ogzBRE6LQ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes the use of polynomial neural architectures (PNN) for anomaly detection in dynamical systems, and more precisely, for feature extraction. The choice of the architecture is motivated by the links between PNNs and ODEs that describe the evolution of dynamical systems. \n\nQuality, clarity, original...
[ 4, 5, 4 ]
[ 4, 4, 5 ]
[ "iclr_2021_Whq-nTgCbNR", "iclr_2021_Whq-nTgCbNR", "iclr_2021_Whq-nTgCbNR" ]
iclr_2021_M_eaMB2DOxw
On Representing (Anti)Symmetric Functions
Permutation-invariant, -equivariant, and -covariant functions and anti-symmetric functions are important in quantum physics, computer vision, and other disciplines. Applications often require most or all of the following properties: (a) a large class of such functions can be approximated, e.g. all continuous functions (...
withdrawn-rejected-submissions
The paper received reviews from experts in representation of invariant functions. They all have expressed concerns regarding the novelty of the technical contributions, and the lack of appropriate comparisons to existing results. This applies in particular to representation of symmetric functions using neural networks ...
test
[ "fsaeNYjzv2B", "JCu-sW8HPym", "o2fT2bngqCV", "IqcrkI02Hyj", "n6_-YLByLsI", "jF4Wf4MosV0", "FxxB0hENYAO", "1-B5jF2nkMB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper is about representing functions $\\psi : (\\mathbb{R}^d)^n \\rightarrow \\mathbb{R}$ that are symmetric or asymmetric with respect to the permutation group $S_n$. The aim is to consider neural networks giving only functions that are symmetric or asymmetric, and to establish universality results. The motiv...
[ 4, 4, 6, -1, -1, -1, -1, 4 ]
[ 4, 4, 3, -1, -1, -1, -1, 4 ]
[ "iclr_2021_M_eaMB2DOxw", "iclr_2021_M_eaMB2DOxw", "iclr_2021_M_eaMB2DOxw", "1-B5jF2nkMB", "fsaeNYjzv2B", "JCu-sW8HPym", "o2fT2bngqCV", "iclr_2021_M_eaMB2DOxw" ]
iclr_2021_6SXNhWc5HFe
Provable Fictitious Play for General Mean-Field Games
We propose a reinforcement learning algorithm for stationary mean-field games, where the goal is to learn a pair of mean-field state and stationary policy that constitutes the Nash equilibrium. When viewing the mean-field state and the policy as two players, we propose a fictitious play algorithm which alternatively up...
withdrawn-rejected-submissions
This is interesting work, but not yet sufficiently mature for publication. Although the authors propose a novel algorithm and provide an analysis, the reviewers raised several criticisms about the comparison to previous work, the lack of any empirical evaluation, the strength and unnaturalness of the assumptions used...
train
[ "4zSnAAccTDQ", "ASgyxqqXaOc", "BGg_A85VBgr", "owjpBtsjsnN", "f_JKrcQr-d7", "VlgAcA-IDtI", "7OJPe7vMWNq", "fKAgKFEjpgZ", "H2M_LI9xrKY" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for your detailed feedback and constructive comments. Kindly see our detailed responses below.\n\nQ1: A potential issue of computing an approximate Q-function in the algorithm is that it requires fixing the mean-field state. \n\nA1: This is an excellent question. Although it is tempting to vie...
[ -1, -1, -1, -1, -1, 5, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, 3, 3, 5, 5 ]
[ "H2M_LI9xrKY", "VlgAcA-IDtI", "7OJPe7vMWNq", "fKAgKFEjpgZ", "4zSnAAccTDQ", "iclr_2021_6SXNhWc5HFe", "iclr_2021_6SXNhWc5HFe", "iclr_2021_6SXNhWc5HFe", "iclr_2021_6SXNhWc5HFe" ]
iclr_2021_oXQxan1BWgU
Model agnostic meta-learning on trees
In meta-learning, the knowledge learned from previous tasks is transferred to new ones, but this transfer only works if tasks are related, and sharing information between unrelated tasks might hurt performance. A fruitful approach is to share gradients across similar tasks during training, and recent work suggests that...
withdrawn-rejected-submissions
The paper proposes a variant of MAML for meta-learning on tasks with a hierarchical tree structure. The proposed algorithm is evaluated on synthetic datasets, and it compares favorably to MAML. The reviewers identified several significant weaknesses, including: (1) the experimental evaluation is limited, and it only in...
test
[ "FfHOhvQqjG4", "-bf1Ls84qtM", "nBmnE-RNQV5", "F8ZPIm1lSsd", "b5SZy3ChNHw", "R31GImAdF2N", "_gpSY4lXeks", "9z_ERwfOt6", "v5U9qJMdjT" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Re: novelty: I agree that the novelty of the approach lies in modifying MAML to capture the hierarchical structure. However, by modifying an existing algorithm, you necessarily reuse some of the ideas that were unique/novel to its original application. For example, in this case: The use of gradient descent to tune...
[ -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ -1, -1, -1, -1, -1, 3, 3, 4, 5 ]
[ "b5SZy3ChNHw", "R31GImAdF2N", "_gpSY4lXeks", "9z_ERwfOt6", "v5U9qJMdjT", "iclr_2021_oXQxan1BWgU", "iclr_2021_oXQxan1BWgU", "iclr_2021_oXQxan1BWgU", "iclr_2021_oXQxan1BWgU" ]
iclr_2021_a4E6SL1rG3F
Optimal allocation of data across training tasks in meta-learning
Meta-learning models transfer the knowledge acquired from previous tasks to quickly learn new ones. They are tested on benchmarks with a fixed number of data-points for each training task, and this number is usually arbitrary, for example, 5 instances per class in few-shot classification. It is unknown how the performa...
withdrawn-rejected-submissions
This paper studies the problem of how data should be balanced among a set of tasks within meta-learning. This problem is interesting, and largely hasn't been studied before. However, the reviewers raised several shortcomings of the current version of the paper, including the significance of the problem setting, the lim...
train
[ "jJHDslEl0iZ", "G6K-UtGmNIy", "FDSw0P3JKNV", "RBFdNSXMQos", "XHLpUegZPyb", "B_CF_50qPPm", "ErkiIc7Bz9w", "KfPSNvz5RQA" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for his careful consideration of our paper and for comments and suggestions. We would like to address some of the reviewer's concerns below, which we feel paint the paper and its results in a somewhat negative light.\n\n> I can't really see the significance of the problem […]\n\nWe appreciate...
[ -1, -1, -1, -1, 6, 4, 4, 4 ]
[ -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "KfPSNvz5RQA", "B_CF_50qPPm", "XHLpUegZPyb", "ErkiIc7Bz9w", "iclr_2021_a4E6SL1rG3F", "iclr_2021_a4E6SL1rG3F", "iclr_2021_a4E6SL1rG3F", "iclr_2021_a4E6SL1rG3F" ]