Dataset schema (each record below lists its fields in this order):

paper_id            string, 19–21 chars
paper_title         string, 8–170 chars
paper_abstract      string, 8–5.01k chars
paper_acceptance    string, 18 distinct values (e.g. "withdrawn-rejected-submissions")
meta_review         string, 29–10k chars
label               string, 3 distinct values (train / val / test)
review_ids          list of comment ids
review_writers      list ("official_reviewer", "author", or "public")
review_contents     list of comment texts
review_ratings      list of integer ratings (-1 for comments without a score)
review_confidences  list of integer confidences (-1 for comments without a score)
review_reply_tos    list of parent ids (the paper id for top-level reviews)

The six review_* lists are index-aligned: entry i of each list describes the same comment.
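Working with a record means handling these index-aligned lists; as the sample rows show, a rating or confidence of -1 marks a comment (typically an author reply) that carries no score. A minimal sketch, with values abridged from the first sample record (`mean_review_rating` is a hypothetical helper, not part of the dataset):

```python
# One record following the schema above (values abridged from the first
# sample row). The review_* lists are index-aligned; a rating of -1 marks
# an entry (e.g. an author reply) that carries no score.
record = {
    "paper_id": "iclr_2021_NGBY716p1VR",
    "review_writers": ["official_reviewer", "author", "official_reviewer"],
    "review_ratings": [7, -1, 5],
    "review_confidences": [5, -1, 4],
}

def mean_review_rating(rec):
    """Average rating over entries with a real score (rating != -1)."""
    scores = [r for r in rec["review_ratings"] if r != -1]
    return sum(scores) / len(scores) if scores else None

print(mean_review_rating(record))  # prints 6.0
```

The same filtering applies to `review_confidences`, since the -1 positions coincide with unscored entries in both lists.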
iclr_2021_NGBY716p1VR
Towards Understanding Fast Adversarial Training
Current neural-network-based classifiers are susceptible to adversarial examples. The most empirically successful approach to defending against such adversarial examples is adversarial training, which incorporates a strong self-attack during training to enhance its robustness. This approach, however, is computationally...
withdrawn-rejected-submissions
This paper first investigates the behavior (e.g., catastrophic overfitting) of fast adversarial training (FastAdv) through experiments. It finds that the key to its success is the ability to recover from overfitting to weak attacks. Then, it presents a simple fix (FastAdv+) that incorporates PGD adversarial training wh...
test
[ "mf_r8wO1T8b", "AIpIsRm_d4s", "EcGzqV1nlZi", "Ea8fEC6IKG", "THIePPO9UP5", "-Om9dCSL-p0", "LMg1vOnm66l", "1HharJHXsvM", "LhoM7XWy6Po", "oJpbrVZQU80", "zNPW43SaXrZ", "L0ybp9vLHOL", "kpU-COGQEuq", "SBHkrIl_LY0" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "################################## Summary ##################################\n\nThis paper shows that the main reason for the success of Fast Adversarial Training ([1], will be referred to as FBF in this review) is its ability to recover from catastrophic overfitting. Based on this observation, the authors propos...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2021_NGBY716p1VR", "EcGzqV1nlZi", "Ea8fEC6IKG", "THIePPO9UP5", "-Om9dCSL-p0", "LMg1vOnm66l", "LhoM7XWy6Po", "L0ybp9vLHOL", "mf_r8wO1T8b", "kpU-COGQEuq", "SBHkrIl_LY0", "iclr_2021_NGBY716p1VR", "iclr_2021_NGBY716p1VR", "iclr_2021_NGBY716p1VR" ]
iclr_2021_GafvgJTFkgb
A Technical and Normative Investigation of Social Bias Amplification
The conversation around the fairness of machine learning models is growing and evolving. In this work, we focus on the issue of bias amplification: the tendency of models trained from data containing social biases to further amplify these biases. This problem is brought about by the algorithm, on top of the level of bi...
withdrawn-rejected-submissions
The authors study bias amplification [Zhao et al, 2017] and propose an improved metric for measuring it. The authors also discussed normative issues in bias amplification (predicting a sensitive feature), and how to measure amplification when we do not have labels, or where labels correspond to uncertain future events....
val
[ "kbgJFW6A785", "7w-knaHNcze", "I9DimEjZuuJ", "F0IRom7yt2l", "tS4um5m3eyK", "YVWQ68ElYI0", "XhbXpSxPOrQ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your detailed feedback! In addition to some of the general comments above to all, we also wanted to address a few of your specific notes.\n\n*** Concern #1: The need to better validate the metric, as well as\n\n*** Concern #4: The need to explore fairness benchmark datasets\n\nAs per your suggestion,...
[ -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, 4, 4, 4 ]
[ "YVWQ68ElYI0", "tS4um5m3eyK", "XhbXpSxPOrQ", "iclr_2021_GafvgJTFkgb", "iclr_2021_GafvgJTFkgb", "iclr_2021_GafvgJTFkgb", "iclr_2021_GafvgJTFkgb" ]
iclr_2021_OyDjznG-x2e
Graph Permutation Selection for Decoding of Error Correction Codes using Self-Attention
Error correction codes are an integral part of communication applications and boost the reliability of transmission. The optimal decoding of transmitted codewords is the maximum likelihood rule, which is NP-hard. For practical realizations, suboptimal decoding algorithms are employed; however, the lack of theoretical i...
withdrawn-rejected-submissions
The reviewers positively valued the proposed idea of performing permutation selection in permutation decoding via combining node embedding and self-attention, which seems to be of high originality. I found that this paper is mostly clearly written, except Section 3.2 as AnonReviewer5 commented. The main concern among t...
train
[ "Qgbm5d0Frdi", "oB1rwNThMg2", "rsCkcJf7M7V", "9lCKVu_FiBG", "aJHbDroSHn", "mdKD83pnOEU", "ZXFvLqyHYY", "EiHVqaMRRi", "zLP9vmOT_fk", "65_7fLUxaqW", "c41jRnbWeMQ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper focuses on improving the computational complexity of permutation decoding. In permutation decoding, one aims to decode a permutation of the received codeword in the hope that it will lead to successful decoding as compared to applying the decoding algorithm on the received codeword. In practice, one perf...
[ 6, 5, -1, -1, -1, -1, -1, -1, 6, 5, 4 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2021_OyDjznG-x2e", "iclr_2021_OyDjznG-x2e", "zLP9vmOT_fk", "oB1rwNThMg2", "65_7fLUxaqW", "c41jRnbWeMQ", "Qgbm5d0Frdi", "iclr_2021_OyDjznG-x2e", "iclr_2021_OyDjznG-x2e", "iclr_2021_OyDjznG-x2e", "iclr_2021_OyDjznG-x2e" ]
iclr_2021_RmB-zwXOIVC
Imitation with Neural Density Models
We propose a new framework for Imitation Learning (IL) via density estimation of the expert's occupancy measure followed by Maximum Occupancy Entropy Reinforcement Learning (RL) using the density as a reward. Our approach maximizes a non-adversarial model-free RL objective that provably lower bounds reverse Kullback–Le...
withdrawn-rejected-submissions
This is a difficult borderline decision, with the reviewers evenly split in their final recommendation. Overall, the authors provided good responses to the reviewer questions: this was much appreciated. The reviewers requested additional ablations and explanations, which the authors provided. A prevailing concern is...
val
[ "y8bUqIm2JMN", "HyaMQxEuv24", "5AeHytHuTP", "93PLrvf5s1q", "YsDluADEo8J", "Y19aqs4BnGf", "k4jSIIVUGbc", "qdg0pCb701g", "WiGZA2oVasb", "sbFOu3brFNd", "sMB2Y1AVLrQ", "eV7fPEP1du2", "PuSQHsiiQ5r" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "=====POST-REBUTTAL COMMENTS======== \n\nI thank the authors for the response and the efforts in the updated draft. Most of my concerns were clarified and I still think the paper should be accepted. However, I agree with Reviewer 4 that additional experiments would be good to better tease out the reasons for this ...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_RmB-zwXOIVC", "iclr_2021_RmB-zwXOIVC", "93PLrvf5s1q", "YsDluADEo8J", "PuSQHsiiQ5r", "k4jSIIVUGbc", "eV7fPEP1du2", "y8bUqIm2JMN", "sbFOu3brFNd", "sMB2Y1AVLrQ", "iclr_2021_RmB-zwXOIVC", "iclr_2021_RmB-zwXOIVC", "iclr_2021_RmB-zwXOIVC" ]
iclr_2021_cxRUccyjw0S
Learning Disentangled Representations for Image Translation
Recent approaches for unsupervised image translation are strongly reliant on generative adversarial training and architectural locality constraints. Despite their appealing results, it can be easily observed that the learned class and content representations are entangled which often hurts the translation performance. ...
withdrawn-rejected-submissions
This paper was near the borderline, but ultimately, calibrating with the acceptance criteria applied to submissions across the conference, we didn't find sufficient enthusiasm among the reviewers to accept the paper. Two reviewers put it just above the bar for acceptance, on the strength of its results. A third revie...
train
[ "lRcqUOyr7PK", "yIEyBCSMCn", "lR-CdqaoSuD", "elxAx7YNuhB", "1VHZjTaztwO", "xDT8mucrAxn" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper presents a principled approach to style transfer by disentangling class-specific attributes from common (eq. class-independent) attributes. In order to do so, the paper leverages the formulation of a recently proposed disentangling approach called \"LORD\". The proposed approach is called OverLORD, and i...
[ 6, 6, -1, -1, -1, 4 ]
[ 4, 5, -1, -1, -1, 3 ]
[ "iclr_2021_cxRUccyjw0S", "iclr_2021_cxRUccyjw0S", "xDT8mucrAxn", "yIEyBCSMCn", "lRcqUOyr7PK", "iclr_2021_cxRUccyjw0S" ]
iclr_2021_yJHpncwG1B
Federated Learning's Blessing: FedAvg has Linear Speedup
Federated learning (FL) learns a model jointly from a set of participating devices without sharing each other's privately held data. The characteristics of non-\textit{i.i.d.} data across the network, low device participation, high communication costs, and the mandate that data remain private bring challenges in unders...
withdrawn-rejected-submissions
This work provides theoretical analysis for FedAvg, contributing better convergence rates than prior work. Moreover, the paper shows that setting E > 1 can reduce the number of communications. The contribution is incremental.
train
[ "lFQ8Qi3TzkQ", "skSKDfMRmgy", "rOr9nnN7nib", "J008y9XY-YF", "3emg7xel5b", "iyw9TMo4Vo", "PqO_4WIZhmP", "9-xzSl_1HLF", "-MaxB00vFvg", "JwNeW0iT2uj", "s_4dPMwlOS8" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your response! The $E^2 G^2/\\sqrt{KT}$ term in the bound in Theorem 2 indeed also comes from the sampling variance, in a way that is similar to the term $\\frac{\\kappa E^2 G^2 \\mu}{KT}$ in Theorem 1. We apologize that we did not directly address the concern about $E^2 G^2/\\sqrt{KT}$, and have incorp...
[ -1, -1, -1, 6, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, 3, 4, 4 ]
[ "rOr9nnN7nib", "JwNeW0iT2uj", "iyw9TMo4Vo", "iclr_2021_yJHpncwG1B", "-MaxB00vFvg", "s_4dPMwlOS8", "skSKDfMRmgy", "J008y9XY-YF", "iclr_2021_yJHpncwG1B", "iclr_2021_yJHpncwG1B", "iclr_2021_yJHpncwG1B" ]
iclr_2021_iktA2PtTRsK
Watching the World Go By: Representation Learning from Unlabeled Videos
Recent unsupervised representation learning techniques show remarkable success on many single image tasks by using instance discrimination: learning to differentiate between two augmented versions of the same image and a large batch of unrelated images. Prior work uses artificial data augmentation techniques such as ...
withdrawn-rejected-submissions
This paper was a difficult decision. Overall it seems to be a quality paper, well written and with many experiments, in particular evaluating learned representations across various tasks and datasets. The authors were also quite courteous in their replies which is appreciated. I really like the point the paper makes ab...
train
[ "0bTRSiqYLQS", "cgTIowXy9BY", "QzqYLIwJ3pp", "XWYGO_DNtx-", "L7Sjr9-lIb", "G2MJmqKXIbk" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thorough and encouraging comments. We appreciate that you have taken into account the significance of our findings in your decision. \n\n1. Clarity: Frankly, we are embarrassed to see these failures. When worrying about page limits, one often tests removing various sections without losing the co...
[ -1, -1, -1, 4, 8, 5 ]
[ -1, -1, -1, 5, 3, 3 ]
[ "L7Sjr9-lIb", "XWYGO_DNtx-", "G2MJmqKXIbk", "iclr_2021_iktA2PtTRsK", "iclr_2021_iktA2PtTRsK", "iclr_2021_iktA2PtTRsK" ]
iclr_2021_wC99I7uIFe
D2p-fed: Differentially Private Federated Learning with Efficient Communication
In this paper, we propose the discrete Gaussian based differentially private federated learning (D2p-fed), a unified scheme to achieve both differential privacy (DP) and communication efficiency in federated learning (FL). In particular, compared with the only prior work taking care of both aspects, D2p-fed provides st...
withdrawn-rejected-submissions
The paper considers differentially private federated learning --- a well-motivated problem. The proposed algorithm is a simple modification to existing methods, e.g., DP-FedAvg, but uses a different DP mechanism for noise-adding. The reviewers liked the motivation but criticized the work for its incremental nature and...
train
[ "5FHCs8Nhs5A", "I22IQexrgZe", "1JQhSBu0PW", "MQLAFMDLVtO", "DPp0pBkJAH-", "_EM2brTTve0", "UpyVGESjHlU", "aoeg7d6_eUz", "RBksuMjFVvT", "-7hEkhI8I4v", "a8S6qKUyXZ0" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Q: The writing is not sufficiently precise and clear. For example, before the statement of Thm 1, it is mentioned that the proof is delayed to Appendix A. However, the proof of Thm 1 is not provided therein. Rather the proof of Thm 3 is in that appendix. The proof of Thm 4 in Appendix B is not sufficiently precise...
[ -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "I22IQexrgZe", "1JQhSBu0PW", "aoeg7d6_eUz", "RBksuMjFVvT", "_EM2brTTve0", "-7hEkhI8I4v", "a8S6qKUyXZ0", "iclr_2021_wC99I7uIFe", "iclr_2021_wC99I7uIFe", "iclr_2021_wC99I7uIFe", "iclr_2021_wC99I7uIFe" ]
iclr_2021_crAi7c41xTh
Shape Matters: Understanding the Implicit Bias of the Noise Covariance
The noise in stochastic gradient descent (SGD) provides a crucial implicit regularization effect for training overparameterized models. Prior theoretical work largely focuses on spherical Gaussian noise, whereas empirical studies demonstrate the phenomenon that parameter-dependent noise --- induced by mini-batches or l...
withdrawn-rejected-submissions
The paper shows that for a simple nonlinear (quadratically parametrized linear) model, stochastic gradient descent (SGD) with a certain label noise and learning rate schedule recovers the data generating model. In contrast, gradient descent with or without Gaussian noise fails. While the results are novel and interesti...
train
[ "L9Y_gpxuSFx", "GvpjwkH8wNq", "x6WzLamj0Rz", "aIo1FpVXC4H", "fSJbcONL3ti", "yh24nauBw1", "t85NN1vJ95", "MNr1-ET5uw1", "klJv4DqW8t7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper demonstrates that for a particular model SGD with label noise and proper learning rate schedule recovers the (sparse) data generating model while GD with or without Gaussian noise does not. In the latter case, it fails because a stationary distribution is not achieved. The proofs in the appendix are qui...
[ 6, 6, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_crAi7c41xTh", "iclr_2021_crAi7c41xTh", "MNr1-ET5uw1", "klJv4DqW8t7", "L9Y_gpxuSFx", "L9Y_gpxuSFx", "GvpjwkH8wNq", "iclr_2021_crAi7c41xTh", "iclr_2021_crAi7c41xTh" ]
iclr_2021_jyDpkM9lntb
Multi-Task Multicriteria Hyperparameter Optimization
We present a new method for searching optimal hyperparameters among several tasks and several criteria. Multi-Task Multi Criteria method (MTMC) provides several Pareto-optimal solutions, among which one solution is selected with given criteria significance coefficients. The article begins with a mathematical formulatio...
withdrawn-rejected-submissions
The paper has been discussed by the reviewers that have acknowledged the rebuttal and the authors’ responses. However, the reviewers still had the following weaknesses and concerns (not solved post rebuttal): * Expensive procedure (e.g., exhaustive enumeration before finding Pareto frontier) * The experiments should b...
train
[ "wzApf5yUtG5", "Pvjvck4BJYF", "s5S6JAqRuDp", "uHZNeEbjjbD", "F390u8GB68R", "ZXcwwaU8rsD", "UvHfI0MAx7g", "E2UxdMQ-JAl", "y3EINYtDD1Z", "Zha6jGCMcp6", "FOBpG5pask", "ui3-TRi0OnZ", "V-h0kHyB6Np" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for these suggestions! We will make changes and improve our further research.", "Some pointers:\n\n- The introduction is very short and could be improved to provide some justification as to why the multi-task multiobjective problem is important, or an example of situation where this problem arises. \n\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Pvjvck4BJYF", "s5S6JAqRuDp", "uHZNeEbjjbD", "UvHfI0MAx7g", "ZXcwwaU8rsD", "Zha6jGCMcp6", "FOBpG5pask", "V-h0kHyB6Np", "ui3-TRi0OnZ", "iclr_2021_jyDpkM9lntb", "iclr_2021_jyDpkM9lntb", "iclr_2021_jyDpkM9lntb", "iclr_2021_jyDpkM9lntb" ]
iclr_2021_tEw4vEEhHjI
Fixing Asymptotic Uncertainty of Bayesian Neural Networks with Infinite ReLU Features
Approximate Bayesian methods can mitigate overconfidence in ReLU networks. However, far away from the training data, even Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident. We suggest to fix this by considering an infinite number of ReLU features over the input domain that ar...
withdrawn-rejected-submissions
The reviewers agree that the proposed method for reducing overconfidence in ReLU networks is novel and interesting. However, the presentation of the theoretical results is too informal and imprecise to warrant acceptance without a strong accompanying experimental section, which is unfortunately lacking. I therefore can...
train
[ "b4cw_57Wn2D", "Lpr8gVZHy8e", "uoCK_JRYx05", "pQ0oZJLS5Qx", "cqFbCSaGIrR", "OBrqdPA3H6M", "ap_0dA6k5M", "1-KP5_GnsC1" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "**SUMMARY**\nThe authors consider the issue of overconfidence in ReLU NN and BNNs, particularly for data that are far (in Euclidean distance) from the training data. They address this by modeling the residual (to the NN) in the latent space with a GP. The kernel for this GP is derived as the limit of infinitely ma...
[ 5, -1, 7, -1, -1, -1, -1, 5 ]
[ 4, -1, 3, -1, -1, -1, -1, 4 ]
[ "iclr_2021_tEw4vEEhHjI", "iclr_2021_tEw4vEEhHjI", "iclr_2021_tEw4vEEhHjI", "ap_0dA6k5M", "1-KP5_GnsC1", "b4cw_57Wn2D", "uoCK_JRYx05", "iclr_2021_tEw4vEEhHjI" ]
iclr_2021_HmAhqnu3qu
Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach
Graph Neural Networks (GNNs) have become the state-of-the-art method for many applications on graph structured data. GNNs are a framework for graph representation learning, where a model learns to generate low dimensional node embeddings that encapsulate structural and feature-related information. GNNs are usually trai...
withdrawn-rejected-submissions
This paper experimentally observes the negative transfer in Multi-task Graph Representation Learning and proposes to solve the negative transfer with a novel Meta-Learning based training procedure. However, the proposed method does not seem technically sound. There are some concerns about this paper: 1. The technique contr...
train
[ "ufWPIRwVzNk", "24Z0S87lO-D", "fvJ26-lAB6D", "V3kyFSvBDZo", "Rmf1QNOq5S", "th96Uu1O5h6", "x52RDBfnnLP", "4nMX3y5g8J" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the positive and insightful comments and we provide answers to his questions below.\n\n### Q1:\nThe initial version of the Related Work section was forced by space limitations, we now provide a revised version of the paper with an extended Related Work section.\n\n### Q2:\nWe had to limit...
[ -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "4nMX3y5g8J", "th96Uu1O5h6", "x52RDBfnnLP", "x52RDBfnnLP", "iclr_2021_HmAhqnu3qu", "iclr_2021_HmAhqnu3qu", "iclr_2021_HmAhqnu3qu", "iclr_2021_HmAhqnu3qu" ]
iclr_2021_fycxGdpCCmW
Hybrid Discriminative-Generative Training via Contrastive Learning
Contrastive learning and supervised learning have both seen significant progress and success. However, thus far they have largely been treated as two separate objectives, brought together only by having a shared neural network. In this paper we show that through the perspective of hybrid discriminative-generative train...
withdrawn-rejected-submissions
The paper proposes hybrid discriminative + generative training of energy-based models (HDGE) building on JEM. By connecting contrastive loss functions to generative loss, HDGE proposes an alternative loss function that reduces computational cost of training EBMs. The reviewers agree that this is an interesting idea an...
train
[ "MUfUK2TS8rU", "8erDUg2yEwh", "OoMhVjt2j6Q", "t4jT5vv1aMM", "WR_RQHAfTzE", "5uFxwk7Lkl2", "5EPTqE3Y8u_", "5NwFaKQ5y-Y", "OD5vdPJauN_" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "===============Update after rebuttal period================\nThe connection between the contrastive learning objective and discriminative learning is made via \"resemblance\". And the author claims the \"resemblance\" as a theoretical contribution, which the first reason I vote for a clear rejection. This issue ha...
[ 3, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2021_fycxGdpCCmW", "t4jT5vv1aMM", "MUfUK2TS8rU", "5EPTqE3Y8u_", "OD5vdPJauN_", "5NwFaKQ5y-Y", "iclr_2021_fycxGdpCCmW", "iclr_2021_fycxGdpCCmW", "iclr_2021_fycxGdpCCmW" ]
iclr_2021__QdvdkxOii6
Measuring Progress in Deep Reinforcement Learning Sample Efficiency
Sampled environment transitions are a critical input to deep reinforcement learning (DRL) algorithms. Current DRL benchmarks often allow for the cheap and easy generation of large amounts of samples such that perceived progress in DRL does not necessarily correspond to improved sample efficiency. As simulating real wor...
withdrawn-rejected-submissions
Although all reviewers agree that this is an interesting analysis of sample efficiency in Deep RL over the past few years, there is also a consensus that it is not enough material for an ICLR paper. I also share this sentiment, which motivates the "Reject" decision. This work could have been made stronger by reproducin...
train
[ "KluXSnbjrUK", "aDBthiOtQ52", "KtGQFHMp3m", "6YP9S9KkIZQ", "wfcOvIazy9D", "E4suj2UxrZc", "n6LTubH7AV0", "bQkvpLiFj0u", "QYo0CdbpmnF" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "##################################\n\nSummary: \nThis paper conducts a meta-analysis of the trend in sample efficiency in deep RL. The authors argue that this is an informative measure of the progress in the field, in addition to the usual metrics of reward for given tasks, as it is an important consideration when...
[ 4, 2, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, 5, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2021__QdvdkxOii6", "iclr_2021__QdvdkxOii6", "wfcOvIazy9D", "KluXSnbjrUK", "bQkvpLiFj0u", "aDBthiOtQ52", "QYo0CdbpmnF", "iclr_2021__QdvdkxOii6", "iclr_2021__QdvdkxOii6" ]
iclr_2021_kcqSDWySoy
Sobolev Training for the Neural Network Solutions of PDEs
Approximating the numerical solutions of partial differential equations (PDEs) using neural networks is a promising application of deep learning. The smooth architecture of a fully connected neural network is appropriate for finding the solutions of PDEs; the corresponding loss function can also be intuitively designed...
withdrawn-rejected-submissions
The paper solves a PDE using an additional penalty function between the derivatives of the function. On toy examples and two PDEs it is shown that these additional terms help. Pros: - The motivation is to include derivatives in the computation - Implementation and testing on several examples, including high...
train
[ "IVyktT5pH4_", "Yp9uS55TvuW", "U9igXRVTrrZ", "leudNwW2Yf", "YsPopXYS9C1", "HdCqMwmghWS", "kQ7Y3RDaLv", "84Nxj8qxoYm" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The idea of using neural networks to approximate the solutions of the pdes is very interesting, specially in high-dimensional setting where classical approaches fail to scale. Although there has been many efforts in this direction, there are still open venues to explore. One of the most important aspect is the cho...
[ 5, -1, -1, -1, -1, -1, 4, 7 ]
[ 4, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_kcqSDWySoy", "IVyktT5pH4_", "kQ7Y3RDaLv", "iclr_2021_kcqSDWySoy", "kQ7Y3RDaLv", "84Nxj8qxoYm", "iclr_2021_kcqSDWySoy", "iclr_2021_kcqSDWySoy" ]
iclr_2021_jC9G3ns6jH
Quantifying Statistical Significance of Neural Network Representation-Driven Hypotheses by Selective Inference
In the past few years, various approaches have been developed to explain and interpret deep neural network (DNN) representations, but it has been pointed out that these representations are sometimes unstable and not reproducible. In this paper, we interpret these representations as hypotheses driven by DNN (called DNN-...
withdrawn-rejected-submissions
Four reviewers evaluated your work and provided a detailed review with many suggestions. I also think that there is an interesting idea and encouraging results but there is a lack of numerical results and still some parts are still unclear and need to be polished. Consequently in its current form, the paper can not be...
train
[ "fX_rIKvYEW5", "Hk26x0ARXaP", "jFxPIzkDfkV", "MgePY0NUEqY", "2JVOHyqa_tM", "QCjELwTsYjO", "iNXHjVd4-m1", "bjeh-4wNBuD" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Summary\n\nIn this paper, the authors propose a novel approach for quantifying the statistical significance of binary masks predicted by a subclass of deep neural network (DNN) models for image segmentation problems.\n\nIn brief, the manuscript considers the particular setting where a model has been pretrained...
[ 6, -1, -1, -1, -1, 8, 6, 7 ]
[ 4, -1, -1, -1, -1, 2, 3, 2 ]
[ "iclr_2021_jC9G3ns6jH", "QCjELwTsYjO", "bjeh-4wNBuD", "iNXHjVd4-m1", "fX_rIKvYEW5", "iclr_2021_jC9G3ns6jH", "iclr_2021_jC9G3ns6jH", "iclr_2021_jC9G3ns6jH" ]
iclr_2021_65sCF5wmhpv
Learning to Observe with Reinforcement Learning
We consider a decision making problem where an autonomous agent decides on which actions to take based on the observations it collects from the environment. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position ...
withdrawn-rejected-submissions
The setting and the problem addressed by this paper has been considered as important and interesting to tackle with reinforcement learning. Yet, the reviewers expressed several concerns about this paper. Especially, the lack of comparison to state-of-the-art methods and to the standard visualization methods was a share...
train
[ "ORSDnE6UQjv", "ziJcM44wi3", "UoeyHudIvy", "Zmv4CxQmgrM", "T9z6ar4Gw1z", "mWjPey7DLau", "pJorKl0ypul", "1lnU4QKsol" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper aims at studying an optimized way of collecting samples from an environment, discarding the ones for which the accuracy of the observation is high. This way the agent focuses on collecting only the samples that improve the knowledge of the state space.\n\nThis paper could be presented better, as the mot...
[ 4, -1, -1, -1, -1, 6, 5, 4 ]
[ 5, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_65sCF5wmhpv", "ORSDnE6UQjv", "1lnU4QKsol", "mWjPey7DLau", "pJorKl0ypul", "iclr_2021_65sCF5wmhpv", "iclr_2021_65sCF5wmhpv", "iclr_2021_65sCF5wmhpv" ]
iclr_2021_KJSC_AsN14
Contrastive Learning with Stronger Augmentations
Representation learning has been greatly improved with the advance of contrastive learning methods with the performance being closer to their supervised learning counterparts. Those methods have greatly benefited from various data augmentations that are carefully designated to maintain their identities so that the imag...
withdrawn-rejected-submissions
This paper improves MoCo-based contrastive learning frameworks by enabling stronger views via an additional divergence loss to the standard (weaker) views. Three reviewers suggested acceptance, and one did rejection. Positive reviewers found the proposed method is novel and shows promising empirical results. However, a...
train
[ "Hm-DpRXBgra", "AMoGFJTWXDk", "SSwUdu97UE", "mDGdiHdJp1", "uyIQ8EqcJea", "bgO6ix5nfUh", "YcX4kTIX63", "CvZACf3a2SM", "yJdMEqg0O5", "SYTwselb9w", "EUgoVglDREZ", "oeNKgjl6Kv6" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks a lot for your kind comments!\n\nFirst, the CLSA indeed solidly beats the SOTA method SWAV that must consume much more time. For example, SWAV uses 173 hours (8 gpus) to train over 200 epochs to beat the MoCo V2, which is around 3 times of training time. Those methods cannot beat MoCo V2 if trained with the...
[ -1, 7, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "SSwUdu97UE", "iclr_2021_KJSC_AsN14", "YcX4kTIX63", "yJdMEqg0O5", "EUgoVglDREZ", "SYTwselb9w", "AMoGFJTWXDk", "oeNKgjl6Kv6", "iclr_2021_KJSC_AsN14", "iclr_2021_KJSC_AsN14", "iclr_2021_KJSC_AsN14", "iclr_2021_KJSC_AsN14" ]
iclr_2021_3c3EhwbKoXw
Spectral Synthesis for Satellite-to-Satellite Translation
Earth observing satellites carrying multi-spectral sensors are widely used to monitor the physical and biological states of the atmosphere, land, and oceans. These satellites have different vantage points above the earth and different spectral imaging bands resulting in inconsistent imagery from one to another. This pr...
withdrawn-rejected-submissions
This paper introduces an interesting application of VAE-GAN to the problem of Spectral Synthesis across satellite observations with some additional domain specific changes (new loss, ...). The introduction of a new dataset is also very interesting and can open the door for more methodological development in the communi...
train
[ "1UhJ11HgYbN", "_OqZf5bInLX", "xHrvG1qn_fz", "UgRNT83zTkE", "uMdZM4XjYIQ", "cUSvYJPUn9T" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for noting the importance and effectiveness of our reconstruction task as well as the comments which will improve the paper. The goal of this work is to find a solution to a new task of matching spectral bands across different sensors on a real world problem and benefits from the well underst...
[ -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, 4, 4, 5 ]
[ "UgRNT83zTkE", "uMdZM4XjYIQ", "cUSvYJPUn9T", "iclr_2021_3c3EhwbKoXw", "iclr_2021_3c3EhwbKoXw", "iclr_2021_3c3EhwbKoXw" ]
iclr_2021_bW9SYKHcZiz
Cross-Probe BERT for Efficient and Effective Cross-Modal Search
Inspired by the great success of BERT in NLP tasks, many text-vision BERT models emerged recently. Benefited from cross-modal attention, text-vision BERT models have achieved excellent performance in many language-vision tasks including text-image retrieval. Nevertheless, cross-modal attentions used in text-vision ...
withdrawn-rejected-submissions
After the rebuttal phase, all reviewers give borderline scores (leaning slightly positive, one of these noted in the comment rather than final review). While the reviewers recognize the potential merit of the contribution (efficiency while preserving effectiveness), support for acceptance is not sufficient. The major c...
val
[ "EZRaBHsqB6h", "B5Psz-alhcY", "4vTL0-WhKxI", "Y1hq2SxHZvA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The efficiency of current SOTA methods on image-text retrieval, especially those visual-language cross-modality pre-training models, has always been a critical problem compared with traditional joint-embedding models. This paper aims to solve this problem by integrating the efficiency of two-tower models in lower ...
[ 6, 6, 5, 6 ]
[ 4, 4, 5, 3 ]
[ "iclr_2021_bW9SYKHcZiz", "iclr_2021_bW9SYKHcZiz", "iclr_2021_bW9SYKHcZiz", "iclr_2021_bW9SYKHcZiz" ]
iclr_2021_WkKsWwxnAkt
Subspace Clustering via Robust Self-Supervised Convolutional Neural Network
Subspace clustering (SC) approaches based on the self-representation model achieved encouraging performance when compared with the clustering algorithms that rely on proximity measures between data points. However, they still face serious limitations in real-world applications. One limitation relates to the linearity a...
withdrawn-rejected-submissions
The paper proposes a robust formulation of Deep Subspace Clustering (DSC) based on the correntropy induced metric (CIM) of the error. All three reviewers recommend rejection. Their major critiques are limited novelty, insufficient experiments and similar performance to non-deep methods. The rebuttal highlights that the...
train
[ "VjLUppqKm0D", "H-mQvOtDEXC", "g9IU78GS-B", "BRpTTK26L7A", "j97wq0newWC", "8cz22Zh96v-" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose to add a correntropy induced metric (CIM) loss term to improve the robustness of the self-supervised convolutional subspace clustering network (${\\rm S}^2$ConvSCN) to data corruption. The authors show that in a truly unsupervised training environment, the proposed robust ${\\rm S}^2$ConvSCN me...
[ 5, -1, -1, -1, 3, 5 ]
[ 4, -1, -1, -1, 4, 5 ]
[ "iclr_2021_WkKsWwxnAkt", "8cz22Zh96v-", "VjLUppqKm0D", "j97wq0newWC", "iclr_2021_WkKsWwxnAkt", "iclr_2021_WkKsWwxnAkt" ]
iclr_2021_inBTt_wSv0
Exploring Transferability of Perturbations in Deep Reinforcement Learning
The use of Deep Neural Networks (DNNs) as function approximators has led to striking progress for reinforcement learning algorithms and applications. At the same time, deep reinforcement learning agents have inherited the vulnerability of DNNs to imperceptible adversarial perturbations to their inputs. Prior work on ad...
withdrawn-rejected-submissions
The work studies the transferability of perturbations/adversarial attacks on DRL agents. As a way to mitigate the cost of generating an individual perturbation for each state in each episode, the authors proposed several variants that reuse the same perturbation across different states and episodes. While reviewers r...
train
[ "ULPv5ro1NLa", "GawHVnDDcXr", "gRWyZ6B_7pw", "UDcDMrN0hM", "yDEhZ5f9BZ", "wnWcJDQ3lec", "ZnoJkcBkH4w", "6WNkpwRiKBb", "0QYXZtyvuXl", "JDx404flSD6", "MLhGzcpn98w", "RJ-vK3cuXL2", "IFRmg5-1lO8", "gA_g51kk5m", "heTbsQJLYph", "faV4pSOhfU", "-erFiYVdN33" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors studied how perturbations on states would affect the performance of deep reinforcement learning. They defined different types of perturbations, like different perturbations for each state, or apply the perturbations on the initial state on every state. The authors tested these perturbations in some exi...
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_inBTt_wSv0", "faV4pSOhfU", "heTbsQJLYph", "ULPv5ro1NLa", "iclr_2021_inBTt_wSv0", "IFRmg5-1lO8", "RJ-vK3cuXL2", "gA_g51kk5m", "JDx404flSD6", "heTbsQJLYph", "heTbsQJLYph", "faV4pSOhfU", "-erFiYVdN33", "ULPv5ro1NLa", "iclr_2021_inBTt_wSv0", "iclr_2021_inBTt_wSv0", "iclr_2021_...
iclr_2021_G0VouKj9HUG
On Learning Read-once DNFs With Neural Networks
Learning functions over Boolean variables is a fundamental problem in machine learning. But not much is known about learning such functions by neural networks. Because learning these functions in the distribution-free setting is NP-hard, they are unlikely to be efficiently learnable by networks in this case. However, a...
withdrawn-rejected-submissions
This paper studies how two-layer neural networks can learn DNFs. The paper provides some theoretical analysis together with empirical evidence. The direction of analyzing how neural networks learn certain concept classes is definitely extremely important, and the authors do make some progress towards this direction. H...
train
[ "Q9QIQFHLkdt", "I6jEJwUWL10", "fLkn8_6JaSd", "J6GrpfDLUCz", "WA0ds_3555F", "7NrvDFCVqZq" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the feedback. We address your concerns below.\n\n###\n\n“The assumption that the whole set of instances of the true read-once DNF is given is too strong. It would be much nicer, given a partial set of instances S \\subset X, one can learn a consistent DNF by using neural networks. Then, previous PAC ...
[ -1, -1, -1, 5, 7, 4 ]
[ -1, -1, -1, 4, 2, 3 ]
[ "7NrvDFCVqZq", "J6GrpfDLUCz", "WA0ds_3555F", "iclr_2021_G0VouKj9HUG", "iclr_2021_G0VouKj9HUG", "iclr_2021_G0VouKj9HUG" ]
iclr_2021_HC5VgCHtU10
Disentangling style and content for low resource video domain adaptation: a case study on keystroke inference attacks
Keystroke inference attacks are a form of side-channel attacks in which an attacker leverages various techniques to recover a user’s keystrokes as she inputs information into some display (for example, while sending a text message or entering her PIN). Typically, these attacks leverage machine learning approaches, but...
withdrawn-rejected-submissions
The paper proposes a new framework for low-resource video domain adaptation leveraging synthetic data with supervised disentangled learning for tackling keystroke inference attacks. The paper received contrasting reviews, 2 positive and 2 negative, and the overall confidence of the reviewers is not so high. Ov...
train
[ "_KcmoPlQNvp", "d2Z2JoLJNA", "sJ5ccI0VEn7", "Km70upRmG74", "ii84eau3gQS", "sk0JGYOlhS6", "QSZ5_Eh0wGP", "f5dew310pmD", "owlAvjgqpHE", "He4LJNuj5hv", "cAhFlp21v0", "MKBy5E7ox4N", "XR9wE41F_XS", "sFGxmKzBmQc", "upp-e0CH4DJ", "u6MXaiD89c9", "qPmbbxGNjdC", "BSs9lcKcNdJ" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper is about keystroke inference attacks and proposes a method to assess the threat of deep learning based approaches\nwhen only limited real-life data are available. To this end, it is introduced a video domain adaptation technique\nthat is able to generate data into separate style and content representati...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "iclr_2021_HC5VgCHtU10", "Km70upRmG74", "ii84eau3gQS", "XR9wE41F_XS", "MKBy5E7ox4N", "iclr_2021_HC5VgCHtU10", "f5dew310pmD", "upp-e0CH4DJ", "qPmbbxGNjdC", "qPmbbxGNjdC", "qPmbbxGNjdC", "_KcmoPlQNvp", "_KcmoPlQNvp", "BSs9lcKcNdJ", "u6MXaiD89c9", "iclr_2021_HC5VgCHtU10", "iclr_2021_HC5...
iclr_2021_pAq1h9sQhqd
Stochastic Canonical Correlation Analysis: A Riemannian Approach
We present an efficient stochastic algorithm (RSG+) for canonical correlation analysis (CCA) derived via a differential geometric perspective of the underlying optimization task. We show that exploiting the Riemannian structure of the problem reveals natural strategies for modified forms of manifold stochastic gradien...
withdrawn-rejected-submissions
This paper gives a new algorithm for the CCA problem. The main idea of the new algorithm is to reformulate the matrices in the CCA problem as a product of three matrices: one orthonormal matrix, one rotation and one upper-diagonal matrix. The algorithm then performs Riemannian gradient descent on these components. The p...
train
[ "0_9Ym4E4Hx6", "1mDMWe9VUve", "r9GK317rkDD", "3TB5BjWMuR_", "9cxwZ5Ouedz", "qyrddLnfRvx", "igA9vszSRsJ", "01DOtEv948y", "0zW7Tn_sbP", "p3VPG-r0j1B", "bUrYiJNWEff", "wfYblg_7hRr", "yMX_CbHRtO7", "D_YudmtLrJr" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. by `\"asymptotically converge\": does it mean when the number of samples goes to infinity?\n\nAns: Here “asymptotic” is with respect to the number of steps of the Riemannian gradient descent procedure (Bonnabel, 2013 at https://arxiv.org/pdf/1111.5280.pdf), see (4) and Theorem 1. This analysis style has also b...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 4 ]
[ "1mDMWe9VUve", "r9GK317rkDD", "9cxwZ5Ouedz", "qyrddLnfRvx", "igA9vszSRsJ", "wfYblg_7hRr", "01DOtEv948y", "yMX_CbHRtO7", "D_YudmtLrJr", "bUrYiJNWEff", "iclr_2021_pAq1h9sQhqd", "iclr_2021_pAq1h9sQhqd", "iclr_2021_pAq1h9sQhqd", "iclr_2021_pAq1h9sQhqd" ]
iclr_2021_qiydAcw6Re
Geometry of Program Synthesis
We present a new perspective on program synthesis in which programs may be identified with singularities of analytic functions. As an example, Turing machines are synthesised from input-output examples by propagating uncertainty through a smooth relaxation of a universal Turing machine. The posterior distribution over...
withdrawn-rejected-submissions
This paper is a bad fit for ICLR and the authors may consider submitting to more theoretical venues. This paper studies algebraic geometry (an area unfamiliar to most ICLR readers) of program synthesis, with the "hope that algebraic geometry can assist in developing the next generation of synthesis machines." Unfortuna...
train
[ "MwRJAoxb86o", "9Eg93M5KXY", "zvgDLS_Om__", "VnmjOxTBTzy", "4BvKpF0Dxu", "PucRNxSwL7H", "-JEuvp6AckQ", "lIOxjjOZii_" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "- quality : good\n- clarity : very good. I was worried about reading this due to all the math symbols, but it turns out the big pictures are clearly explained.\n- originality : okay\n- significance : not sure\n\nI could not follow the more technical aspects as I'm not a theory person (although I did fail algebraic...
[ 7, -1, -1, -1, -1, -1, 5, 4 ]
[ 2, -1, -1, -1, -1, -1, 1, 1 ]
[ "iclr_2021_qiydAcw6Re", "zvgDLS_Om__", "MwRJAoxb86o", "-JEuvp6AckQ", "lIOxjjOZii_", "iclr_2021_qiydAcw6Re", "iclr_2021_qiydAcw6Re", "iclr_2021_qiydAcw6Re" ]
iclr_2021_tnq_O52RVbR
SHADOWCAST: Controllable Graph Generation with Explainability
We introduce the problem of explaining graph generation, formulated as controlling the generative process to produce desired graphs with explainable structures. By directing this generative process, we can explain the observed outcomes. We propose SHADOWCAST, a controllable generative model capable of mimicking network...
withdrawn-rejected-submissions
This paper proposed a conditional graph generative model closely following the unconditional generative model NetGAN and extending it by adding conditioning on extra information available for graph generation (“shadow” node attributes as the authors call it). Overall this is an extension over NetGAN and gives this cla...
train
[ "tqO0iqqk3Kl", "_CMgm1CAinX", "cuQN0JatYAL", "QTBPMYYSXSV", "hlHsU3h8a1", "3_U4ltjbJco", "T97Qo4W-U-I", "zmJjXVJNrVF", "LlWHDeC9cL1", "20DP2kTpDH9" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for all the constructive comments and suggestions. We have made edits to the paper to address all concerns. Significant changes include: \n- In section 1, we revised our wording to clarify early in the paper, newly introduced concepts further.\n- In section 2.1, we revised the problem formul...
[ -1, -1, -1, -1, -1, -1, 5, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "iclr_2021_tnq_O52RVbR", "hlHsU3h8a1", "20DP2kTpDH9", "LlWHDeC9cL1", "zmJjXVJNrVF", "T97Qo4W-U-I", "iclr_2021_tnq_O52RVbR", "iclr_2021_tnq_O52RVbR", "iclr_2021_tnq_O52RVbR", "iclr_2021_tnq_O52RVbR" ]
iclr_2021_pHsHaXAv8m-
Towards Principled Representation Learning for Entity Alignment
Knowledge graph (KG) representation learning for entity alignment has recently received great attention. Compared with conventional methods, these embedding-based ones are considered more robust for highly-heterogeneous and cross-lingual entity alignment scenarios as they do not rely on the quality of machine trans...
withdrawn-rejected-submissions
The authors study the problem of augmenting embedding-based entity alignment in knowledge graphs (KG) through the use of joint alignment with deduced neural ontologies (more specifically, alignment of the KG 'neural' axioms). Motivated by the observation that the representation between two potentially aligned entities ...
train
[ "_ve0kRP5X0r", "hJ9tD0kci63", "L99lBNns_r", "Jtw3QXPvLE", "cFz-g6WZat", "Up7lIdhzkvV", "VE3TMz0FBs", "dLvFJt30T9V" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThe paper proposes NeoEA, an approach that further constrains KG embedding with ontology knowledge. The paper first tries to summarize the existing embedding-based entity alignment methods, stating that most of the methods choose TransE as scoring functions. But their embedding features are not aligned ...
[ 5, -1, -1, -1, -1, 5, 5, 8 ]
[ 3, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_pHsHaXAv8m-", "VE3TMz0FBs", "_ve0kRP5X0r", "Up7lIdhzkvV", "dLvFJt30T9V", "iclr_2021_pHsHaXAv8m-", "iclr_2021_pHsHaXAv8m-", "iclr_2021_pHsHaXAv8m-" ]
iclr_2021_lEZIPgMIB1
Parametric UMAP: learning embeddings with deep neural networks for representation and semi-supervised learning
We propose Parametric UMAP, a parametric variation of the UMAP (Uniform Manifold Approximation and Projection) algorithm. UMAP is a non-parametric graph-based dimensionality reduction algorithm using applied Riemannian geometry and algebraic topology to find low-dimensional embeddings of structured data. The UMAP algo...
withdrawn-rejected-submissions
We thank the authors (and reviewers) for engaging in a detailed and constructive discussion, and providing a revised version of the paper after the initial round of reviews. Regarding quality, the work is technically correct and the amount of experiments significant. However, as highlighted by reviewers 2 and 3, some ...
test
[ "DjM0EYniFXZ", "3mEHJgWiScp", "HmUhlp6sKIF", "Dpc1NubX7VH", "xLW_mKD7Kbh", "gxT_wJ1aXZT", "vHfphtzsEBJ", "tolgBBR0NTn", "A7E6273fju", "0E4pWMu19G_", "u6nVLhzP70D", "Zc7S_WMdbQP", "9AR-gYG8M7n", "_bblTaVb8E", "BOUSLiq_mMz", "v_HvmO6-He9", "MFDazM0QBQN", "u7KiRT8CYtC", "tthvXjH7Cfv...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ "The authors propose a parametric version of UMAP by replacing sampling embeddings in the optimization of UMAP with directly learning weights of a neural network. The paper is very well and clearly written, but I have several significant concerns:\n\n1. I don't see significant methodological novelty. Replacing embe...
[ 7, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9 ]
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_lEZIPgMIB1", "iclr_2021_lEZIPgMIB1", "iclr_2021_lEZIPgMIB1", "gxT_wJ1aXZT", "A7E6273fju", "vHfphtzsEBJ", "q6rwPKwFpo4", "bEY_TslxSZm", "xJrVwvwTfWg", "HmUhlp6sKIF", "3mEHJgWiScp", "v_HvmO6-He9", "v_HvmO6-He9", "v_HvmO6-He9", "DjM0EYniFXZ", "oBKhXEKPsCz", "BOUSLiq_mMz", "...
iclr_2021_wabe-NE8-AX
NNGeometry: Easy and Fast Fisher Information Matrices and Neural Tangent Kernels in PyTorch
Fisher Information Matrices (FIM) and Neural Tangent Kernels (NTK) are useful tools in a number of diverse applications related to neural networks. Yet these theoretical tools are often difficult to implement using current libraries for practical size networks, given that they require per-example gradients, and a large...
withdrawn-rejected-submissions
This paper provides a high-level API for working with Neural Tangent Kernels (NTK) and Fisher Information Matrices (FIM). This is an implementation paper, but such concepts are clearly useful in many tasks. However, such methods are already available in many in-house codebases (almost every paper on FIM / NTK uses such methods). I w...
train
[ "Sl-oJ2G56Lz", "pABlBOWWy0", "C3d8TeR4yhs", "ot4h2jLrgwD", "3H_ct-drGc0", "3YZMw2FHX8", "nJtLQrEh2U3", "uO_-1y5Me2", "K91eoFt6o3K" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "### Summary\n\nThe Fisher information matrix and the neural tangent kernel matrix have been used in several recent papers to provide insight into deep neural networks, but operations involving these matrices have so far been less well supported in frameworks such as Tensorflow and PyTorch. The current paper descr...
[ 7, 4, -1, -1, -1, -1, -1, 5, 4 ]
[ 3, 4, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2021_wabe-NE8-AX", "iclr_2021_wabe-NE8-AX", "ot4h2jLrgwD", "pABlBOWWy0", "uO_-1y5Me2", "K91eoFt6o3K", "Sl-oJ2G56Lz", "iclr_2021_wabe-NE8-AX", "iclr_2021_wabe-NE8-AX" ]
iclr_2021_N9oPAFcuYWX
Understanding and Mitigating Accuracy Disparity in Regression
With the widespread deployment of large-scale prediction systems in high-stakes domains, e.g., face recognition, criminal justice, etc., disparity on prediction accuracy between different demographic subgroups has called for fundamental understanding on the source of such disparity and algorithmic intervention to mitig...
withdrawn-rejected-submissions
This paper considers the problem of accuracy disparity in regression for the case of binary sensitive attributes. It provides bounds for accuracy disparity and introduces two methods to enforce this criterion based on representation learning. The reviews are in agreement that the paper is generally clear and well wr...
train
[ "yM07hOhSUYS", "fpfXg4uCvX", "Qnb5tiA-fmY", "BdN4qdVDGq", "JFtPouN8wI", "66IVNk4tE39", "Ww1weCPtFwE", "aCB34rueywO", "oHw6PMJPZ_h", "AX-mIv40MlD" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. Justification of our theoretical results.\n\nThanks for your insightful comments on our geometric interpretation. The provided counter example could be covered by Theorem 3.3: When the hypothesis outputs a constant (the second term in the upper bound in Theorem 3.3 then equals to zero) and $\\text{Var}[Y|A]$ is...
[ -1, -1, -1, -1, -1, -1, 4, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "Ww1weCPtFwE", "aCB34rueywO", "AX-mIv40MlD", "iclr_2021_N9oPAFcuYWX", "yM07hOhSUYS", "oHw6PMJPZ_h", "iclr_2021_N9oPAFcuYWX", "iclr_2021_N9oPAFcuYWX", "iclr_2021_N9oPAFcuYWX", "iclr_2021_N9oPAFcuYWX" ]
iclr_2021_C5kn825mU19
A Coach-Player Framework for Dynamic Team Composition
In real-world multi-agent teams, agents with different capabilities may join or leave "on the fly" without altering the team's overarching goals. Coordinating teams with such dynamic composition remains a challenging problem: the optimal team strategy may vary with its composition. Inspired by real-world team sports, w...
withdrawn-rejected-submissions
This paper proposes an approach for coordinating teams with dynamic composition consisting of an attention mechanism, regularization and communication. The clarity of the paper is currently low seemingly due to the conflated message of the multiple parts of the framework. Improvements to the text via the suggested edit...
train
[ "7HV6wwG6dKc", "7IsozW6hdTn", "6Yym4AEzIC", "NpZqqDILbQj", "Wve3nHAQNva", "UWohJCKVrUo", "9LYsN51laG", "6eVTmfFr2zu", "7twWau2QY52" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: \nThe authors propose a coach-player framework for dynamic team composition of dynamic and heterogeneous agents based on deep Q learning with an attention mechanism and a variational objective to regularize the learning. The authors design an adaptive communication strategy to minimize communication from ...
[ 6, -1, -1, -1, -1, -1, 7, 4, 5 ]
[ 3, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "iclr_2021_C5kn825mU19", "iclr_2021_C5kn825mU19", "9LYsN51laG", "7HV6wwG6dKc", "6eVTmfFr2zu", "7twWau2QY52", "iclr_2021_C5kn825mU19", "iclr_2021_C5kn825mU19", "iclr_2021_C5kn825mU19" ]
iclr_2021_g1KmTQhOhag
Memory Representation in Transformer
Transformer-based models have achieved state-of-the-art results in many natural language processing tasks. The self-attention architecture allows transformer to combine information from all elements of a sequence into context-aware representations. However, information about the context is stored mostly in the same ele...
withdrawn-rejected-submissions
The paper studies three kinds of memory-augmented Transformers, focusing on one (the MemTransformer, which adds [MEM] tokens to a document). This is a nice, clean extension of Transformers and a topic well worth investigating. Unfortunately, the experimental results were considered unconvincing: - The baselines were...
train
[ "_0S1DsO1_kU", "eqhju1iena", "6eY9RM0P3Zz", "5odj3wkP4Lc", "EkBx88idgqh", "TsRoXzR9zdj", "eIGJfSohl18", "uJm31kKvY0t", "sfmSW-_b85H", "EU7IPF3ynXx", "Iv5wHNNoCDh", "HPWpv_MHQqQ", "ZQdtZPw9n3", "tJgLAeTKe5Q", "xoXUSLTLpkO", "RtfA1wbIIq8", "2TdaDWGUP77", "3glSHz0IRDb", "7b_VWzn-bf"...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "...
[ "> Summary: The paper proposes to study three formulations (MemTransformer, MemCtrl and MemBottleneck) of memory-augmented self-attention transformers, and investigate the influence of adding memory tokens to the model to its overall performance. The authors claim via some experiments on MT and LM that memory augme...
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_g1KmTQhOhag", "iclr_2021_g1KmTQhOhag", "eIGJfSohl18", "EkBx88idgqh", "HPWpv_MHQqQ", "uJm31kKvY0t", "EU7IPF3ynXx", "Iv5wHNNoCDh", "iclr_2021_g1KmTQhOhag", "tJgLAeTKe5Q", "RtfA1wbIIq8", "2TdaDWGUP77", "3glSHz0IRDb", "sfmSW-_b85H", "7b_VWzn-bf", "_0S1DsO1_kU", "iclr_2021_g1Km...
iclr_2021_jMc7DlflrMC
Density Constrained Reinforcement Learning
Constrained reinforcement learning (CRL) plays an important role in solving safety-critical and resource-limited tasks. However, existing methods typically rely on tuning reward or cost parameters to encode the constraints, which can be tedious and tend to not generalize well. Instead of building sophisticated cost fun...
withdrawn-rejected-submissions
The reviewers raised several theoretical and empirical questions about the paper. During the rebuttals, the authors seem to successfully address the experimental issues, in particular those raised by Reviewers 1 and 2. However, the theoretical concerns have mainly remained unanswered. Reviewer 2 has a major concern ab...
train
[ "IL50-sVYa-l", "uRQvqVD9asX", "uC0jLOjenei", "9urJ3py5rUc", "uXb6kaLlsfl", "PizzFMbxFJM", "nrODnEsW24_", "vLIPBLQBKta", "GLCvvFWUD0T", "EGDWExEankf", "r8nN9z_gEJh", "Cye82ZdLTg2", "BDbndNNhqXJ", "PfAqa48tfX", "bgmwfYovPyW", "2XLWwTZA6e5" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We are sincerely grateful to the reviewers for their valuable comments and suggestions. We also appreciate that 3 out of 4 reviewers gave us very positive evaluations. One reviewer also raised the score during rebuttal after we added more experiments, which “significantly strengths the empirical evaluation”. We un...
[ -1, -1, -1, -1, -1, -1, 7, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "iclr_2021_jMc7DlflrMC", "uC0jLOjenei", "r8nN9z_gEJh", "PizzFMbxFJM", "PizzFMbxFJM", "BDbndNNhqXJ", "iclr_2021_jMc7DlflrMC", "Cye82ZdLTg2", "EGDWExEankf", "PfAqa48tfX", "2XLWwTZA6e5", "nrODnEsW24_", "bgmwfYovPyW", "iclr_2021_jMc7DlflrMC", "iclr_2021_jMc7DlflrMC", "iclr_2021_jMc7DlflrMC...
iclr_2021_LhAqAxwH5cn
Robust Loss Functions for Complementary Labels Learning
In ordinary-label learning, the correct label is given to each training sample. Similarly, a complementary label is also provided for each training sample in complementary-label learning. A complementary label indicates a class that the example does not belong to. Robust learning of classifiers has been investigated fr...
withdrawn-rejected-submissions
the paper undoubtedly tackles an interesting problem in the mainstream of learning with partial / unknown / weak / noisy / complementary labels. The authors have had a set of constructive suggestions and questions from the reviewers (and external comments), some positive, some negative. I find it a bit unsettling that ...
train
[ "To1JEdyxFiX", "WcgomHxZFdu", "6T77RkqCXsr", "4fW94VG_Hqa", "KrG1ARNRmMV", "mrd7htXUnmA", "2GIE2YLKvEd", "1z6RJuECzY-", "IdSf90F0X_D", "V_OExJ8mev5", "Oe0b3Sobol" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. Answer: Thank you very much for your suggestion. We are studying the two methods in [A] and [B], and considering a more effective model to improve the performance of complementary learning. \nWe have updated the conclusion: More methods should be studied to improve the performance of complementary learning in o...
[ -1, -1, -1, 5, -1, -1, -1, -1, 3, 7, 7 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, 4, 3, 2 ]
[ "Oe0b3Sobol", "1z6RJuECzY-", "mrd7htXUnmA", "iclr_2021_LhAqAxwH5cn", "IdSf90F0X_D", "4fW94VG_Hqa", "V_OExJ8mev5", "iclr_2021_LhAqAxwH5cn", "iclr_2021_LhAqAxwH5cn", "iclr_2021_LhAqAxwH5cn", "iclr_2021_LhAqAxwH5cn" ]
iclr_2021_87Ti3dufEv
A Half-Space Stochastic Projected Gradient Method for Group Sparsity Regularization
Optimizing with group sparsity is significant in enhancing model interpretability in machine learning applications, e.g., feature selection, compressed sensing and model compression. However, for large-scale stochastic training problems, effective group-sparsity exploration is typically hard to achieve. Particularl...
withdrawn-rejected-submissions
The paper received four borderline reviews. Overall, the manuscript has improved after the rebuttal (in particular, an issue in the convergence proof has been fixed), and a reviewer has increased his score to borderline accept. Yet, the paper did not convince the reviewers that the contribution was significant enough ...
train
[ "3On7Zkjmubz", "dC3DsRD8B0Y", "nBP-cNBzWLr", "_qcfnkDoMK9", "7e0ei5NYzYk", "9DEdc6S30ro", "RZa6tSJO8LA", "Pl6CPBY5yb", "Fe6PQ-Wk5Ty", "ONutkfltc7", "IQMqSSD2-Pr" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[Summary]\nThis paper proposes a new method called Half-space Stochastic Projected Gradient (HSPG) to find a group sparse solution of regularized finite-sum problems. Theoretical analysis tries to show the sparsity identification guarantees. In experiments, the effectiveness of HSPG was verified on the classificat...
[ 6, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "iclr_2021_87Ti3dufEv", "9DEdc6S30ro", "_qcfnkDoMK9", "IQMqSSD2-Pr", "Fe6PQ-Wk5Ty", "3On7Zkjmubz", "Pl6CPBY5yb", "ONutkfltc7", "iclr_2021_87Ti3dufEv", "iclr_2021_87Ti3dufEv", "iclr_2021_87Ti3dufEv" ]
iclr_2021_K_ETaDx3Iv
FLAGNet : Feature Label based Automatic Generation Network for symbolic music
The technology for automatic music generation has been very actively studied in recent years. However, in almost all of these studies, handling domain knowledge of music was omitted or considered a difficult task. In particular, research that analyzes and utilizes the characteristics of each bar of music is very rare, even th...
withdrawn-rejected-submissions
All Reviewers and myself agree that the paper presents several major issues that require important rethinking of the research done, as well as a full rewriting of the manuscript. Hence, my recommendation is to REJECT the paper. As a brief summary, I highlight below some pros and cons that arose during the review and me...
train
[ "zCz69t4XCG5", "t4Hl9Y5iz8h", "WjGiasd-xZW", "gmZ43F0JFmF", "H1yIxvtIIzr", "xzzOtR9YdBd", "Jn8iXvTmMsC", "Gw6pG_gFZng", "PcL5JBWDm0p", "utfmOZjct9p", "JK4-dX4Akkm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response. I'm not sure my concerns here are the kind which can be resolved by minor changes. I recommend you think more about how your work builds on existing music generation schemes before submitting again. I also recommend you reach out to a music theorist or musicologist and have them look o...
[ -1, 2, -1, 3, -1, -1, -1, -1, -1, 2, 3 ]
[ -1, 5, -1, 5, -1, -1, -1, -1, -1, 5, 5 ]
[ "xzzOtR9YdBd", "iclr_2021_K_ETaDx3Iv", "PcL5JBWDm0p", "iclr_2021_K_ETaDx3Iv", "Gw6pG_gFZng", "utfmOZjct9p", "JK4-dX4Akkm", "gmZ43F0JFmF", "t4Hl9Y5iz8h", "iclr_2021_K_ETaDx3Iv", "iclr_2021_K_ETaDx3Iv" ]
iclr_2021_D1E1h-K3jso
Learning from Noisy Data with Robust Representation Learning
Learning from noisy data has attracted much attention, where most methods focus on label noise. In this work, we propose a new framework which simultaneously addresses three types of noise commonly seen in real-world data: label noise, out-of-distribution input, and input corruption. In contrast to most existing method...
withdrawn-rejected-submissions
This paper proposes a framework to train a discriminative model robust against (i) label noise, (ii) out-of-distribution input, and (iii) input corruption. To tackle these problems, a complex model is proposed that combines several existing models including InfoNCE-style contrastive learning, prototypical contrastive l...
train
[ "SxFvTUl4uRx", "qm79EGHvQ2_", "cnSpEnOcavL", "Yaww4DN811", "Q3qT4E3ICK5", "3CIznOXLPeH", "f9WRgPI_Xp8", "HwPVdjHOvMc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The authors have conducted a range of experiments to validate the performance of the proposed method. And according to the results, it is appealing that the proposed method achieves relative improvement compared to current SOTAs . However, there are some concerns as follows.\n\n1) It is quite ad-hoc about the prop...
[ 6, 7, 6, -1, -1, -1, -1, 6 ]
[ 4, 4, 4, -1, -1, -1, -1, 3 ]
[ "iclr_2021_D1E1h-K3jso", "iclr_2021_D1E1h-K3jso", "iclr_2021_D1E1h-K3jso", "SxFvTUl4uRx", "cnSpEnOcavL", "HwPVdjHOvMc", "qm79EGHvQ2_", "iclr_2021_D1E1h-K3jso" ]
iclr_2021_iKXWZru0DS
Attention-driven Robotic Manipulation
Despite the success of reinforcement learning methods, they have yet to have their breakthrough moment when applied to a broad range of robotic manipulation tasks. This is partly due to the fact that reinforcement learning algorithms are notoriously difficult and time consuming to train, which is exacerbated when train...
withdrawn-rejected-submissions
This paper proposes a general manipulation algorithm for tasks that have sparse rewards. The algorithm uses a Q-attention to extract interesting pixel locations with an explicit attention module. A data augmentation method is also proposed to generalize expert demonstrations. While the proposed method and experiments...
train
[ "yGu6FXTgVkP", "kHSTThcLRxh", "bsrZnBmrJpZ", "dlQIeP4IrJJ", "sBe0YsuPQij", "VnTfnM8KF27", "ptOr0bqsw23", "Fj_dDx8nPAb" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary: \n\nThis work focuses on sparse-reward robotic manipulation tasks from image and point cloud inputs, given a few demonstrations, and proposes an algorithm that consists of a Q-attention module and a confidence-aware critic. The Q-attention module is an RL agent, which takes image and point cloud inputs wi...
[ 4, -1, -1, -1, -1, -1, 7, 4 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_iKXWZru0DS", "iclr_2021_iKXWZru0DS", "ptOr0bqsw23", "yGu6FXTgVkP", "VnTfnM8KF27", "Fj_dDx8nPAb", "iclr_2021_iKXWZru0DS", "iclr_2021_iKXWZru0DS" ]
iclr_2021_LLoe0U9ShkN
Global inducing point variational posteriors for Bayesian neural networks and deep Gaussian processes
We derive the optimal approximate posterior over the top-layer weights in a Bayesian neural network for regression, and show that it exhibits strong dependencies on the lower-layer weights. We adapt this result to develop a correlated approximate posterior over the weights at all layers in a Bayesian neural network. We...
withdrawn-rejected-submissions
The paper proposes a variational inference method for Bayesian neural networks where the approximate posterior models the correlations between the weights at all layers, using the concept of “global” inducing points. Some concerns raised by the reviewers regarding how global inducing points allow us to capture uncert...
train
[ "wQsHS2wQOb2", "B0t-KJTUrnT", "uyn4jAraOyU", "xFUG3YADLiW", "E2k5OlmR8ok", "xEi6lDpHqnB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposed a new way of doing Bayesian deep learning in which the optimal conditional posterior for the last layer weights could be reached if the inducing input $Z_0$ is chosen to be the input data $X$ and the pseudo-observation for the last layer $V^L$ is the observation $Y$. Instead of factorizing the i...
[ 7, -1, -1, -1, 7, 6 ]
[ 3, -1, -1, -1, 3, 3 ]
[ "iclr_2021_LLoe0U9ShkN", "wQsHS2wQOb2", "E2k5OlmR8ok", "xEi6lDpHqnB", "iclr_2021_LLoe0U9ShkN", "iclr_2021_LLoe0U9ShkN" ]
iclr_2021_HMEiDPTOTmY
Later Span Adaptation for Language Understanding
Pre-trained contextualized language models (PrLMs) broadly use fine-grained tokens (words or sub-words) as the minimal linguistic unit in the pre-training phase. Introducing span-level information in pre-training has been shown capable of further enhancing PrLMs. However, such methods require enormous resources and lack adap...
withdrawn-rejected-submissions
The paper proposes to combine the span-level information into a phrase-level representation in the fine-tuning phase for pre-trained language models. The phrases are pre-defined in a dictionary. Experiments show improvements in various downstream tasks in the GLUE benchmark. It's a borderline paper. Various concer...
train
[ "Vh-GsVS0GyA", "1_KqmZPQR0V", "FUKq4Leem-t", "lo9isb8gzkU", "0z6KDEnImIL", "lE-u0jVX3aG", "NbG-UyXv4aD", "k5WqL5d-eHM", "iBuyelfEcm", "2u6trT873YO" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewers and AC,\nWe sincerely thank the reviewers for their detailed comments. Reviewers (R2) noted that our method is novel for (a) incorporating span information only during fine-tuning and (b) developing a new way to segment sentences. At the same time, our idea is recognized as well motivated and easy t...
[ -1, -1, -1, -1, -1, -1, 6, 6, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "iclr_2021_HMEiDPTOTmY", "iBuyelfEcm", "NbG-UyXv4aD", "2u6trT873YO", "k5WqL5d-eHM", "iclr_2021_HMEiDPTOTmY", "iclr_2021_HMEiDPTOTmY", "iclr_2021_HMEiDPTOTmY", "iclr_2021_HMEiDPTOTmY", "iclr_2021_HMEiDPTOTmY" ]
iclr_2021_7ODIasgLJlU
Deep Q-Learning with Low Switching Cost
We initiate the study on deep reinforcement learning problems that require low switching cost, i.e., a small number of policy switches during training. Such a requirement is ubiquitous in many applications, such as medical domains, recommendation systems, education, robotics, dialogue agents, etc., where the deployed pol...
withdrawn-rejected-submissions
This paper studies RL with low switching cost under the deep RL setting. It provides new heuristics for doing so. The reviewers worry about whether the problem is important in practice, whether the policies obtained can be used in practice, and whether the theory is strong enough. The paper can be strengthen...
train
[ "yEDt2Q8FmU-", "mwSQh79xhv8", "tmIpH-rYoMP", "SvziB7Jx5yo", "mVi3Ck0SbD" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "######################################\n\nSummary:\nIn many real world applications for RL such as medicine, there are limits on the number of policies from which we can simulate data. This paper proposes an approach that adaptively decides when to update the simulation policy, based on the difference between it a...
[ 5, -1, 5, 5, 4 ]
[ 3, -1, 4, 3, 5 ]
[ "iclr_2021_7ODIasgLJlU", "iclr_2021_7ODIasgLJlU", "iclr_2021_7ODIasgLJlU", "iclr_2021_7ODIasgLJlU", "iclr_2021_7ODIasgLJlU" ]
iclr_2021_5JnS8wROG9
On the Inductive Bias of a CNN for Distributions with Orthogonal Patterns
Training overparameterized convolutional neural networks with gradient based optimization is the most successful learning method for image classification. However, their generalization properties are far from understood. In this work, we consider a simplified image classification task where images contain orthogonal pa...
withdrawn-rejected-submissions
This paper considers a new model of input data specific to image classification problems. In particular, the high-level idea is that each image contains certain patterns, and which patterns it contains decide its label. In this framework, under some stronger assumptions (e.g., patterns are orthogonal, one positive pat...
train
[ "qKuFoCPZQ8Y", "AGxgCB_blq-", "4BmQqvQ1Pf8", "Nnehsl3v65O", "C023Wc8O6v", "llWYmqvYxsO", "bWaBmccdDDQ", "9J7ST3q2a5" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper is concerned with the question of generalization of convolutional neural networks. For that, the authors study a simple toy model, where each data point consists of several patterns. All patterns are assumed to be orthogonal to each other. Those images should be learned with a 3-layer neural network. Th...
[ 6, -1, -1, -1, -1, 5, 6, 5 ]
[ 3, -1, -1, -1, -1, 5, 5, 3 ]
[ "iclr_2021_5JnS8wROG9", "bWaBmccdDDQ", "qKuFoCPZQ8Y", "llWYmqvYxsO", "9J7ST3q2a5", "iclr_2021_5JnS8wROG9", "iclr_2021_5JnS8wROG9", "iclr_2021_5JnS8wROG9" ]
iclr_2021_thhdrl4IdMm
A Chain Graph Interpretation of Real-World Neural Networks
The last decade has witnessed a boom of deep learning research and applications achieving state-of-the-art results in various domains. However, most advances have been established empirically, and their theoretical analysis remains lacking. One major issue is that our current interpretation of neural networks (NNs) as ...
withdrawn-rejected-submissions
In this paper, the authors draw connections between probabilistic graphical models (specifically LWF chain graphs) and neural network models. There was general agreement amongst the reviewers that this is an interesting topic that merits further study, and would be of interest to the ICLR audience. At the same time, al...
train
[ "JkUt_OmaXIJ", "ERG0MBRr1Qo", "oJVyasP9EVW", "Pz9xBNqLDE6", "xa3254eqbw_", "Ci1Md7idTyO", "KdeC8LryUeY", "2jOOtoe3XmN", "FKo8U4FJlZX", "BiZglUUH4Ky", "9A3BfoaoUYc", "0qta0j1HReC", "KC5znrpUtcX" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update: after reading the feedback and discussing with the other reviewers, I decided to keep my score unchanged.\n\nOriginal comments:\nIn this paper, the authors provide a new interpretation of neural networks via chain graphs, which can be used as a new theoretical framework to understand the behavior of neural ne...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_thhdrl4IdMm", "iclr_2021_thhdrl4IdMm", "KC5znrpUtcX", "KC5znrpUtcX", "KC5znrpUtcX", "9A3BfoaoUYc", "9A3BfoaoUYc", "0qta0j1HReC", "0qta0j1HReC", "JkUt_OmaXIJ", "iclr_2021_thhdrl4IdMm", "iclr_2021_thhdrl4IdMm", "iclr_2021_thhdrl4IdMm" ]
iclr_2021_yN18f9V1Onp
Adaptive Learning Rates for Multi-Agent Reinforcement Learning
In multi-agent reinforcement learning (MARL), the learning rates of actors and critic are mostly hand-tuned and fixed. This not only requires heavy tuning but more importantly limits the learning. With adaptive learning rates according to gradient patterns, some optimizers have been proposed for general optimizations, ...
withdrawn-rejected-submissions
This paper investigates how to deploy adaptive learning rates in multi-agent RL (MARL). In particular, the learning rates are adaptively chosen based on which directions maximally affect the Q-function, and take into account the interplay and balance between the actors and the critics. The topic is certainly of great i...
val
[ "UomBebrEn10", "FAixv_M-qa", "aTgxHT6D6nT", "YHFc2BTgW2B", "GCS6EMb9V-V", "FJBkMnDZu88", "bz6Ay9QyJrq", "I_SigFtVNo9", "wIC6Ara7fk", "6ppvaLxt7GP", "88Tt76X4Egr" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have posted the responses to each reviewer and hope they could address all the comments of the reviewers. From the reviews, we acknowledged that some of the reviewers might not be familiar with deep MARL. But, we are willing to discuss any comments and concerns and help the reviewers fully understand the merits...
[ -1, -1, -1, -1, -1, -1, 5, 4, 4, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 2, 2, 2, 4, 3 ]
[ "iclr_2021_yN18f9V1Onp", "bz6Ay9QyJrq", "I_SigFtVNo9", "wIC6Ara7fk", "88Tt76X4Egr", "6ppvaLxt7GP", "iclr_2021_yN18f9V1Onp", "iclr_2021_yN18f9V1Onp", "iclr_2021_yN18f9V1Onp", "iclr_2021_yN18f9V1Onp", "iclr_2021_yN18f9V1Onp" ]
iclr_2021_EoVmlONgI9e
The Emergence of Individuality in Multi-Agent Reinforcement Learning
Individuality is essential in human society, which induces the division of labor and thus improves efficiency and productivity. Similarly, it should also be a key to multi-agent cooperation. Inspired by the notion that individuality means being an individual separate from others, we propose a simple yet efficient method for t...
withdrawn-rejected-submissions
This paper introduces a method to increase diversity/individuality of agents in a MARL setup, based on intrinsic rewards coming from a classifier over behaviours. Reviewers tend to agree that this is an important/interesting problem, which is related to exploration, a central problem in reinforcement learning. Several...
train
[ "zVe0OW2o3ma", "FaZIpMDf1bA", "j4qQBawddLI", "pM5V5GI_nF", "AVcFNomYZbn", "YkX57LvXmGM", "FrlJEGwsDG", "jM2Q0KpXWqt", "ktO1Lf7KirH", "aEgFjI22Qh" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper contributes a method based on reward-shaping to encourage the emergence of distinct agent behaviors in fully-cooperative multi-agent reinforcement learning (MARL), within the paradigm of centralized training with decentralized execution. They propose to learn a classifier that predicts agent identity gi...
[ 6, -1, -1, -1, -1, -1, -1, 5, 4, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2021_EoVmlONgI9e", "pM5V5GI_nF", "iclr_2021_EoVmlONgI9e", "zVe0OW2o3ma", "jM2Q0KpXWqt", "ktO1Lf7KirH", "aEgFjI22Qh", "iclr_2021_EoVmlONgI9e", "iclr_2021_EoVmlONgI9e", "iclr_2021_EoVmlONgI9e" ]
iclr_2021_aYJr_Rt30p
Learning Representation in Colour Conversion
Colours can be represented in an infinite set of spaces highlighting distinct features. Here, we investigated the impact of colour spaces on the encoding capacity of a visual system that is subject to information compression, specifically variational autoencoders (VAEs) where bottlenecks are imposed. To this end, we pr...
withdrawn-rejected-submissions
This paper proposes a novel unsupervised task of colour conversion. In this respect, the task becomes more like a regression problem -- rather than autoencoding, the decoder needs to reconstruct the pixels in a different color system. While the idea is potentially interesting, there are fundamental problems with the pa...
train
[ "GO7zKdic82x", "A2Hdd-G6mE5", "-10C-5EmMNi", "uQtXmjYkaIp", "1FdVdkkNqs", "7oTh550JlUS" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for reviewing our article. We highly appreciate your constructive and detailed comments. They were truly helpful in improving the quality of our manuscript.\n\n> However, this message (pointing out the advantages of opponent representations wrt more trivial non-opponent representations) is not ...
[ -1, -1, -1, 4, 6, 7 ]
[ -1, -1, -1, 5, 3, 5 ]
[ "7oTh550JlUS", "1FdVdkkNqs", "uQtXmjYkaIp", "iclr_2021_aYJr_Rt30p", "iclr_2021_aYJr_Rt30p", "iclr_2021_aYJr_Rt30p" ]
iclr_2021_ng0IIc1mbTu
ARELU: ATTENTION-BASED RECTIFIED LINEAR UNIT
Element-wise activation functions play a critical role in deep neural networks by affecting the expressive power and the learning dynamics. Learning-based activation functions have recently gained increasing attention and success. We propose a new perspective on learnable activation functions through formulating them...
withdrawn-rejected-submissions
I would like to thank the authors for their time and effort on this work. The paper proposes an activation function that combines ReLU-like piecewise activation functions and a primitive attention mechanism. Then, they show that their proposed method works better in transfer settings. I think the approach auth...
train
[ "tBeLlRSu3I0", "Oa4a-RnzpKP", "UZMUtj-BqT", "zZWGmN3iqm4", "9B4ajUiD-U", "mJRve2dllV7", "fLXGyOvkv1F", "Ou8RRmRY5cL", "-eP82eOQS0Q", "sW-xm2FLuJA", "jzSxqGrm3KL" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Description:\n\nThis work presents a novel learned activation function called Attention-based Rectified Linear Unit (AReLU). An element-wise attention module is developed that learns sign-based attention (ELSA), which is the novel component of AReLU for mitigating the gradient vanishing issue. Extensive experime...
[ 7, -1, -1, -1, -1, -1, -1, -1, 3, 5, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "iclr_2021_ng0IIc1mbTu", "iclr_2021_ng0IIc1mbTu", "tBeLlRSu3I0", "Oa4a-RnzpKP", "iclr_2021_ng0IIc1mbTu", "-eP82eOQS0Q", "sW-xm2FLuJA", "jzSxqGrm3KL", "iclr_2021_ng0IIc1mbTu", "iclr_2021_ng0IIc1mbTu", "iclr_2021_ng0IIc1mbTu" ]
iclr_2021_xfNotLXwtQb
Inductive Collaborative Filtering via Relation Graph Learning
Collaborative filtering has shown great power in predicting potential user-item ratings by factorizing an observed user-item rating matrix into products of two sets of latent factors. However, the user-specific latent factors can only be learned in a transductive setting, and a model trained on existing users cannot adapt...
withdrawn-rejected-submissions
The paper is somewhat borderline, though reviews mostly lean positive. Unfortunately after calibrating compared to other submissions, the work remains somewhat below the bar compared to higher-scoring papers. The reviewers praise the topic, the method, and the experiments (although some of this praise is a little mixe...
train
[ "IJedtHABwCD", "EmOzi2EwS8", "2eErMbJhJY_", "aKVj7kTWYOI", "Wf1383bWMbo", "i3MMD7Jk74X", "q_3yh7OE3zs", "PH6GdhXhJ0P" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Quick summary\nThis work explores a popular problem, i.e., collaborative filtering, in an inductive setting, which is very important for real-world recommender systems. To address the challenges in the inductive settings, i.e., learning accurate representations for users who do not occur in the training data, ...
[ 6, -1, -1, -1, -1, 4, 6, 6 ]
[ 5, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2021_xfNotLXwtQb", "i3MMD7Jk74X", "q_3yh7OE3zs", "IJedtHABwCD", "PH6GdhXhJ0P", "iclr_2021_xfNotLXwtQb", "iclr_2021_xfNotLXwtQb", "iclr_2021_xfNotLXwtQb" ]
iclr_2021_NfZ6g2OmXEk
Prioritized Level Replay
Simulated environments with procedurally generated content have become popular benchmarks for testing systematic generalization of reinforcement learning agents. Every level in such an environment is algorithmically created, thereby exhibiting a unique configuration of underlying factors of variation, such as layout, p...
withdrawn-rejected-submissions
The paper presents a method for automatically generating levels of varying complexity for training the agent. The results are well summarized in the paper abstract, "significantly improved sample-efficiency and generalization on the majority of Procgen Benchmark environments as well as two challenging MiniGrid environ...
train
[ "WTfKFzqEG3u", "prdz98BzGD4", "rmojgdtSgyc", "FRRI8s_m1ic", "J80beAR_pvR", "8yn4mbeF5RN", "CZNcx4jYikv", "u1FBWQKew90", "cxj-1kzjfWz", "rzp3TlttQkr", "2zSEIBKUC1P", "4ecdo4dwVTG", "oaELsfPs2Fa", "f5jXzTsAGK8", "KUSGE5Ix8S" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\n\nThis paper concerns the use of experience replay in a way that past experience is sampled based on (implicit) levels so that the agent can better adapt to the current task at hand. The authors defined a replay distribution (where experience is sampled) based on two scores relevant to learning potential ...
[ 7, 7, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_NfZ6g2OmXEk", "iclr_2021_NfZ6g2OmXEk", "iclr_2021_NfZ6g2OmXEk", "KUSGE5Ix8S", "f5jXzTsAGK8", "WTfKFzqEG3u", "rmojgdtSgyc", "prdz98BzGD4", "rmojgdtSgyc", "WTfKFzqEG3u", "prdz98BzGD4", "f5jXzTsAGK8", "f5jXzTsAGK8", "iclr_2021_NfZ6g2OmXEk", "iclr_2021_NfZ6g2OmXEk" ]
iclr_2021_WUNF4WVPvMy
Acceleration in Hyperbolic and Spherical Spaces
We further research the acceleration phenomenon on Riemannian manifolds by introducing the first global first-order method that achieves the same rates as accelerated gradient descent in the Euclidean space for the optimization of smooth and geodesically convex (g-convex) or strongly g-convex functions defined o...
withdrawn-rejected-submissions
Reviewers generally appreciate the theoretical contribution of the paper, namely Accelerated Gradient Descent on the sphere and hyperbolic space with the same convergence rate as the Euclidean counterpart. However, there are several major concerns with the current work. From a theoretical standpoint, the geodesic map, ...
train
[ "u_hJgk14spc", "JB1cHmsn_8R", "x_FgsJ3ah7V", "cySbYV0ChHs", "rYsNCQQY9lb", "vldW3Em60K", "kuIozxrhWUV", "z70y45W6iN-", "XJx2yyBidgy", "5E68Ap3rZaa", "g64i4_ZEJUs", "jsAkYhHDQgf", "IBUhGkEhe8y" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper provides a generalization of AGD to constant sectional curvature spaces (or subsets of them), and proves the same global rates of convergence that hold in the Euclidean space. Additionally, they provide reductions for the bounded sectional curvature case. Their basic strategy involves the use o...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 5, 5 ]
[ 2, -1, -1, -1, -1, -1, -1, -1, -1, 2, 1, 4, 2 ]
[ "iclr_2021_WUNF4WVPvMy", "XJx2yyBidgy", "u_hJgk14spc", "iclr_2021_WUNF4WVPvMy", "5E68Ap3rZaa", "g64i4_ZEJUs", "jsAkYhHDQgf", "jsAkYhHDQgf", "IBUhGkEhe8y", "iclr_2021_WUNF4WVPvMy", "iclr_2021_WUNF4WVPvMy", "iclr_2021_WUNF4WVPvMy", "iclr_2021_WUNF4WVPvMy" ]
iclr_2021_IpsTSvfIB6
Approximate Birkhoff-von-Neumann decomposition: a differentiable approach
The Birkhoff-von-Neumann (BvN) decomposition is a standard tool used to draw permutation matrices from a doubly stochastic (DS) matrix. The BvN decomposition represents such a DS matrix as a convex combination of several permutation matrices. Currently, most algorithms to compute the BvN decomposition employ either gre...
withdrawn-rejected-submissions
The paper explores the Birkhoff-von-Neumann decomposition in order to propagate gradients through a bi-partite matching. The task is very relevant to the community but the reviewers raised concerns both about the theory and the practice of the work. Unfortunately the work is not ready for publication at ICLR.
train
[ "cN79rbBl9lX", "NLVsp1M8jgh", "fYueEHP8h2L", "yZJpW-ET_fV" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for their honest comments, which help us improve this work. We are glad they recognize the interest of the problem and our algorithm. \nWe corrected issues related to our algorithm's presentation and added various discussions in the manuscript's current version. In particular, we introduced ...
[ -1, 4, 4, 5 ]
[ -1, 1, 3, 3 ]
[ "iclr_2021_IpsTSvfIB6", "iclr_2021_IpsTSvfIB6", "iclr_2021_IpsTSvfIB6", "iclr_2021_IpsTSvfIB6" ]
iclr_2021_Z532uNJyG5y
Iterative Graph Self-Distillation
How to discriminatively vectorize graphs is a fundamental challenge that has attracted increasing attention in recent years. Motivated by the recent success of unsupervised contrastive learning, we aim to learn graph-level representations in an unsupervised manner. Specifically, we propose a novel unsupervised graph learnin...
withdrawn-rejected-submissions
This paper proposes an unsupervised graph learning method [Iterative Graph Self-Distillation (IGSD)] by iteratively performing self-distillation to contrast graph pairs under different augmented views. This idea is then extended to the semi-supervised setting via a supervised contrastive loss and self-training. The m...
train
[ "0Cs2HNnWG0W", "g0FAaD_-Pp", "aIdepDO8z74", "iNmBHNzrjii", "gWQecQlmuS8", "8fvzLHjbXMk", "hHM8hNsr8kZ", "eUUwkIWsklu", "NCyneDT55GX", "nXItuB33Ov7", "qbw_OlNrgMF", "sh8EWcCJvE", "k5uTGNPlPFH", "0KSolCXlVNq" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper proposes a self-distillation based graph augmentation mechanism to alleviate the drawbacks of existing MI based models w.r.t. their high dependency on negative sampling. Quantitatively, the proposed model achieves encouraging results. However, it would have been better if the system desig...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_Z532uNJyG5y", "iclr_2021_Z532uNJyG5y", "iNmBHNzrjii", "qbw_OlNrgMF", "iclr_2021_Z532uNJyG5y", "0KSolCXlVNq", "0Cs2HNnWG0W", "8fvzLHjbXMk", "hHM8hNsr8kZ", "qbw_OlNrgMF", "g0FAaD_-Pp", "k5uTGNPlPFH", "iclr_2021_Z532uNJyG5y", "iclr_2021_Z532uNJyG5y" ]
iclr_2021_HK_B2K0026
Attention Based Joint Learning for Supervised Electrocardiogram Arrhythmia Differentiation with Unsupervised Abnormal Beat Segmentation
Deep learning has shown great promise in arrhythmia classification in electrocardiogram (ECG). Existing works, when classifying an ECG segment with multiple beats, do not identify the locations of the anomalies, which reduces clinical interpretability. On the other hand, segmenting abnormal beats by deep learning u...
withdrawn-rejected-submissions
This paper received 4 reviews with mixed initial ratings: 5, 6, 4, 4. The main concerns of R1, R4 and R2, who gave unfavorable scores, included: insufficient evaluation (lack of experiments on public datasets, small sample size), an ad-hoc nature and overall limited novelty of the method, a number of issues with the pr...
train
[ "nsYbfQVfIwZ", "1eQcpQx2mI", "z6E0ts5WSbB", "16zUzUM8Ig", "QzGcQ-thdL", "lTXSA9uRWAe", "rFHOnxFFtzA", "7AM3ttMwsxI" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a method for segmentation and classification of ECG data applied to the task of segmenting and detecting Premature Ventricular Contractions (PVC). The task is semi-supervised, in the sense that segmentation labels are not required but labels for the PVC events (classification) are used.\nThe aut...
[ 5, -1, -1, -1, -1, 4, 6, 5 ]
[ 4, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_HK_B2K0026", "lTXSA9uRWAe", "nsYbfQVfIwZ", "rFHOnxFFtzA", "7AM3ttMwsxI", "iclr_2021_HK_B2K0026", "iclr_2021_HK_B2K0026", "iclr_2021_HK_B2K0026" ]
iclr_2021_Siwm2BaNiG
Modal Uncertainty Estimation via Discrete Latent Representations
Many important problems in the real world don't have unique solutions. It is thus important for machine learning models to be capable of proposing different plausible solutions with meaningful probability measures. In this work we propose a novel deep learning based framework, named {\it modal uncertainty estimat...
withdrawn-rejected-submissions
This paper introduces a conditional discrete VAE for uncertainty estimation on high-dimensional data. Reviewers found the paper borderline, and two of the three reviewers stated it doesn't meet the acceptance bar due to lack of clarity in several aspects and limited technical novelty.
train
[ "vMfjvaJQypg", "YgQP_SEo5yi", "yh8b5YiXtj-", "BX2jMU_4DNC", "-EY-QU-luS2", "wr02ZBmizNL", "d47KsYwBrLc" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This manuscript proposes to measure the \"modal uncertainty\" in conditional generative models by forcing a discrete latent intermediate representation (here, C), between inputs X and outputs Y. By then manipulating the estimated categorical distribution likelihoods, an uncertainty estimate can be produced.\n\nThi...
[ 5, -1, -1, -1, -1, 5, 6 ]
[ 4, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_Siwm2BaNiG", "wr02ZBmizNL", "d47KsYwBrLc", "iclr_2021_Siwm2BaNiG", "vMfjvaJQypg", "iclr_2021_Siwm2BaNiG", "iclr_2021_Siwm2BaNiG" ]
iclr_2021_-kfLEqppEm_
Convex Regularization in Monte-Carlo Tree Search
Monte-Carlo planning and Reinforcement Learning (RL) are essential to sequential decision making. The recent AlphaGo and AlphaZero algorithms have shown how to successfully combine these two paradigms to solve large scale sequential decision problems. These methodologies exploit a variant of the well-known UCT algorith...
withdrawn-rejected-submissions
Most of the reviewers pointed out a lack of rigor of this submission, unclear contributions, not too convincing claims and empirical gains. I thank the authors for the effort put in revising the paper and responding to the reviewer concerns. However, the reviewers did not deem them convincing enough.
train
[ "FWiWIi-Ldjt", "-OhKhVfsMte", "UoUmkl-X8KU", "u5sZrQSkuZu", "tJwiV_DEUGu", "W6GOPpnJ9ej", "0HM4Uy_4K0", "iLloUJEHCan", "ELek3t3hHxS" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the effort to review our paper and the insightful feedback.\n\nRegarding the first issue, we remark that when $\\Omega$ is strongly convex, $\\tau \\Omega$ is also strongly convex, thus all the properties shown in Proposition 1 are still correct (other works use the same formula, e.g. Equ...
[ -1, -1, -1, -1, -1, 5, 5, 4, 8 ]
[ -1, -1, -1, -1, -1, 3, 4, 1, 4 ]
[ "W6GOPpnJ9ej", "0HM4Uy_4K0", "ELek3t3hHxS", "iLloUJEHCan", "iclr_2021_-kfLEqppEm_", "iclr_2021_-kfLEqppEm_", "iclr_2021_-kfLEqppEm_", "iclr_2021_-kfLEqppEm_", "iclr_2021_-kfLEqppEm_" ]
iclr_2021_eoQBpdMy81m
Federated Averaging as Expectation Maximization
Federated averaging (FedAvg), despite its simplicity, has been the main approach in training neural networks in the federated learning setting. In this work, we show that the algorithmic choices of the FedAvg algorithm correspond to optimizing a single objective function that involves the global and all of the shard sp...
withdrawn-rejected-submissions
The reviewers agree that the EM perspective of Federated Learning is novel and interesting. However, a common criticism is that the connection made is rather shallow and not sufficiently developed. There appears to be quite interesting potential in the proposed framework and the specific FedSparse method, but I agree wit...
train
[ "TyO97DMLIlK", "Yvd7MIKKEuC", "hP6QFbbYXWs", "VSCPv0NEdwa", "v1K3KfiDMQI", "3yDAMilsXew", "iY72TZPQzZq", "e31VmDx2hXu", "5e4BHdUJndU", "iWl2b98dIxg" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Federated learning has emerged as a promising approach to training models at the edge devices. This paper makes an observation that most algorithms used within federated learning, including the popular FedAvg, could be cast as instances of EM methods. The paper then continues to propose FedSparse, a federated lear...
[ 5, 7, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ 4, 2, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_eoQBpdMy81m", "iclr_2021_eoQBpdMy81m", "iclr_2021_eoQBpdMy81m", "TyO97DMLIlK", "5e4BHdUJndU", "iY72TZPQzZq", "iWl2b98dIxg", "Yvd7MIKKEuC", "iclr_2021_eoQBpdMy81m", "iclr_2021_eoQBpdMy81m" ]
iclr_2021_guEuB3FPcd
AlgebraNets
Neural networks have historically been built layerwise from the set of functions f: R^n → R^m, i.e. with activations and weights/parameters represented by real numbers, R. Our work considers a richer set of objects for activations and weights, and undertakes a comprehensive study of alternative algebras as number represe...
withdrawn-rejected-submissions
The paper proposes deep neural network models with weight elements drawn from algebras, and considers a wide range of algebras with promising large-scale experiments. The paper raised a heated discussion. Pros: - Using algebras, one can hope for more efficient architectures - Numerical experiments on a wide ran...
train
[ "b13emmFhpZs", "VF52BomZ2Nb", "XLutTBTn4O", "x_NtIWstj3y", "kYdrqZBz_b", "beqnp3pzIr", "ULa8RnY-aUC", "PEdsqqhMCw", "9PElhsrS11z", "FxUQ8wGlRce", "AHl56e811mE", "fNwPsjdj4Kr", "KHqXw_Vu8sh", "_sCd8EyDKe", "jwaCE4SlSJG", "EQScenKpK1t", "P9MLsJMrRQW", "k3b56qlIUhz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes an interesting kind of networks, AlgebraNets, which is a general paradigm of replacing the commonly used real-valued algebra with other associative algebras. This paper considers C, H, M2(R) (the set of 2 × 2 real-valued matrices), M2(C), M3(R), M4(R), dual numbers, and the R3 cross product, and...
[ 6, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 2, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_guEuB3FPcd", "9PElhsrS11z", "x_NtIWstj3y", "kYdrqZBz_b", "iclr_2021_guEuB3FPcd", "b13emmFhpZs", "_sCd8EyDKe", "iclr_2021_guEuB3FPcd", "KHqXw_Vu8sh", "_sCd8EyDKe", "fNwPsjdj4Kr", "jwaCE4SlSJG", "P9MLsJMrRQW", "k3b56qlIUhz", "b13emmFhpZs", "k3b56qlIUhz", "kYdrqZBz_b", "icl...
iclr_2021_yN5kwvn4E1R
Dual Graph Complementary Network
As a powerful representation learning method on graph data, graph neural networks (GNNs) have shown great popularity in tackling graph analytic problems. Although many attempts have been made in literatures to find strategies about extracting better embedding of the target nodes, few of them consider this issue from a ...
withdrawn-rejected-submissions
All four reviewers expressed very significant and consistent concerns on this submission during review. No reviewer is willing to support this submission during discussion. It is clear this submission does not make the bar of ICLR.
train
[ "K-5yHXq05xU", "kM7bDdOZ9XW", "hRbRXROOMZf", "J8bjZNPSCpw", "8sriY-EOBYD", "VAK0BPiSBO", "oQg1LB0ppPk", "x8wVahzvXdE" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for taking the time to review our paper and we appreciate the positive and constructive feedback. In response to some of your concerns, we have made the following responses:\n\nQuote 1: “The paper keeps arguing traditional GNNs only use one-side information, but they are actually leverage both node featu...
[ -1, -1, -1, -1, 3, 2, 4, 4 ]
[ -1, -1, -1, -1, 5, 4, 5, 5 ]
[ "8sriY-EOBYD", "x8wVahzvXdE", "VAK0BPiSBO", "oQg1LB0ppPk", "iclr_2021_yN5kwvn4E1R", "iclr_2021_yN5kwvn4E1R", "iclr_2021_yN5kwvn4E1R", "iclr_2021_yN5kwvn4E1R" ]
iclr_2021_Bi2OvVf1KPn
Provable Robust Learning for Deep Neural Networks under Agnostic Corrupted Supervision
Training deep neural models in the presence of corrupted supervisions is challenging as the corrupted data points may significantly impact the generalization performance. To alleviate this problem, we present an efficient robust algorithm that achieves strong guarantees without any assumption on the type of corruption...
withdrawn-rejected-submissions
The paper studies machine learning tasks in the presence of adversarially corrupted data (during training). In particular, it is assumed that the labels of a small constant fraction of the datapoints are arbitrarily corrupted. The paper proposes a natural method to solve this problem and evaluates it on various dataset...
train
[ "qGpbKojYU6j", "cdRRfWx1qK8", "hrFdFGZOBqQ", "_FSHiMttK9D", "UIFaasfr0RX", "jYyZVEs8jQK", "lnfEUDP3q0w", "Qf90rJu-J40", "N5EM1HwYPRs", "kswogymT5UZ", "ZO82cvdIh-M", "TnBimQRFfj_" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the feedback. And we have a few remarks to briefly communicate before the closure of the interaction: \n\n1 Significance\n \nThe related literature as pointed out by reviewer is fundamentally different from our work and NOT directly comparable because ONLY our approach can be applied to deep learning...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 4 ]
[ "cdRRfWx1qK8", "UIFaasfr0RX", "iclr_2021_Bi2OvVf1KPn", "N5EM1HwYPRs", "kswogymT5UZ", "iclr_2021_Bi2OvVf1KPn", "ZO82cvdIh-M", "TnBimQRFfj_", "iclr_2021_Bi2OvVf1KPn", "iclr_2021_Bi2OvVf1KPn", "iclr_2021_Bi2OvVf1KPn", "iclr_2021_Bi2OvVf1KPn" ]
iclr_2021_xsx58rmaW2p
Making Coherence Out of Nothing At All: Measuring Evolution of Gradient Alignment
We propose a new metric (m-coherence) to experimentally study the alignment of per-example gradients during training. Intuitively, given a sample of size m, m-coherence is the number of examples in the sample that benefit from a small step along the gradient of any one example on average. We show that compared to other...
withdrawn-rejected-submissions
The average score of the reviewers is 6. There are various pros and cons pointed out by the reviewers. Unfortunately, the AC and SAC found that the merit could be outweighed by the limitations of the work, and would like to recommend rejection. For example, a central concern raised by the reviewers is the lack of theor...
train
[ "LHztMzYkm-l", "QeaB-xocRpQ", "9wOHvp1bz8m", "cbAE_9lrkIR", "cBLVb8RqWe1", "7XcOXkgYUu2", "UJ4VQBNVf0g", "0BgvULnh-A8", "XTzwN8E20lv", "Ba-l6gO5vsP", "iFr_YRfRtqK", "yWYUqgknY9t", "efbPr-HdTES" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your response. \n\n---\n\nThis is mostly an empirical paper though we do believe we have theory around the metric that allows for more principled reasoning about what the measured values mean -- please see Sections 2 and 3. \n\nWe cannot theoretically explain why coherence increases in early training...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 5, 8, 6 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "9wOHvp1bz8m", "cBLVb8RqWe1", "XTzwN8E20lv", "iclr_2021_xsx58rmaW2p", "UJ4VQBNVf0g", "iclr_2021_xsx58rmaW2p", "cbAE_9lrkIR", "yWYUqgknY9t", "iFr_YRfRtqK", "efbPr-HdTES", "iclr_2021_xsx58rmaW2p", "iclr_2021_xsx58rmaW2p", "iclr_2021_xsx58rmaW2p" ]
iclr_2021_jfPU-u_52Tx
Federated Generalized Bayesian Learning via Distributed Stein Variational Gradient Descent
This paper introduces Distributed Stein Variational Gradient Descent (DSVGD), a non-parametric generalized Bayesian inference framework for federated learning. DSVGD maintains a number of non-random and interacting particles at a central server to represent the current iterate of the model global posterior. The particl...
withdrawn-rejected-submissions
This work presents a distributed SVGD (DSVGD) algorithm as a new non-parametric Bayesian framework for federated learning. The reviewers were concerned about the practical advantages of the proposed method, including the communication cost and the constraint of updating one agent at a time. The authors' rebuttal helped address...
train
[ "pOOSlgIgjG", "6wfUXoaRf1D", "pwle9EVA6gX", "pzyL3UAQxqV", "1dJKsFoGtfV", "W5UChA_Dqg5", "Gpcs_Iy5Bnc", "MW19XRsnGSC", "hTMdqxec_Q", "NVTaCKqVvu", "VXLhXbStwXB", "CUue2eBT133", "q1IQf_uVPt", "Vr5jdJUD6zW", "dGtaXEaBWvf" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Promising but incomplete federated learning algorithm\n\nThis paper proposes a federated version of the Stein Variational Gradient Descent (SVGD) method. The general approach to perform federated learning is based on a previously published method called Partitioned Variational Inference (PVI). This work takes the ...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_jfPU-u_52Tx", "iclr_2021_jfPU-u_52Tx", "pzyL3UAQxqV", "VXLhXbStwXB", "dGtaXEaBWvf", "Vr5jdJUD6zW", "pOOSlgIgjG", "iclr_2021_jfPU-u_52Tx", "Gpcs_Iy5Bnc", "1dJKsFoGtfV", "6wfUXoaRf1D", "q1IQf_uVPt", "W5UChA_Dqg5", "iclr_2021_jfPU-u_52Tx", "iclr_2021_jfPU-u_52Tx" ]
iclr_2021_bnuU0PzXl0-
Evaluating Gender Bias in Natural Language Inference
Gender-bias stereotypes have recently raised significant ethical concerns in natural language processing. However, progress in the detection and evaluation of gender-bias in natural language understanding through inference is limited and requires further investigation. In this work, we propose an evaluation methodology...
withdrawn-rejected-submissions
This paper offers a new dataset and accompanying metric to measure the degree to which NLI (textual entailment) systems are aware of gender–occupation associations. Pros: - The paper deals with an important issue in the context of a visible set of models and datasets. Cons: - The metric is designed to evaluate bias o...
train
[ "Qla4zJrFS5k", "YG4XU-X43E", "yHj9RyxNfhg", "AIgeZQkrf_-", "_cSrCtAbOfz", "4Rzo9Xxrzsj", "8L7hSvLM738", "rHxs1nuoYTL", "7vO3nScdg6F" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to inform the reviewer that our evaluation set consists of both kinds of gender-specific premises: male and female, thus in a way both entailing and contradicting the hypothesis wrt the stereotypes. \nFor eg. for a hypothesis: \"The guard was at the building\", we have premise 1: \"The text mentions ...
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "YG4XU-X43E", "_cSrCtAbOfz", "7vO3nScdg6F", "iclr_2021_bnuU0PzXl0-", "8L7hSvLM738", "rHxs1nuoYTL", "iclr_2021_bnuU0PzXl0-", "iclr_2021_bnuU0PzXl0-", "iclr_2021_bnuU0PzXl0-" ]
iclr_2021_160xFQdp7HR
Self-Organizing Intelligent Matter: A blueprint for an AI generating algorithm
We propose an artificial life framework aimed at facilitating the emergence of intelligent organisms. In this framework there is no explicit notion of an agent: instead there is an environment made of atomic elements. These elements contain neural operations and interact through exchanges of information and through phy...
withdrawn-rejected-submissions
This paper proposes a framework for artificial life. In the framework, there is no primitive agent construct, but rather a set of basic recurrent network components (such as linear algebra operations). The framework is open-ended and objective-less. The authors describe the emergence of different organisms out of these...
train
[ "GTgZkFG7gj-", "HoXRXlOMXG6", "UPgMXsjNUW6", "5SYt_h6pvl", "O0wEIaXITEO", "LVj8DRXaBiu", "vRJqFq71PDM", "oyszLqcrYpU", "uvLZGrRPHZh", "RxmUS4_J9Pg", "Fcl-dEivMLy", "luPF-nIPNsP" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This is an interesting paper that proposes a particular Alife framework to study the question of how to create an AI-generated algorithm. In the proposed system, each cell in a 2D grid-like environment is controlled by a different neural network. As far as I understand it, these neural networks are randomly mutate...
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1 ]
[ "iclr_2021_160xFQdp7HR", "iclr_2021_160xFQdp7HR", "LVj8DRXaBiu", "O0wEIaXITEO", "oyszLqcrYpU", "RxmUS4_J9Pg", "HoXRXlOMXG6", "GTgZkFG7gj-", "luPF-nIPNsP", "Fcl-dEivMLy", "iclr_2021_160xFQdp7HR", "iclr_2021_160xFQdp7HR" ]
iclr_2021_GKLLd9FOe5l
Online Testing of Subgroup Treatment Effects Based on Value Difference
Online A/B testing plays a critical role in high-tech industry to guide product development and accelerate innovation. It performs a null hypothesis statistical test to determine which variant is better. However, a typical A/B test presents two problems: (i) a fixed-horizon framework inflates the false positive errors ...
withdrawn-rejected-submissions
In this paper, the authors propose a test for subgroup treatment effects in settings where data is obtained online, via a method they call SUBTLE. The authors adopt a semi-parametric (generalized linear model) approach to modeling nuisance functions. The authors derive the form of the distribution of their test stati...
train
[ "r8i9PxhMHBi", "R0hVEuf4z9F", "2TKK8jw2fBl", "MZiNUBR9xNp", "Z6w6A-rk5Ly", "2VCXC5IR5Nf", "fEDvI5RuLbM", "FYu0UZ_p6K4", "-4AFs0uPe2F" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I would like to thank the authors for their rebuttal. \nHowever, I still think that ICLR is not the right venue for publishing this paper, as there is no **representation learning** component in the work.\n", "Thanks for your comments. The replies to each comment are as below:\n\n1. The power of SUBTLE largely d...
[ -1, -1, -1, -1, -1, 7, 3, 5, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "Z6w6A-rk5Ly", "2VCXC5IR5Nf", "-4AFs0uPe2F", "FYu0UZ_p6K4", "fEDvI5RuLbM", "iclr_2021_GKLLd9FOe5l", "iclr_2021_GKLLd9FOe5l", "iclr_2021_GKLLd9FOe5l", "iclr_2021_GKLLd9FOe5l" ]
iclr_2021_qk0FE399OJ
Improving Random-Sampling Neural Architecture Search by Evolving the Proxy Search Space
Random-sampling Neural Architecture Search (RandomNAS) has recently become a prevailing NAS approach because of its search efficiency and simplicity. There are two main steps in RandomNAS: the training step that randomly samples the weight-sharing architectures from a supernet and iteratively updates their weights, and...
withdrawn-rejected-submissions
The paper analyzes the behavior of random search-based NAS and provided new insights (e.g., a low ranking correlation among top-20% candidate architectures in the search phase). An extensive set of experiments were also conducted. However, most reviewers found the incremental nature and similarity with previous works t...
test
[ "pGpwF8Q7FAe", "wGRMMggRThJ", "lH0Mb9yTpkA", "tSSzPSP5ZR", "br1_AZschfU", "GXoIMZKdE4-", "WTd3nt-wddt", "0AZt0pjH9uJ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the very detailed and useful comments. Below we provide responses to each concern.\nQ1: May missing study with recent NAS methods?\nWe sincerely thank you again for your concern. Also, we’d like to humbly point out that, in the work, we did explore NASBench-201, DARTS search space (CIFAR-...
[ -1, -1, -1, -1, 6, 4, 5, 5 ]
[ -1, -1, -1, -1, 2, 3, 4, 4 ]
[ "WTd3nt-wddt", "GXoIMZKdE4-", "br1_AZschfU", "0AZt0pjH9uJ", "iclr_2021_qk0FE399OJ", "iclr_2021_qk0FE399OJ", "iclr_2021_qk0FE399OJ", "iclr_2021_qk0FE399OJ" ]
iclr_2021_cFpWC6ZMtmj
Explainability for fair machine learning
As the decisions made or influenced by machine learning models increasingly impact our lives, it is crucial to detect, understand, and mitigate unfairness. But even simply determining what ``unfairness'' should mean in a given context is non-trivial: there are many competing definitions, and choosing between them often...
withdrawn-rejected-submissions
All the reviewers found interesting the use of Shapley values to provide feature attributions for fairness, however, the reviewers brought up a number of issues, particularly in terms of presentation and clarity. While the authors' responses did clarify some of these concerns, this was not enough for the reviewers to b...
train
[ "DjGaS9AQKoS", "wQXxT7WMFA", "MnsRQBVNzeh", "MgEc_zl9Wr", "kctyfMBjnwZ", "zvsCC1-UBl", "tX-3TJRSzmm" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The goal of the paper is to design mechanisms to explain the unfairness in the outcomes of a ML model and propose methods to mitigate unfairness. The paper uses the Shapley value framework. The main idea is to alter the prediction function so that instead of providing the classification score, an \"unfairness\" sc...
[ 5, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_cFpWC6ZMtmj", "iclr_2021_cFpWC6ZMtmj", "tX-3TJRSzmm", "zvsCC1-UBl", "DjGaS9AQKoS", "iclr_2021_cFpWC6ZMtmj", "iclr_2021_cFpWC6ZMtmj" ]
iclr_2021_nxJ8ugF24q2
Learning Disconnected Manifolds: Avoiding The No Gan's Land by Latent Rejection
Standard formulations of GANs, where a continuous function deforms a connected latent space, have been shown to be misspecified when fitting disconnected manifolds. In particular, when covering different classes of images, the generator will necessarily sample some low quality images in between the modes. Rather than m...
withdrawn-rejected-submissions
The paper proposes to train a rejection sampler in the latent space of a GAN to learn disconnected data manifolds. Reviewers raised concerns about some theoretical aspects of the method as well as about the lack of larger scale datasets (ImageNet) in the experiments. Authors responded to these concerns but some of them...
val
[ "eRI0qPhGCLq", "VuLbZqMAZjE", "LqLTyYUko1", "9JjQ0zEmPn9", "4owPgMppex", "8aZR9RqrwSM", "6LRRozauhQz", "YOLB7Z_zZMu", "bk_wqO46OrD", "LlbcvcnF1Z", "jafny_CBEOw", "axrkiy837Zz", "TopfdzCOt8" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We share the answer to Reviewer 4's concerns. \n\nA neural network can be considered as a bounded Lipschitz function (e.g. a ReLU network is piece-wise linear). Thus, it comes from [1, Theorem 3.1] that the empirical measure $\\frac{1}{n} \\sum w_\\varphi(z)$ converges to $E_{z\\sim Z}[w_\\varphi(z)]$. In our sett...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "LqLTyYUko1", "9JjQ0zEmPn9", "4owPgMppex", "6LRRozauhQz", "YOLB7Z_zZMu", "TopfdzCOt8", "jafny_CBEOw", "bk_wqO46OrD", "8aZR9RqrwSM", "axrkiy837Zz", "iclr_2021_nxJ8ugF24q2", "iclr_2021_nxJ8ugF24q2", "iclr_2021_nxJ8ugF24q2" ]
iclr_2021_xF5r3dVeaEl
Local Information Opponent Modelling Using Variational Autoencoders
Modelling the behaviours of other agents (opponents) is essential for understanding how agents interact and making effective decisions. Existing methods for opponent modelling commonly assume knowledge of the local observations and chosen actions of the modelled opponents, which can significantly limit their applicabil...
withdrawn-rejected-submissions
The submitted paper is well written and easy to follow, and the idea of using VAEs to make inferences about opponents, on which a policy can then be conditioned, is sensible. Also, the reported performance in comparison to two baselines is good (although I have concerns about the selection of the baselines—see be...
train
[ "mATuUmnMY24", "WNjzYYi6bWG", "UnAA4nU-X6", "EyVVROw2Qu-", "D9ktwxYd0V", "dga4E05LGs", "7pcvkOB8pzi", "CZUdqSJfOTC", "cyxdOqEmGKg", "0gFUImAFWe", "R8GTfEAuYb-", "hDfqYCf3e3O", "fnBbFkZqWom", "TVaSAdtEzHH" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\n\nThis paper proposes an opponent modelling technique for imperfect information games. During training, a VAE is trained to encode the agent's observed trajectory to a latent space, and then decode it back to the full trajectory (including opponent observations and actions). The encoder can be used at ...
[ 7, 6, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_xF5r3dVeaEl", "iclr_2021_xF5r3dVeaEl", "D9ktwxYd0V", "iclr_2021_xF5r3dVeaEl", "hDfqYCf3e3O", "EyVVROw2Qu-", "mATuUmnMY24", "cyxdOqEmGKg", "7pcvkOB8pzi", "TVaSAdtEzHH", "WNjzYYi6bWG", "EyVVROw2Qu-", "iclr_2021_xF5r3dVeaEl", "iclr_2021_xF5r3dVeaEl" ]
iclr_2021_-YCAwPdyPKw
A Bayesian-Symbolic Approach to Learning and Reasoning for Intuitive Physics
Humans are capable of reasoning about physical phenomena by inferring laws of physics from a very limited set of observations. The inferred laws can potentially depend on unobserved properties, such as mass, texture, charge, etc. This sample-efficient physical reasoning is considered a core domain of human common-sense...
withdrawn-rejected-submissions
This paper proposes a method for learning physics combining symbolic computation and learning in an interesting way, targeting sample efficiency. At the initial evaluation, it was on the fence but leaning towards acceptance, with 3 slightly positive and one slightly negative review. The strengths lie in the combinati...
train
[ "gdMGANvd4Ke", "L2ufEeRQNUT", "gUwOF9rr3xB", "dOhIZ7GGxj", "ovtq0IMw6FW", "ai3drtlY_SY", "Flv7OHapmPZ", "QbkjhQCriTC", "HploPUHwck6" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Re. *\"The related work section is too brief to see the main difference between the present work and prior work ... Prior work has demonstrated that SR can indeed learn the physical law in a much more complex setting [1-2].\"*\n\nThanks for the references, we have added them to the manuscript along with other rele...
[ -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "Flv7OHapmPZ", "ai3drtlY_SY", "HploPUHwck6", "QbkjhQCriTC", "iclr_2021_-YCAwPdyPKw", "iclr_2021_-YCAwPdyPKw", "iclr_2021_-YCAwPdyPKw", "iclr_2021_-YCAwPdyPKw", "iclr_2021_-YCAwPdyPKw" ]
iclr_2021_bkincnjT8zx
Neural Dynamical Systems: Balancing Structure and Flexibility in Physical Prediction
We introduce Neural Dynamical Systems (NDS), a method of learning dynamical models in various gray-box settings which incorporates prior knowledge in the form of systems of ordinary differential equations. NDS uses neural networks to estimate free parameters of the system, predicts residual terms, and numerically int...
withdrawn-rejected-submissions
The paper presents a framework for modeling dynamical systems that combines prior knowledge, available as ODEs and implemented via a differentiable solver, with statistical modules. This addresses a key problem: complementing available partial knowledge of a physical system with information extracted from available ...
train
[ "mLHNu7tNs48", "i5GyjdcoA4E", "QahLLTFmieE", "ukKiSEAf_7", "_PBi9mM-PfT", "IsBICCCUkL", "Wyk0iYNffG", "CaPulWDrHQD", "2yIKdWhpTcT", "s5xsThpPY0", "4r10zGcQvxd" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a neural-network architecture for modeling dynamical systems that incorporates prior domain knowledge of the system's dynamics. More specifically, the main contributions are the mechanisms for incorporating such knowledge, in terms of fully or partially known structure (differential equations) o...
[ 5, -1, -1, 5, -1, -1, -1, -1, -1, 8, 4 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_bkincnjT8zx", "Wyk0iYNffG", "_PBi9mM-PfT", "iclr_2021_bkincnjT8zx", "iclr_2021_bkincnjT8zx", "ukKiSEAf_7", "mLHNu7tNs48", "s5xsThpPY0", "4r10zGcQvxd", "iclr_2021_bkincnjT8zx", "iclr_2021_bkincnjT8zx" ]
iclr_2021_ypJS_nyu-I
A Deeper Look at Discounting Mismatch in Actor-Critic Algorithms
We investigate the discounting mismatch in actor-critic algorithm implementations from a representation learning perspective. Theoretically, actor-critic algorithms usually have discounting for both actor and critic, i.e., there is a γ^t term in the actor update for the transition observed at time t in a trajectory and ...
withdrawn-rejected-submissions
This paper studies the effect of the discount mismatch in actor-critics: the discount used for evaluation (often 1), the discount used for the critic and the discount used for the actor. There’s notably a representation learning argument supported by a series of experiments. The initial reviews pointed out that this pa...
train
[ "dpFqXilr3b4", "4lsm6Cp0d1B", "NVAea_9EuPv", "64lAUcH-e4y", "vCLBFcZENG6", "Lazi9yFypq", "ONs3vTvFLEQ", "8l9QnSYEeer", "DrFYbbBLdEO", "dwpjMJgRXgh", "50TbmhnJQy2", "COqdJIHNQx0", "I9WuJc3F7km", "Z55VCPn3Ekv", "dXUvyB_O4yd" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Overall I like this direction since this is an important, open problem in RL that does not seem to be widely known (I was unaware of it until I looked into the related work) and could lead to improved algorithms. I encourage the authors to continue to pursue this line of research. However, I have a few clarificati...
[ 6, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_ypJS_nyu-I", "I9WuJc3F7km", "ONs3vTvFLEQ", "vCLBFcZENG6", "50TbmhnJQy2", "iclr_2021_ypJS_nyu-I", "dwpjMJgRXgh", "iclr_2021_ypJS_nyu-I", "Z55VCPn3Ekv", "dXUvyB_O4yd", "COqdJIHNQx0", "Lazi9yFypq", "dpFqXilr3b4", "iclr_2021_ypJS_nyu-I", "iclr_2021_ypJS_nyu-I" ]
iclr_2021_Xxli_LIvYI
When Are Neural Pruning Approximation Bounds Useful?
Approximation bounds for neural network pruning attempt to predict the trade-off between sparsity and fidelity while shrinking neural networks. In the first half of this paper, we empirically evaluate the predictive power of two recently proposed methods based on coreset algorithms. We identify several circumstances...
withdrawn-rejected-submissions
This work studies coreset-based pruning strategies for neural networks, and highlights the looseness of approximation bounds, the difference between approximation error and probability, and the importance of considering post-pruning fine-tuning. I found the empirical findings and concerns raised around the utility of ap...
train
[ "EQr6ovc6ID6", "nYCEGdz3lrS", "2B6gVZQj8dy", "TLjCLYJicuB", "mGwoG9dpyM6", "XKxSK0ZcvB", "nDfLQP176wa", "FF-GFqOVODt" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Dear Reviewer #1,\n\nThank you for taking the time to review our paper. Below are a few comments to help clarify our submission.\n\n**Broader Impact and Applications**\n\nWe’ve moved the “Related Work” section from the Appendix into the main body of the paper, which may alleviate your concerns about the broader im...
[ -1, -1, 5, -1, -1, -1, 6, 5 ]
[ -1, -1, 4, -1, -1, -1, 4, 3 ]
[ "FF-GFqOVODt", "iclr_2021_Xxli_LIvYI", "iclr_2021_Xxli_LIvYI", "mGwoG9dpyM6", "XKxSK0ZcvB", "2B6gVZQj8dy", "iclr_2021_Xxli_LIvYI", "iclr_2021_Xxli_LIvYI" ]
iclr_2021_MA8eT-vUPvZ
Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution. However, this assumption is violated in almost all practical applications: machine learning systems are regularly tested under distribution shift, due to temporal correlations...
withdrawn-rejected-submissions
Dear Authors, Thank you very much for your detailed feedback to the reviewers in the rebuttal phase. The feedback certainly clarified some of the concerns raised by the reviewers and improved their understanding of your work. Indeed, some of the reviewers have increased their scores. However, overall, we think this p...
train
[ "v4I0PsRBRJk", "m-FVv9ijktx", "3Ua6V7IuHFY", "xq8BvyPBAbx", "0WWPGpLQaJe", "xpJGYL_RPhK", "kg3VG7ibH5x", "yOzdJIS2KY-", "joiuCE8XslM", "23ICXkzkB-n", "UYjcBorLjEq", "pa4vb0DGb45" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper studies domain adaptation under the assumption that only unlabeled target data is available in training and the domain shift follows a special group shift. The main idea for the proposed method is having an adaptation model that takes only the unlabeled data in and output updated parameters. The propose...
[ 6, -1, -1, -1, 7, -1, -1, -1, 5, -1, -1, -1 ]
[ 4, -1, -1, -1, 5, -1, -1, -1, 3, -1, -1, -1 ]
[ "iclr_2021_MA8eT-vUPvZ", "UYjcBorLjEq", "xq8BvyPBAbx", "23ICXkzkB-n", "iclr_2021_MA8eT-vUPvZ", "iclr_2021_MA8eT-vUPvZ", "iclr_2021_MA8eT-vUPvZ", "joiuCE8XslM", "iclr_2021_MA8eT-vUPvZ", "0WWPGpLQaJe", "v4I0PsRBRJk", "joiuCE8XslM" ]
iclr_2021_szXGN2CLjwf
Adam+: A Stochastic Method with Adaptive Variance Reduction
Adam is a widely used stochastic optimization method for deep learning applications. While practitioners prefer Adam because it requires less parameter tuning, its use is problematic from a theoretical point of view since it may not converge. Variants of Adam have been proposed with provable convergence guarantee, but...
withdrawn-rejected-submissions
This paper proposes a new variant of adaptive stochastic gradient method that has notable differences from Adam, and claims the advantage of adaptive variance reduction. While the algorithm construction looks novel, there are several concerns raised by the expert reviewers about the theoretical results of the paper, including la...
val
[ "4b6Tl_mq3pp", "s8NR9uFBFbB", "IlezL8NuhWn", "XLpFR-h8SXZ", "GKp_hwnFOH", "rIYXFs3GMUI", "p-Ev8yAglyV", "YtNGoJoOd12", "Vx2up5z-YHc" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Objective of the paper: The paper proposes a new optimizer \"Adam+\" that computes the first moment estimate at extrapolated points and the step size is normalized by the root of the norm of the first moment estimate. The paper establishes a convergence theory for Adam+ and conducts experiments on different deep l...
[ 5, 4, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_szXGN2CLjwf", "iclr_2021_szXGN2CLjwf", "s8NR9uFBFbB", "YtNGoJoOd12", "iclr_2021_szXGN2CLjwf", "Vx2up5z-YHc", "4b6Tl_mq3pp", "iclr_2021_szXGN2CLjwf", "iclr_2021_szXGN2CLjwf" ]
iclr_2021_lNrtNGkr-vw
Linear Representation Meta-Reinforcement Learning for Instant Adaptation
This paper introduces Fast Linearized Adaptive Policy (FLAP), a new meta-reinforcement learning (meta-RL) method that is able to extrapolate well to out-of-distribution tasks without the need to reuse data from training, and adapt almost instantaneously with the need of only a few samples during testing. FLAP builds up...
withdrawn-rejected-submissions
Summary of discussions: R1 was positive on the paper in their initial evaluation, and although dissatisfied with the author's feedback, continued to support the paper. I agree with R1's assessment that other reviewers' call for more theory is somewhat unfair, considering the fact that very similar papers don't usually ...
train
[ "Gpaiv-jw3jK", "ROaKLGqsWc4", "Hhw-b0IVEzQ", "dreB-DhpFbY", "KASISbEncwC", "BhqIMqWIECi", "_TClribGli2", "f_H8csz1Ag4" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "## Summary:\nThis paper proposes a novel meta-RL algorithm called Fast Linearized Adaptive Policy (FLAP), which can adapt to both in-distribution and out-of-distribution tasks. FLAP is based on a strong assumption that some policies can be formalized as a linear combination of common task features. Based on this a...
[ 6, 5, -1, -1, -1, -1, -1, 7 ]
[ 4, 3, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_lNrtNGkr-vw", "iclr_2021_lNrtNGkr-vw", "iclr_2021_lNrtNGkr-vw", "KASISbEncwC", "ROaKLGqsWc4", "Gpaiv-jw3jK", "f_H8csz1Ag4", "iclr_2021_lNrtNGkr-vw" ]
iclr_2021_aYbCpFNnHdh
Visual Question Answering From Another Perspective: CLEVR Mental Rotation Tests
Different types of \emph{mental rotation tests} have been used extensively in psychology to understand human visual reasoning and perception. Understanding what an object or visual scene would look like from another viewpoint is a challenging problem that is made even harder if it must be performed from a single image....
withdrawn-rejected-submissions
This paper was reviewed by 4 experts in the field. The reviewers raised their concerns on lack of novelty, unconvincing experiment, and the presentation of this paper, While the paper clearly has merit, the decision is not to recommend acceptance. The authors are encouraged to consider the reviewers' comments when revi...
train
[ "P-OFBR52the", "bq4AkdS-At0", "hOBLSfDHjbj", "qBI7mrLc-hR", "yW6le3mwwKw", "2QfZ2mUMGkE", "gEyxG-hD3z_", "VQz6Yxu_U3y", "HpiDQBdAgH5", "dIecMfK5HH", "_BrD2XsKL1h" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper explores the problem of visual question answering from another perspective. Similar to VQA, a system is provided with a scene and a question. However, the difference is that the question needs to be answered from a viewpoint different from the one provided. Hence, the system needs to perform “mental rota...
[ 4, -1, -1, -1, -1, -1, -1, -1, 6, 4, 4 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_aYbCpFNnHdh", "iclr_2021_aYbCpFNnHdh", "iclr_2021_aYbCpFNnHdh", "_BrD2XsKL1h", "HpiDQBdAgH5", "dIecMfK5HH", "P-OFBR52the", "P-OFBR52the", "iclr_2021_aYbCpFNnHdh", "iclr_2021_aYbCpFNnHdh", "iclr_2021_aYbCpFNnHdh" ]
iclr_2021_tuIt1aIb6Co
Counterfactual Self-Training
Unlike traditional supervised learning, in many settings only partial feedback is available. We may only observe outcomes for the chosen actions, but not the counterfactual outcomes associated with other alternatives. Such settings encompass a wide variety of applications including pricing, online marketing and precis...
withdrawn-rejected-submissions
The paper proposes an intriguing approach for "individual treatment effect" estimation from an observational dataset. The approach is developed for multiple discrete actions (beyond binary treatments as typically studied in ITE literature) and discrete outcomes (a special case compared to related literature). The idea...
train
[ "ALenG5Wy2_H", "_MWWeSJAMU2", "KVrcWq9yMEb", "LRq6Br0O4sR", "g9ex2aVGEz", "9aKz3JGkI6z" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "Post-Rebuttal:\n\nI would like to thank the authors for their rebuttal.\nThe updated version of the paper has addressed some of my comments.\nHowever, I still fail to understand why the method should 1) converge in general, and 2) converge to a good solution. \nI have updated my score accordingly.\n\n=============...
[ 4, 6, -1, -1, -1, 5 ]
[ 5, 4, -1, -1, -1, 3 ]
[ "iclr_2021_tuIt1aIb6Co", "iclr_2021_tuIt1aIb6Co", "ALenG5Wy2_H", "_MWWeSJAMU2", "9aKz3JGkI6z", "iclr_2021_tuIt1aIb6Co" ]
iclr_2021_TlPHO_duLv
Towards Noise-resistant Object Detection with Noisy Annotations
Training deep object detectors requires large amounts of human-annotated images with accurate object labels and bounding box coordinates, which are extremely expensive to acquire. Noisy annotations are much more easily accessible, but they could be detrimental for learning. We address the challenging problem of traini...
withdrawn-rejected-submissions
The paper received borderline scores, before and after the rebuttal. Thus, support for paper acceptance isn't sufficiently strong. While the reviewers see merit, concerns which remain after the discussion phase, include how convincing the experimental settings and results are, and uncertainty about the motivation and p...
train
[ "dA4M1CUa8xP", "LubL77J9SE_", "lba1i8jas24", "-d0GHubEhZN", "k5NUJ4s0cUo", "A_QYWYiX3Kw" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "In this work the authors propose a framework to perform object detection when there is noise present in class labels as well as bounding box annotations. The authors propose a two-step process, where in the first step the bounding boxes are corrected in class-agnostic way, and in the second step knowledge distilla...
[ 5, -1, -1, -1, 5, 6 ]
[ 4, -1, -1, -1, 5, 4 ]
[ "iclr_2021_TlPHO_duLv", "k5NUJ4s0cUo", "dA4M1CUa8xP", "A_QYWYiX3Kw", "iclr_2021_TlPHO_duLv", "iclr_2021_TlPHO_duLv" ]
iclr_2021_x9C7Nlwgydy
Consensus Clustering with Unsupervised Representation Learning
Recent advances in deep clustering and unsupervised representation learning are based on the idea that different views of an input image (generated through data augmentation techniques) must either be closer in the representation space, or have a similar cluster assignment. In this work, we leverage this idea together ...
withdrawn-rejected-submissions
This paper proposes a model for learning using ensemble clustering. The reviewers found the general idea promising. However, while promising, all reviewers noted that in its current form the paper is not fit for publication. The reviewers pointed out missing references, issues with the abstract, lack of motivation for s...
train
[ "JMIpdYqowG1", "lKSsTQjqlpg", "UagM3odwj-Y", "i0ht783aI1D", "-qF0GyvwHS", "U7PrvMRbET3", "5wPnsuQ52KD", "IlaJTYED1_-", "FAwSJC6I3Rd", "z44NVMtft8N", "u7uSaEjunvl", "eke9opXQbfW", "fivm5N4X6p7", "6VdH-_LXZDq" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose a learning-based approach for image clustering. In particular, similarly to recent algorithms for unsupervised representation learning, such as DeepCluster, they propose to iterate between clustering the images in the feature space of the network and updating the network weights to respect the c...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2021_x9C7Nlwgydy", "UagM3odwj-Y", "i0ht783aI1D", "-qF0GyvwHS", "5wPnsuQ52KD", "iclr_2021_x9C7Nlwgydy", "JMIpdYqowG1", "fivm5N4X6p7", "fivm5N4X6p7", "6VdH-_LXZDq", "6VdH-_LXZDq", "6VdH-_LXZDq", "iclr_2021_x9C7Nlwgydy", "iclr_2021_x9C7Nlwgydy" ]
iclr_2021_DAaaaqPv9-q
Self-supervised Graph-level Representation Learning with Local and Global Structure
This paper focuses on unsupervised/self-supervised whole-graph representation learning, which is critical in many tasks including drug and material discovery. Current methods can effectively model the local structure between different graph instances, but they fail to discover the global semantic structure of the entir...
withdrawn-rejected-submissions
This paper proposes a self-supervised learning method for learning representations for graph-structured data, with both local and global objectives. The local objective aims to maximize the mutual information between two correlated graphs generated with attribute masking [Hu et al. 19], with the InfoNCE loss [van den O...
train
[ "lVaICbG3nWr", "FtDCBf4RY8y", "jR0UpSK7Mzg", "sV_pbElOkH", "LqJCuONt-zR", "Y38wFBmjJrW", "w3qHnI1IXyh", "y49jRnRy45N", "FaFBw3mrOOB", "3tguWHr4c8", "kqaztaZiu1", "4-0FFKYXY1j", "4gg5BOeokp8", "J5ddwmXC3L4", "FAm9-78C9-t", "-w8buVLx4o1", "ZJzPCNpK2He", "ciENTA1Mf7x", "AIwlhBeS8Ij"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "a...
[ "Thanks for your insightful feedback!\n\nExactly as you said, the main contribution of this work lies in modeling the global-semantic structure of graph embeddings via clustering different graphs in a hierarchical fashion. To the best of our knowledge, it is the first attempt to explore the semantic structure of a ...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8 ]
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "sV_pbElOkH", "jR0UpSK7Mzg", "FAm9-78C9-t", "y49jRnRy45N", "w3qHnI1IXyh", "iclr_2021_DAaaaqPv9-q", "ZJzPCNpK2He", "FaFBw3mrOOB", "-w8buVLx4o1", "AIwlhBeS8Ij", "jSgse61B9ao", "b4uaKv95lCI", "Y38wFBmjJrW", "iclr_2021_DAaaaqPv9-q", "jSgse61B9ao", "AIwlhBeS8Ij", "Y38wFBmjJrW", "b4uaKv9...
iclr_2021_fw1-fHJpPK
Decentralized Knowledge Graph Representation Learning
Knowledge graph (KG) representation learning methods have achieved competitive performance in many KG-oriented tasks, among which the best ones are usually based on graph neural networks (GNNs), a powerful family of networks that learns the representation of an entity by aggregating the features of its neighbors and it...
withdrawn-rejected-submissions
This paper brings interesting ideas (decentralized setting, auto-distillation) but it does not meet the very high requirements that a publication at ICLR requires. Three main reasons for that: 1/ Motivation & justification: Ultimately the paper is advocating for a pure decentralized approach "which encodes each entit...
train
[ "7-PpF95yDW3", "I34f-1lavra", "ML2J87jaAsE", "7rSQsY2QBXq", "qCvHej_VPeV", "iCCPCQ_j28g", "TlDz5gMcX17", "ctPaPGaqhIL" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "=== Summary ===\n\nThis paper proposes a \"decentralized\" method for representation learning in knowledge graphs that doesn't explicitly depend on a learned embedding for the entity node of interest, e_i. Rather, the embedding for e_i is constructed in a distributed fashion (similar in motivation to the distribut...
[ 4, 4, -1, -1, -1, -1, 5, 5 ]
[ 3, 4, -1, -1, -1, -1, 4, 2 ]
[ "iclr_2021_fw1-fHJpPK", "iclr_2021_fw1-fHJpPK", "I34f-1lavra", "TlDz5gMcX17", "7-PpF95yDW3", "ctPaPGaqhIL", "iclr_2021_fw1-fHJpPK", "iclr_2021_fw1-fHJpPK" ]
iclr_2021_pWipslK5xVf
Wasserstein Distributional Normalization : Nonparametric Stochastic Modeling for Handling Noisy Labels
We propose a novel Wasserstein distributional normalization (WDN) algorithm to handle noisy labels for accurate classification. In this paper, we split our data into uncertain and certain samples based on small loss criteria. We investigate the geometric relationship between these two different types of samples and en...
withdrawn-rejected-submissions
This paper proposes a potentially very interesting and original approach to handle label noise. The numerical experiments suggest that the method works very well. But the paper itself has been deemed very hard and demanding to read and understand for a general machine learning crowd and even by experts in the fields ...
train
[ "3A9WwuZoedA", "ew0pxf0UC8X", "ZY-eWGpfDVT", "00nFuHdz9Zz", "1MlJU5Rp_xt", "n75M3FGQno", "0VtTZwGd6nu", "0_z3oPosEHg", "0EJPfW4_Cem", "NmGYyr8MTYS", "tGDcp2acfpL", "52CazbIIPnQ", "Qqv0yomX5j", "HhORjFBCG94", "-4o0L39iCmU", "xkBVNKmPEzf", "jQSVXLW0Fuo", "Y77rbw2de5z", "HMX2BWDev2e...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\nThis paper aims to deal with the learning of noisy/corrupted labels based on the small loss criterion. If I understood well, the idea is to consider a new loss function on the Wasserstein space to learn the certain and uncertain data distributions. This loss function is based on a kind of penalty term ensuring that...
[ 4, 6, 6, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ 3, 2, 3, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_pWipslK5xVf", "iclr_2021_pWipslK5xVf", "iclr_2021_pWipslK5xVf", "0VtTZwGd6nu", "HMX2BWDev2e", "0_z3oPosEHg", "Qqv0yomX5j", "iclr_2021_pWipslK5xVf", "HMX2BWDev2e", "ZY-eWGpfDVT", "ZY-eWGpfDVT", "iclr_2021_pWipslK5xVf", "0_z3oPosEHg", "ZY-eWGpfDVT", "ZY-eWGpfDVT", "0_z3oPosEHg...
iclr_2021_1IBgFQbj7y
Maximum Categorical Cross Entropy (MCCE): A noise-robust alternative loss function to mitigate racial bias in Convolutional Neural Networks (CNNs) by reducing overfitting
Categorical Cross Entropy (CCE) is the most commonly used loss function in deep neural networks such as Convolutional Neural Networks (CNNs) for multi-class classification problems. In spite of the fact that CCE is highly susceptible to noise, CNN models trained without accounting for the unique noise characteristics o...
withdrawn-rejected-submissions
The paper introduces a new loss, Maximum Categorical Cross-Entropy, which combines the usual cross-entropy loss with a maximum entropy regularisation term on the convolutional kernels, and is evaluated on image classification. The authors have trained a face classification algorithm on two datasets: UTKFace (https://su...
train
[ "RB8oV50_yx-", "DeG6T8ri8tJ", "9IejdRQSqEs", "u6Y-6ZtfR7q", "3e_OHtMARuz" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper was flagged for evaluation by the ethics board based on the following:\n1.\tthe motivation of the paper is to reduce model overfitting and racial bias towards one category. However, there is no further discussion about any \"ethical, societal and practical concerns when dealing with facial datasets, esp...
[ -1, 5, 4, 3, 5 ]
[ -1, 4, 5, 4, 3 ]
[ "iclr_2021_1IBgFQbj7y", "iclr_2021_1IBgFQbj7y", "iclr_2021_1IBgFQbj7y", "iclr_2021_1IBgFQbj7y", "iclr_2021_1IBgFQbj7y" ]
iclr_2021_6GkL6qM3LV
N-Bref : A High-fidelity Decompiler Exploiting Programming Structures
Binary decompilation is a powerful technique for analyzing and understanding software, when source code is unavailable. It is a critical problem in the computer security domain. With the success of neural machine translation (NMT), recent efforts on neural-based decompiler show promising results compared to traditional...
withdrawn-rejected-submissions
The paper describes N-Bref, a new tool for decompilation of stripped binaries. Compared to previous tools for neural-based decompilation, this tool is based on two new ideas: a) to separate the generation of data declarations from the generation of the code itself, and b) the use of more sophisticated network architect...
train
[ "2qLxauzOcoZ", "jeiN0DEhGIH", "rHqz4Kgx__n", "bkY8wnv-0ic", "bQmOsQMhhiF", "ezglM0J2B77", "1x4ihnzcUWY", "_H6wrgteIwo", "XF8c0MoC-1" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have updated our paper. Please refer to the thread “Revised Paper Uploaded” for a summary of major updates in our revision. Following is a detailed response to Review #3 comments.\n\n1. **Training/evaluation time**:\n \nThe training speed is ~41.6 samples/s (single V100) so training each epoch takes ~7 mins ave...
[ -1, -1, -1, -1, -1, 4, 5, 7, 3 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "1x4ihnzcUWY", "ezglM0J2B77", "XF8c0MoC-1", "iclr_2021_6GkL6qM3LV", "_H6wrgteIwo", "iclr_2021_6GkL6qM3LV", "iclr_2021_6GkL6qM3LV", "iclr_2021_6GkL6qM3LV", "iclr_2021_6GkL6qM3LV" ]
iclr_2021_Sva-fwURywB
Efficiently Disentangle Causal Representations
In this paper, we propose a novel approach to efficiently learning disentangled representations with causal mechanisms, based on the difference of conditional probabilities in original and new distributions. We approximate the difference with model's generalization abilities so that it fits in standard machine learning...
withdrawn-rejected-submissions
This submission builds on recent work by Bengio et al. (2020) to learn disentangled representations of causal mechanisms. The main innovation in this work is that the authors propose to use compare the generalization gap between a model A->B and its causal inverse B->A to determine the true causal mechanism. Reviewer...
train
[ "KTJxQIK8Jp", "dgWXNrsMyFI", "QMnQZlTfVh", "bL9_GR3Uzj", "tqRjEngdEI", "jRXOnX_mi2L" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments.\n\nQ1: The phonemes/acoustics example just shows that structured learning may be beneficial, it doesn't rely on causality\n\nA1: The altitude example might be clearer, so we removed the phonemes/acoustics example.\n\nQ2: The explanation of equation 1 is unclear (or, possibly, wr...
[ -1, -1, -1, 3, 5, 4 ]
[ -1, -1, -1, 5, 4, 3 ]
[ "bL9_GR3Uzj", "tqRjEngdEI", "jRXOnX_mi2L", "iclr_2021_Sva-fwURywB", "iclr_2021_Sva-fwURywB", "iclr_2021_Sva-fwURywB" ]
iclr_2021_ELiYxj9JlyW
ME-MOMENTUM: EXTRACTING HARD CONFIDENT EXAMPLES FROM NOISILY LABELED DATA
Examples that are close to the decision boundary, which we term hard examples, are essential to shaping accurate classifiers. Extracting confident examples has been widely studied in the community of learning with noisy labels. However, it remains elusive how to extract hard confident examples from the noisy training ...
withdrawn-rejected-submissions
The authors propose a process to leverage the memorization effect of deep learning models to filter out examples at the boundary (hard) that the models are confident on, and argue that identifying those hard confident examples help improve the accuracy when learning under noisy data. The process essentially alternates ...
train
[ "nFczKfgmm9", "3IZOw9sGzRv", "f9Jc8LK8Ba", "k-t-BwRNrcW", "BYOmUQYBu7M", "D6L3EuwJGBU", "svTd45rxxj1", "PGZl9qGdqhH", "XLb9bkVW5V0" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes momentum of memorization as a way to distinguish hard examples needed for efficient learning from noisy examples which decrease classification accuracy. The method finds confident, hard examples and updates them dynamically during model training. This is done by iteratively selecting examples w...
[ 4, -1, -1, -1, -1, -1, 4, 7, 8 ]
[ 3, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_ELiYxj9JlyW", "svTd45rxxj1", "PGZl9qGdqhH", "nFczKfgmm9", "nFczKfgmm9", "XLb9bkVW5V0", "iclr_2021_ELiYxj9JlyW", "iclr_2021_ELiYxj9JlyW", "iclr_2021_ELiYxj9JlyW" ]
iclr_2021_wOI9hqkvu_
A Text GAN for Language Generation with Non-Autoregressive Generator
Despite the great success of Generative Adversarial Networks (GANs) in generating high-quality images, GANs for text generation still face two major challenges: first, most text GANs are unstable in training mainly due to ineffective optimization of the generator, and they heavily rely on maximum likelihood pretraining...
withdrawn-rejected-submissions
This paper proposes GAN-training of a non-autoregressive generator for text. To circumvent the usual problems with non-differentiability of text GANs, the authors turn to Gumbel-Softmax parameterisation and straight-through estimation. There are a number of aspects to this submission and they are not always clearly p...
test
[ "HLdDYmY3A", "J6YZBCou-cr", "BQLW2YznAyY", "VZIslghG1gD", "f6mQy-PW2Ff", "SQH_EwQ3PNS", "3YwQifvdDip", "N_fodn7rbT8", "oo1kteNSoz", "NJe_92pqN0F", "iq0K5Xm4ww6", "vuZv-au-5jy" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "**Summary**\nThis paper proposed a new text GAN framework by combining non-autoregressive text generator based on transformer, straight-through gradient approximation, and various regularization techniques such as gradient penalty and dropout. The paper demonstrates the superiority of non-autoregressive generator ...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_wOI9hqkvu_", "vuZv-au-5jy", "HLdDYmY3A", "iclr_2021_wOI9hqkvu_", "iclr_2021_wOI9hqkvu_", "vuZv-au-5jy", "VZIslghG1gD", "VZIslghG1gD", "vuZv-au-5jy", "HLdDYmY3A", "HLdDYmY3A", "iclr_2021_wOI9hqkvu_" ]
iclr_2021_-qB7ZgRNRq
Towards Data Distillation for End-to-end Spoken Conversational Question Answering
In spoken question answering, QA systems are designed to answer questions from contiguous text spans within the related speech transcripts. However, the most natural way that humans seek or test their knowledge is via human conversations. Therefore, we propose a new Spoken Conversational Question Answering task (SCQA), ...
withdrawn-rejected-submissions
The authors propose a dataset and a method for the task of SpokenQA. The dataset is generated by using Google TTS to generate audio segments corresponding to the CoQA dataset and then using an ASR system to generate (noisy) transcripts of these speech segments. The authors then propose a method which uses a combination...
train
[ "qFzbdVDzRh6", "zg8cWd3L90r", "aHj_PXJqPaE", "cTZ8XWb2En4", "QahrLY37Us7", "9lF44RwBYvh", "3swdK-jWmVG", "evkZQIWqsX1" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new task: spoken conversational question answering, which combines conversational question answering (e.g. CoQA) with spoken question answering (e.g. Spoken-SQuAD). The task is to answer a question (in written text) given a question that is given in both audio form and text form. They create ...
[ 6, 5, -1, -1, -1, -1, 4, 5 ]
[ 3, 4, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_-qB7ZgRNRq", "iclr_2021_-qB7ZgRNRq", "3swdK-jWmVG", "qFzbdVDzRh6", "evkZQIWqsX1", "zg8cWd3L90r", "iclr_2021_-qB7ZgRNRq", "iclr_2021_-qB7ZgRNRq" ]
iclr_2021_H38f_9b90BO
Towards Robust Graph Neural Networks against Label Noise
Massive labeled data have been used in training deep neural networks, thus label noise has become an important issue therein. Although learning with noisy labels has made great progress on image datasets in recent years, it has not yet been studied in connection with utilizing GNNs to classify graph nodes. In this pap...
withdrawn-rejected-submissions
We thank the authors for their detailed answers and for providing an updated version of the paper addressing several of the issues raised by the reviewers, including new experimental results. The paper is technically correct. The comparison with other methods is thorough and includes ablation studies clarifying the co...
train
[ "CfiUK9-4t-q", "ja3Hm0EPNr5", "91mrwdpw2Z", "TybGhjr6xIo", "jT_1RoRjwB", "N93hLNfZGYj", "yctm2N1LE3b", "uyXhiMNwchh", "AlA5s0LC9cr", "7RHeTJnpnB3" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " A summary of the major changes is given below:\n- We modify our contribution as the first work to handle the label noise existing in utilizing GNNs to classify graph nodes, which may serve as a beginning for future research towards robust GNNs against label noise.\n---\n- We describe our meta learning based...
[ -1, -1, -1, -1, -1, -1, 6, 5, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "N93hLNfZGYj", "AlA5s0LC9cr", "7RHeTJnpnB3", "yctm2N1LE3b", "uyXhiMNwchh", "iclr_2021_H38f_9b90BO", "iclr_2021_H38f_9b90BO", "iclr_2021_H38f_9b90BO", "iclr_2021_H38f_9b90BO", "iclr_2021_H38f_9b90BO" ]
iclr_2021_KcImcc3j-qS
Fast Predictive Uncertainty for Classification with Bayesian Deep Networks
In Bayesian Deep Learning, distributions over the output of classification neural networks are approximated by first constructing a Gaussian distribution over the weights, then sampling from it to receive a distribution over the categorical output distribution. This is costly. We reconsider old work to construct a Diri...
withdrawn-rejected-submissions
This paper proposes a mechanism for fast sampling from the posterior over the weights of the last layer of neural network, by approximating the logits as a Gaussian through equation 8. This is based on earlier work by MacKay, but has some new empirical investigations. This is a very difficult case. In its favor: * Des...
test
[ "v4BB7W5_Rft", "HX-EeAe5z-R", "IrGmeiLR12o", "0PA5sNlcAgu", "n8fztjDo3mY", "iz8ku_21_m", "u4Rll5ng0N0", "GOHiGJ3g5a6" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your feedback. Both of your criticisms are pretty clear so we will just answer them in chronological order.\n- **Concern 1 - Novelty:** While we agree that our paper is to some extent based on previous work, there are several novel aspects. Just to outline a few: a) The inverse direction of the LB as...
[ -1, -1, -1, -1, 4, 6, 5, 5 ]
[ -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "u4Rll5ng0N0", "GOHiGJ3g5a6", "n8fztjDo3mY", "iz8ku_21_m", "iclr_2021_KcImcc3j-qS", "iclr_2021_KcImcc3j-qS", "iclr_2021_KcImcc3j-qS", "iclr_2021_KcImcc3j-qS" ]
iclr_2021_fm58XfadSTF
Learning a Max-Margin Classifier for Cross-Domain Sentiment Analysis
Sentiment analysis is a costly yet necessary task for enterprises to study the opinions of their customers to improve their products and services and to determine optimal marketing strategies. Due to the existence of a wide range of domains across different products and services, cross-domain sentiment analysis methods ...
withdrawn-rejected-submissions
In this paper, the authors proposed a large-margin-based domain adaptation method for cross-domain sentiment analysis. The idea of developing a large-margin-based method for domain adaptation is not new. Though the proposed method contains some new ideas, the difference between the proposed method and the existing la...
test
[ "HWLx9nMTsjd", "g39b9-IGpHV", "cAw_5Tm8AU", "qvY3ivnfQ_Q", "LZPazrznM_r", "yJYblWXs1R-", "I_4nJU5F_Bw", "10h2vczKY1b", "89qGq_3pmyn", "7hL1xaFEhK1", "dflHV8t1SQL", "1Gvx8H641nh", "xZ_9kj54v_u", "uu-2cccsQY_" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Pros: \nThe proposed SAUM2 is novel. As a contribution, the work introduced large margins between classes in a source domain. This is relevant in a cross-domain sentiment analysis problem. The result shows that the resultant model is domain-independent which is Ok for a general application. \n\nCons: \nAlthough th...
[ 5, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_fm58XfadSTF", "I_4nJU5F_Bw", "qvY3ivnfQ_Q", "LZPazrznM_r", "yJYblWXs1R-", "dflHV8t1SQL", "iclr_2021_fm58XfadSTF", "iclr_2021_fm58XfadSTF", "HWLx9nMTsjd", "I_4nJU5F_Bw", "xZ_9kj54v_u", "uu-2cccsQY_", "iclr_2021_fm58XfadSTF", "iclr_2021_fm58XfadSTF" ]
iclr_2021_A7-rYAC-np1
Syntactic representations in the human brain: beyond effort-based metrics
We are far from having a complete mechanistic understanding of the brain computations involved in language processing and of the role that syntax plays in those computations. Most language studies do not computationally model syntactic structure, and most studies that do model syntactic processing use effort-based met...
withdrawn-rejected-submissions
This paper explores the brain's activity in response to language, specifically targeting the signatures of syntax in the brain. The authors specifically investigate the signatures of specific syntactic elements against the "typical" effort based syntax measures from some previous work. The title and abstract of the...
train
[ "D_fYEkkIiB_", "ih_E0CTCQyK", "-cD8-fo_h0j", "IjdDUhCxri", "VmaNGZ_OdXw", "Su3E5FfxvxT", "IN32cpE-C20", "uRxVtsLBW2J", "-Dyk6cI2Tmz", "tIrB4E8RZib", "JGvtM46NG9z", "1obJkmhgkOV", "_PNiHjH12Ne", "ACew4UjMjqs" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you very much for increasing the score! We would like to point out that we do test three different types of graph embeddings with each trying to answer a different question. The ConTreGE Comp vectors allow us to investigate if the brain is concentrating on local syntactic information; the ConTreGE vectors in...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 4 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "-cD8-fo_h0j", "iclr_2021_A7-rYAC-np1", "tIrB4E8RZib", "VmaNGZ_OdXw", "IN32cpE-C20", "ACew4UjMjqs", "_PNiHjH12Ne", "1obJkmhgkOV", "1obJkmhgkOV", "ih_E0CTCQyK", "iclr_2021_A7-rYAC-np1", "iclr_2021_A7-rYAC-np1", "iclr_2021_A7-rYAC-np1", "iclr_2021_A7-rYAC-np1" ]
iclr_2021_bG_lJcLwE3p
Deep Single Image Manipulation
Image manipulation has attracted much research over the years due to the popularity and commercial importance of the task. In recent years, deep neural network methods have been proposed for many image manipulation tasks. A major issue with deep methods is the need to train on large amounts of data from the same distri...
withdrawn-rejected-submissions
The reviews are a bit mixed. While all the reviewers feel that the paper proposed an interesting mechanism to train conditional generators from a single image and demonstrated good image editing results in the experiments, there are also common concerns about the practicality of the proposed method for interactive imag...
test
[ "FY2_muhRcU", "VF5zEq8Ywbx", "bMILW27Ua8H", "PXOPXi-OcmT", "kBm9YAoai6L", "rTXBiCyTOl", "VjV2LDha4__", "70jpLqsjJJX", "qLjE5R6Kc6S" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "This paper proposed a single image-based manipulation method (DeepSIM) using a conditional generative model. The authors addressed this problem by proposing to learn the mapping between a set of primitive representations, which consists of edges and segmentation masks, and an image. They also adopted a thin-plate-s...
[ 6, -1, -1, -1, 7, -1, -1, -1, 5 ]
[ 4, -1, -1, -1, 4, -1, -1, -1, 4 ]
[ "iclr_2021_bG_lJcLwE3p", "bMILW27Ua8H", "70jpLqsjJJX", "VjV2LDha4__", "iclr_2021_bG_lJcLwE3p", "FY2_muhRcU", "kBm9YAoai6L", "qLjE5R6Kc6S", "iclr_2021_bG_lJcLwE3p" ]
iclr_2021_mxIEptSTK6Z
Continual learning with neural activation importance
Continual learning is a form of online learning over multiple sequential tasks. One of the critical barriers of continual learning is that a network should learn a new task while keeping the knowledge of old tasks, without access to any data of the old tasks. In this paper, we propose a neuron importance based regula...
withdrawn-rejected-submissions
Although this paper proposes an intriguing method for using neuron-importance-based regularization to reduce catastrophic forgetting in continual learning, the method is substantially based upon Jung et al (2020), reducing its novelty. Additionally, the experimental evaluation was unconvincing that the proposed method ...
train
[ "GR5lHl1iCkc", "YnI2H7gAdPh", "usDIAAUphFT", "FNEKTi-H9ku", "lJ2ynGitdTU", "dcaCiUmXM9m", "ZXn1x1LpIuP", "UW-KAjjqNDL" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper describes an approach to regularization-based continual learning that looks to preserve parameters based on node importance rather than weight importance. The paper categorises existing techniques into these groups (most prior work is based on weight importance), argues that node importance is a better ...
[ 4, -1, -1, -1, -1, 4, 6, 4 ]
[ 4, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_mxIEptSTK6Z", "dcaCiUmXM9m", "UW-KAjjqNDL", "GR5lHl1iCkc", "ZXn1x1LpIuP", "iclr_2021_mxIEptSTK6Z", "iclr_2021_mxIEptSTK6Z", "iclr_2021_mxIEptSTK6Z" ]
iclr_2021_me5hEszKra4
ResPerfNet: Deep Residual Learning for Regressional Performance Modeling of Deep Neural Networks
The rapid advancements of computing technology facilitate the development of diverse deep learning applications. Unfortunately, the efficiency of parallel computing infrastructures varies widely with neural network models, which hinders the exploration of the design space to find high-performance neural network archite...
withdrawn-rejected-submissions
This paper presents a new method to predict the performance of deep neural networks. It evaluates the method on three different networks: LeNet, AlexNet, and VGG16 under two different frameworks, TensorFlow and TensorRT. Reviewer 2 thought that the results were promising but comparison with other approaches was weak (...
train
[ "itaGGgxaB5s", "su1b6BdIypq", "61jxsvmA14", "YYuhncs9w0W", "nisAmAgHQMr", "1inVLKpCa68", "y5iurqPbA82", "eSE5YlO8nOL", "3oPQdCd5N2H" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for the clarifications. It's good that you relate your work to the RACS paper. However, since it wasn't published yet I didn't want to ask for it because that might have compromised the reviewer anonymity. ", "A: Thank you for giving us an opportunity to emphasize the novelty of our work. \nWe aim to tack...
[ -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "1inVLKpCa68", "61jxsvmA14", "YYuhncs9w0W", "y5iurqPbA82", "eSE5YlO8nOL", "3oPQdCd5N2H", "iclr_2021_me5hEszKra4", "iclr_2021_me5hEszKra4", "iclr_2021_me5hEszKra4" ]
iclr_2021_33TBJachvOX
How to compare adversarial robustness of classifiers from a global perspective
Adversarial robustness of machine learning models has attracted considerable attention over recent years. Adversarial attacks undermine the reliability of and trust in machine learning models, but the construction of more robust models hinges on a rigorous understanding of adversarial robustness as a property of a give...
withdrawn-rejected-submissions
The authors study "robustness curves" which are plots of the robust error versus the radius used in the corresponding l_p-ball threat model. Pro: I completely agree with the authors that the current evaluation purely based on evaluation for a single radius is insufficient and one should report the complete curve. Co...
train
[ "vuoGtjl9trN", "cG4Sd-t8fOk", "vf_VyiqdU9P", "onRCKIih59s", "LV40CrSITe", "3P_HB7Etsy", "qKYZoWeYJhe", "GUwAw2WV_rL", "9yBqshCrrPB" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the thoughtful and thorough comments!\n\n\n> The paper clearly describes the problem it deals with and reads easily. On the other hand, I am not entirely convinced by the importance of the problem. If you have a particular specification that you care about (\"Robustness against l_inf attack for eps=0...
[ -1, -1, -1, -1, -1, 5, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "qKYZoWeYJhe", "qKYZoWeYJhe", "9yBqshCrrPB", "3P_HB7Etsy", "GUwAw2WV_rL", "iclr_2021_33TBJachvOX", "iclr_2021_33TBJachvOX", "iclr_2021_33TBJachvOX", "iclr_2021_33TBJachvOX" ]
iclr_2021_zOGdf9K8aC
Self-Supervised Variational Auto-Encoders
Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoder (selfVAE), that utilizes determin...
withdrawn-rejected-submissions
Reviewers appreciated the model and the ideas presented and found them very interesting. The main reason for rejection is the extent of the empirical work. Unfortunately, and I think this is a bad sign for the ICLR community, the authors could not do adequate empirical work due to limited computational resources. Not ...
train
[ "s2btPNPN3I_", "ijZNyPSULiT", "TfZQ_wuGrga", "-kYPs1asE2D", "0o04AuiIvJ", "ZPKmNgEG8bx", "EZPCIMRKXV_", "kmPCa5FOsV" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n## Summary\n\nThe paper presents a self-supervised variational auto-encoder called selfVAE. The work proposes the use of downscaling and edge detection as simpler representations of the input images to be reconstructed. The model should then learn to improve the low dimensional approximations to recover the high...
[ 5, -1, -1, -1, -1, 4, 4, 6 ]
[ 1, -1, -1, -1, -1, 4, 5, 4 ]
[ "iclr_2021_zOGdf9K8aC", "ZPKmNgEG8bx", "EZPCIMRKXV_", "s2btPNPN3I_", "kmPCa5FOsV", "iclr_2021_zOGdf9K8aC", "iclr_2021_zOGdf9K8aC", "iclr_2021_zOGdf9K8aC" ]
iclr_2021_2Id6XxTjz7c
Post-Training Weighted Quantization of Neural Networks for Language Models
As a practical model compression technique, parameter quantization is effective especially for language models associated with a large memory footprint. Neural network quantization is usually performed to reduce quantization loss assuming that quantization error of each parameter equally contributes to the overall trai...
withdrawn-rejected-submissions
The paper builds upon a recent paper BiQGEMM, providing a binary coding based post training quantization technique. The authors show how to combine magnitude-based importance metrics to these techniques and achieve superior performance. The use of importance metrics for quantization and pruning is not new, and magnitud...
train
[ "fric49kTobj", "AJZtvTbXvEB", "HaGpVtwztsr", "IRLZnqr-02b", "78s86muWQ5O", "Y05kkAWOI-R", "SERKhE7iUpH", "l4HfLBy8e3l", "sG90Q5GXPHv", "1Xv1lzOrn-p", "xsDhHpRzLIZ", "KeoezIVTlje", "EccUIMLoNTq", "pRN1ycoLL7u", "FLwmu-fO17q", "nMAhPKc7TRD", "lplaSf7nsXM", "LNM1QCfsq-z", "pLG4_cx9I...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ "1. To: \"it seems that you consider our search process (including BO) as a simple engineering effort without new research contributions.\" \n\nWhat I think is (and I repeat again): Searching contributes less to the **quantization** method in **research aspect**. And quantization should be the core of the paper. $...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "AJZtvTbXvEB", "HaGpVtwztsr", "Y05kkAWOI-R", "78s86muWQ5O", "SERKhE7iUpH", "l4HfLBy8e3l", "pRN1ycoLL7u", "sG90Q5GXPHv", "1Xv1lzOrn-p", "nMAhPKc7TRD", "iclr_2021_2Id6XxTjz7c", "LNM1QCfsq-z", "u4L_1ZNfm_G", "FLwmu-fO17q", "pLG4_cx9Ib7", "lplaSf7nsXM", "iclr_2021_2Id6XxTjz7c", "iclr_2...
iclr_2021_LzhEvTWpzH
Switching-Aligned-Words Data Augmentation for Neural Machine Translation
In neural machine translation (NMT), data augmentation methods such as back-translation make it possible to use extra monolingual data to help improve translation performance, while it needs extra training data and the in-domain monolingual data is not always available. In this paper, we present a novel data augmentati...
withdrawn-rejected-submissions
All reviewers agreed to reject.
test
[ "KwKLEfBICBW", "YnqIFp4JWNJ", "i7ZJ44rIP_n", "IF9_gDeTRku", "EwH-tvor1Ut", "owRICe6FYYc", "j5JWWZU-rHB", "rdApDRkuuiY" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your insightful comments.\n\n1) For all experiments, hyperparameters are optimized on a development set and then tested using only a single hyperparameter. We are sorry that we did not clear it explicitly and we will add it to the revised version of the paper.\n\n2) We have tied word embeddings between ...
[ -1, -1, -1, -1, 4, 4, 3, 2 ]
[ -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "j5JWWZU-rHB", "EwH-tvor1Ut", "rdApDRkuuiY", "owRICe6FYYc", "iclr_2021_LzhEvTWpzH", "iclr_2021_LzhEvTWpzH", "iclr_2021_LzhEvTWpzH", "iclr_2021_LzhEvTWpzH" ]