Dataset schema (12 columns):

paper_id            string, length 19-21
paper_title         string, length 8-170
paper_abstract      string, length 8-5.01k
paper_acceptance    string, 18 classes
meta_review         string, length 29-10k
label               string, 3 classes
review_ids          list
review_writers      list
review_contents     list
review_ratings      list
review_confidences  list
review_reply_tos    list
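A minimal sketch of what one record looks like under this schema, using field values taken from the first row of the dump. The helper `check_record` is hypothetical (not part of the dataset's tooling); it only encodes two structural facts visible in the data: six string columns, six list columns, and the six review lists are parallel (one entry per review, with -1 as the placeholder rating/confidence for non-review comments).

```python
# Hypothetical validator for a single record of this dataset.
# Assumptions: field names follow the schema header; the sample values
# below are copied from the first record (iclr_2021_UQz4_jo70Ci).

STRING_FIELDS = [
    "paper_id", "paper_title", "paper_abstract",
    "paper_acceptance", "meta_review", "label",
]
LIST_FIELDS = [
    "review_ids", "review_writers", "review_contents",
    "review_ratings", "review_confidences", "review_reply_tos",
]

def check_record(record):
    """True iff the record has six string fields, six list fields,
    and all review lists share the same length (parallel arrays)."""
    if not all(isinstance(record.get(f), str) for f in STRING_FIELDS):
        return False
    if not all(isinstance(record.get(f), list) for f in LIST_FIELDS):
        return False
    return len({len(record[f]) for f in LIST_FIELDS}) == 1

sample = {
    "paper_id": "iclr_2021_UQz4_jo70Ci",
    "paper_title": "SiamCAN:Simple yet Effective Method to enhance "
                   "Siamese Short-Term Tracking",
    "paper_abstract": "...",                       # truncated in the dump
    "paper_acceptance": "withdrawn-rejected-submissions",
    "meta_review": "...",                          # truncated in the dump
    "label": "val",                                # split: train/val/test
    "review_ids": ["EDJFGJcngeG", "U3YOaO-OmZo"],
    "review_writers": ["official_reviewer", "author"],
    "review_contents": ["...", "..."],
    "review_ratings": [3, -1],                     # -1: comment, not a review
    "review_confidences": [5, -1],
    "review_reply_tos": ["iclr_2021_UQz4_jo70Ci", "676Tf4Gf46Z"],
}
print(check_record(sample))  # True
```

Note that `review_reply_tos` encodes the discussion tree: an entry equal to the `paper_id` is a top-level review, while an entry equal to another review id is a reply in that thread.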
iclr_2021_UQz4_jo70Ci
SiamCAN:Simple yet Effective Method to enhance Siamese Short-Term Tracking
Most traditional Siamese trackers are used to regard the location of the max response map as the center of target. However, it is difficult for these traditional methods to calculate response value accurately when face the similar object, deformation, background clutters and other challenges. So how to get the reliabl...
withdrawn-rejected-submissions
All three reviewers initially recommended reject. The main concerns were: 1) weak technical contribution and insight [R1, R2, R3, R4]; 2) incremental novelty (another variation of SiamFC) [R1, R2, R3]; 3) unconvincing experiment results against missing SOTA [R1, R2, R3]; The author's response did not assuage these co...
val
[ "EDJFGJcngeG", "U3YOaO-OmZo", "4ZwT92PMSr", "ZDG_DpajYj", "PmP57zG3kXJ", "hVJ3L5xXl-P", "UkdXZncUWJ", "676Tf4Gf46Z" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents a Siamese-based single object tracking method using attention mechanism both in channel-wise and spatial-wise for learning deep correlation between exemplar and candidate images. Extensive experiments on UAV123, VOT2018 and VOT2019 demonstrate the effectiveness of the proposed method, and the 3...
[ 3, -1, -1, -1, -1, 4, 5, 3 ]
[ 5, -1, -1, -1, -1, 5, 4, 4 ]
[ "iclr_2021_UQz4_jo70Ci", "676Tf4Gf46Z", "EDJFGJcngeG", "hVJ3L5xXl-P", "UkdXZncUWJ", "iclr_2021_UQz4_jo70Ci", "iclr_2021_UQz4_jo70Ci", "iclr_2021_UQz4_jo70Ci" ]
iclr_2021_lDjgALS4qs8
To Understand Representation of Layer-aware Sequence Encoders as Multi-order-graph
In this paper, we propose a unified explanation of representation for layer-aware neural sequence encoders, which regards the representation as a revisited multigraph called multi-order-graph (MoG), so that model encoding can be viewed as a processing to capture all subgraphs in MoG. The relationship reflected by Multi...
withdrawn-rejected-submissions
The paper proposes to explain the representation for layer-aware neural sequence encoders with multi-order-graph (MoG). Based on the MoG explanation, it further proposes Graph-Transformer as a graph-based self-attention network empowered Transformer. As commented by the authors, a main purpose of Graph-Transformer is t...
val
[ "_hN00kyWYhs", "6IXdGoRZ7a", "b8f4NCCbNkW", "yaUXVX9HVe", "kCtCLftqV0", "xdSEW3rIrm_", "TzUNGgmuB__", "hMfd59dnRI6", "Cg8uKCA3PtI", "xGMlLY8qcXn", "uooj_jgRAd8" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "### Summary\nThe authors propose a new Transformer variant for neural machine translation. Compared with the standard Transformer framework, this work explains the representation generation process of the encoder via a multi-ordered-graph MoG and develops a novel Graph-Transformer method based on MoG, which is cap...
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_lDjgALS4qs8", "iclr_2021_lDjgALS4qs8", "iclr_2021_lDjgALS4qs8", "b8f4NCCbNkW", "iclr_2021_lDjgALS4qs8", "b8f4NCCbNkW", "b8f4NCCbNkW", "iclr_2021_lDjgALS4qs8", "6IXdGoRZ7a", "_hN00kyWYhs", "iclr_2021_lDjgALS4qs8" ]
iclr_2021_lo7GKwmakFZ
Average Reward Reinforcement Learning with Monotonic Policy Improvement
In continuing control tasks, an agent’s average reward per time step is a more natural performance measure compared to the commonly used discounting framework since it can better capture an agent’s long-term behavior. We derive a novel lower bound on the difference of the long-term average reward for two policies. The...
withdrawn-rejected-submissions
This paper proposes an extension of the monotonic policy improvement approach to the average reward case. Although the reviewers acknowledge that this work has merits (well written, clearly organized, well-motivated, technically sound) the reviewers have raised several concerns, which have been only partially addressed...
test
[ "QNBgOGqeQRx", "nvVU-vUGL-p", "ku9Sm5NbmFj", "cuz1CHx9cf", "qsbxLziX1QI", "VH4o3Z0U3Kr", "mavSzSxqtpx", "bYJef2VCLce", "cIkeQvCrcs6", "voLI9RA1AJv", "QVnAiDMB5Xa", "2-PBaDQdRNt", "i5ZGejNafA", "CqjY64E8pIL" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper identifies an important problem: policy optimization in undiscounted continuing tasks. This is indeed important since the discounting factor may not be appropriate in certain applications such as health care or robotics. I feel the main contribution of this paper is Theorem 1, an average reward version ...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "iclr_2021_lo7GKwmakFZ", "iclr_2021_lo7GKwmakFZ", "cuz1CHx9cf", "VH4o3Z0U3Kr", "QVnAiDMB5Xa", "2-PBaDQdRNt", "QNBgOGqeQRx", "i5ZGejNafA", "CqjY64E8pIL", "iclr_2021_lo7GKwmakFZ", "iclr_2021_lo7GKwmakFZ", "iclr_2021_lo7GKwmakFZ", "iclr_2021_lo7GKwmakFZ", "iclr_2021_lo7GKwmakFZ" ]
iclr_2021_NZj7TnMr01
Improving Neural Network Accuracy and Calibration Under Distributional Shift with Prior Augmented Data
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators. However, neural networks are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions. The problem of overconfidence becomes especially...
withdrawn-rejected-submissions
This paper studies the problem of uncertainty estimation under distribution shift. The proposed approach (PAD) addresses this under-estimation issue, by augmenting the training data with inputs that the network has unjustified low uncertainty estimates, and asking the model to correct this under-estimation at those aug...
train
[ "Jg4ahMpSR2", "kEtCHAJFQtW", "4egI8AGwndS", "4xkrKSEWPKP", "EQHT0PogwvB", "AmGdjIsXe6u", "74qLEzjscVe", "0SVRRY9GzNP", "yco8gOAQHo", "kmgyuSNrKgP", "O9ZB87PqKd", "9IuDacY9xY", "YEMQ92LeGlZ", "_-LvAg8qJem", "8XFbUPqzaL5", "P_hFrb-ArqR", "E9gnFkaKaDX", "zsu3wloTB2T", "pPiqgpvnGep",...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ "I have read the authors' responses to all reviews and ultimately elected to leave my score as it is (weak accept). I think the empirical results are strong, and while I am not as troubled by the motivation and framing of the work as reviewers 3 and 4, I think their more conceptual and methodological critiques have...
[ 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_NZj7TnMr01", "iclr_2021_NZj7TnMr01", "iclr_2021_NZj7TnMr01", "Iu3I51wOKSh", "AmGdjIsXe6u", "UMHrYRlMZLG", "0SVRRY9GzNP", "_-LvAg8qJem", "kmgyuSNrKgP", "Ti_KScANC1d", "9IuDacY9xY", "iclr_2021_NZj7TnMr01", "kEtCHAJFQtW", "kEtCHAJFQtW", "vLCKXzHYjb1", "Iu3I51wOKSh", "Jg4ahMpS...
iclr_2021_Zc36Mbb8G6
Data Instance Prior for Transfer Learning in GANs
Recent advances in generative adversarial networks (GANs) have shown remarkable progress in generating high-quality images. However, this gain in performance depends on the availability of a large amount of training data. In limited data regimes, training typically diverges, and therefore the generated samples are of l...
withdrawn-rejected-submissions
The paper proposes to use a feature extractor (encoder) $C(x)$, pre-trained with label supervision or contrastive learning on a large image dataset, to both regularize the discriminator's last feature layer $D_f(x)$ and encode the data $x$ itself as the conditional input of the generator $G(z|G_{emb}(C(x)))$. The main ...
train
[ "W19FHwzaWNo", "3TdX7wH-GJ8", "HXGABEGt47B", "q-L_bYkZb0K", "L750_Q6ooky", "2acDg-rHgL7", "oon5Nihf3xZ", "zljEUjgWYu3", "hSCtanfL2fP", "4Ec335jbOZ-", "4ae1pqab0r", "TFaPtWq-J7x" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all reviewers for insightful and constructive feedback. We are encouraged to note that reviewers found the approach of Data Instance Prior (DIP) as interesting/convincing (R2,R3,R4); extensive/effective quantitative and qualitative results (R2,R3,R4) and the approach makes sense (R1,R3). We have corrected...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "iclr_2021_Zc36Mbb8G6", "4Ec335jbOZ-", "4ae1pqab0r", "L750_Q6ooky", "hSCtanfL2fP", "oon5Nihf3xZ", "zljEUjgWYu3", "TFaPtWq-J7x", "iclr_2021_Zc36Mbb8G6", "iclr_2021_Zc36Mbb8G6", "iclr_2021_Zc36Mbb8G6", "iclr_2021_Zc36Mbb8G6" ]
iclr_2021_pW--cu2FCHY
An Attention Free Transformer
We introduce Attention Free Transformer (AFT), an efficient variant of Transformers \citep{transformer} that eliminates the need for dot product attention. AFT offers great simplicity and efficiency compared with standard Transformers, where the multi-head attention operation is replaced with the composition of element...
withdrawn-rejected-submissions
The new non linearity proposed in this paper present interesting observations and improvements on image and text datasets. However, reviewers point out that there should’ve been more comparisons to other efficient transformers and on more datasets. The speed improvements are also not clear. I’d encourage the authors to...
train
[ "t8Zh497sEfg", "qsJ2Bsxjydg", "9XOJFL1xyU", "TMYe5B5_dPC", "pLgC3qCpdUn", "ROJLnBrEXuC", "Oi66x-ocNlP", "7jhtW9oFSd", "EoZqv3YU_BN" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for the insightful comments.\n\n1. Motivation and connection to MHA\nWe first apologize for the confusion. Our intention is to propose AFT as a new family of model, not as an approximation to MHA. The \"derivation\" with relu and n_head=n_dim case is indeed trying to make an connection betwee...
[ -1, -1, -1, -1, -1, 4, 6, 5, 4 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "ROJLnBrEXuC", "7jhtW9oFSd", "Oi66x-ocNlP", "EoZqv3YU_BN", "iclr_2021_pW--cu2FCHY", "iclr_2021_pW--cu2FCHY", "iclr_2021_pW--cu2FCHY", "iclr_2021_pW--cu2FCHY", "iclr_2021_pW--cu2FCHY" ]
iclr_2021_rryJiPXifr
Optimization Planning for 3D ConvNets
3D Convolutional Neural Networks (3D ConvNets) have been regarded as a powerful class of models for video recognition. Nevertheless, it is not trivial to optimally learn a 3D ConvNets due to high complexity and various options of the training scheme. The most common hand-tuning process starts from learning 3D ConvNets ...
withdrawn-rejected-submissions
The reviewers appreciate the idea of hyperparameter planning and the thorough experimentation. Some concerns remain regarding the comparison between this method and SlowFast that require to be addressed. Also, the scope of the paper that targets hyperparameter optimization networks for action recognition specifically, ...
val
[ "YsZuEHtGnK7", "G6XQ33tejxg", "gvGGhOWz512", "nj329R2tNfL", "Ct_wkJcBiHY", "CwrtZW5orSw", "_BiKiPSl17q", "hZuFQf1rEaL", "zPVNza_QRK" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n This paper proposed two things: \n 1. hyper-parameter planning for training action recognition models. \n 2. a new 3D net architecture for action recognition. \n\n The results show with the planning, authors can reduce training time significantly. and the proposed DG-P3D also is good for acti...
[ 5, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ 5, -1, -1, -1, -1, -1, 1, 4, 5 ]
[ "iclr_2021_rryJiPXifr", "iclr_2021_rryJiPXifr", "YsZuEHtGnK7", "_BiKiPSl17q", "hZuFQf1rEaL", "zPVNza_QRK", "iclr_2021_rryJiPXifr", "iclr_2021_rryJiPXifr", "iclr_2021_rryJiPXifr" ]
iclr_2021_Ms9zjhVB5R
SOAR: Second-Order Adversarial Regularization
Adversarial training is a common approach to improving the robustness of deep neural networks against adversarial examples. In this work, we propose a novel regularization approach as an alternative. To derive the regularizer, we formulate the adversarial robustness problem under the robust optimization framework and a...
withdrawn-rejected-submissions
This paper proposes a regularization approach based on the second-order Taylor expansion of the loss objective to improve robustness of the trained models against \ell_inf and \ell_2 attacks. It is interesting to explore the second order-based regularization approach for network robustness. However, as pointed out by t...
train
[ "IoQ7arTXGQX", "nh1uYs8OHI", "02iDO21QG9R", "v5LkleW9EFg", "LBqgwAgYfF3", "ERkHq_XBcUn", "XArg7zmVCGf", "eihee1_AhpX", "HggljC8Mkqy", "l5aXHfGm8hn", "2qkrEUk9CA7" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe paper proposed a regularizer loss as an alternative to adversarial training to improve the robustness of neural networks against adversarial attacks. The new regularizer is derived from a second-order Tyler series expansion of the loss function in the model robustness optimization problem. Clear ma...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_Ms9zjhVB5R", "iclr_2021_Ms9zjhVB5R", "v5LkleW9EFg", "XArg7zmVCGf", "IoQ7arTXGQX", "l5aXHfGm8hn", "eihee1_AhpX", "2qkrEUk9CA7", "2qkrEUk9CA7", "iclr_2021_Ms9zjhVB5R", "iclr_2021_Ms9zjhVB5R" ]
iclr_2021_0n3BaVlNsHI
DJMix: Unsupervised Task-agnostic Augmentation for Improving Robustness
Convolutional Neural Networks (CNNs) are vulnerable to unseen noise on input images at the test time, and thus improving the robustness is crucial. In this paper, we propose DJMix, a data augmentation method to improve the robustness by mixing each training image and its discretized one. Discretization is done in an un...
withdrawn-rejected-submissions
This paper proposes to improve the robustness of computer vision models through a new augmentation strategy. There are two primary contributions of the work, first the use of a bottleneck autoencoder to generate discretized variants of the clean image, and second a slight variant of the task loss, where the task loss i...
train
[ "HMBtwpVABNu", "2UgqAyI8oX", "OnJX-twF7eB", "mTfW1qdALa", "O1SCiRwlry3", "xvMuE0vZA7", "tnjD9_L9VGG", "t1ZTKPi3jmC" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We appreciate your detailed comments and suggestions. In summary of our paper, we proposed a task-agnostic data augmentation method to improve the robustness of CNN models using mixing of images and their discretized ones. We are encouraged that R2 finds our method is neat, the analysis is reasonable, and the resu...
[ -1, -1, -1, -1, 4, 5, 4, 5 ]
[ -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "xvMuE0vZA7", "tnjD9_L9VGG", "t1ZTKPi3jmC", "O1SCiRwlry3", "iclr_2021_0n3BaVlNsHI", "iclr_2021_0n3BaVlNsHI", "iclr_2021_0n3BaVlNsHI", "iclr_2021_0n3BaVlNsHI" ]
iclr_2021_VG3i3CfFN__
PhraseTransformer: Self-Attention using Local Context for Semantic Parsing
Semantic parsing is a challenging task whose purpose is to convert a natural language utterance to machine-understandable information representation. Recently, solutions using Neural Machine Translation have achieved many promising results, especially Transformer because of the ability to learn long-range word dependen...
withdrawn-rejected-submissions
This paper proposes an attention mechanism that works at the phrase level for semantic parsing. Reviewrs agree that the idea has been previously explored outside semantic parsing, that the gains should be shown on less saturated datasets, and that there are issues in the experimental design (observing test set results ...
train
[ "7dO025WVpw3", "YRe1ycoXGBo", "kuhBZ-hNJ2b", "abECky9-lK", "lhkSbMeCO6l", "RSQy37Iuia3", "hmHor482DA2", "UAgO0xMUWXg" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper modifies the self-attention mechanism in transformers to function at the phrase level, rather than at the token level, as a means to improve alignments between input phrases and logical form predicates for Semantic Parsing tasks. They achieve this by using LSTMs on the token representations to form n-gra...
[ 5, -1, -1, -1, -1, 3, 3, 7 ]
[ 5, -1, -1, -1, -1, 5, 3, 4 ]
[ "iclr_2021_VG3i3CfFN__", "RSQy37Iuia3", "hmHor482DA2", "UAgO0xMUWXg", "7dO025WVpw3", "iclr_2021_VG3i3CfFN__", "iclr_2021_VG3i3CfFN__", "iclr_2021_VG3i3CfFN__" ]
iclr_2021_Kzg0XmE6mxu
Adversarial Deep Metric Learning
Learning a distance metric between pairs of examples is widely important for various tasks. Deep Metric Learning (DML) utilizes deep neural network architectures to learn semantic feature embeddings where the distance between similar examples is close and dissimilar examples are far. While the underlying neural network...
withdrawn-rejected-submissions
This paper proposed a novel Adversarial Deep Metric Learning approaches. The reviews pointed out the paper proposes an interesting idea and it is among the rare works that address directly robust metric learning which an important topic for efficient metric learning. Some concerns were raised about the analysis and th...
train
[ "8-Rd2jhmHL", "_QIRY3MBq2", "ptWBcWff91k", "2aqqSJPCeK", "y8zYc1HjBQ", "cqczBt2vrYM", "uALE2-LKYpO", "z_n9HAjtvf7", "W3nsxzF7Mes", "kkoNEZHJzkc", "-nXrRCOCb7", "0DhSBVLWFO", "Z-9uw23WGam", "8ktWr7bd9i", "zCrTGfIYQY-", "EhRiT16WjiJ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "edit after rebuttal:\n\nMy opinion about the paper has not changed. Although the general idea is interesting, my main concern is that the approach aims at performing defense against a specific attack. The robustness of the approach w.r.t. other attacks (such as L_2 and L_0) needs to be evaluated.\n\n====\n\nThe pa...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_Kzg0XmE6mxu", "iclr_2021_Kzg0XmE6mxu", "_QIRY3MBq2", "z_n9HAjtvf7", "8-Rd2jhmHL", "zCrTGfIYQY-", "8-Rd2jhmHL", "uALE2-LKYpO", "-nXrRCOCb7", "8ktWr7bd9i", "Z-9uw23WGam", "_QIRY3MBq2", "zCrTGfIYQY-", "EhRiT16WjiJ", "iclr_2021_Kzg0XmE6mxu", "iclr_2021_Kzg0XmE6mxu" ]
iclr_2021_bK-rJMKrOsm
Multi-Head Attention: Collaborate Instead of Concatenate
Attention layers are widely used in natural language processing (NLP) and are beginning to influence computer vision architectures. However, they suffer from over-parameterization. For instance, it was shown that the majority of attention heads could be pruned without impacting accuracy. This work aims to enhance curre...
withdrawn-rejected-submissions
This paper proposes an interesting collaborative multi-head attention (MHA) method to enable heads to share projections, which can reduce parameters and FLOPs of transformer-based models without hurting performance on En-De translation tasks. For pre-trained language models, a tensor decomposition method is used to eas...
test
[ "hMa-yylY1q-", "LtQDXO9FCN5", "JDlEdDi8mCa", "ZEnoRnjqMVv", "8ZL4dIzQ7_o", "_kff3tGLGH", "NC8EHArn2nJ", "LKjoPMngOz5", "R7ybJ1oS4j1", "gaIVHDDyK-Z", "6GmmS2pl843", "lcd2MpjuPx", "K4mGNXyqoGD" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper analyzes the multi-head attention in transformers and suggests to use collaboration instead of concatenation of multiple heads. Empirical results on WMT’16 English-German demonstrates that the proposed approach reduces the of parameters without sacrificing performance. Further experiments on pre-trained...
[ 5, 5, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_bK-rJMKrOsm", "iclr_2021_bK-rJMKrOsm", "ZEnoRnjqMVv", "8ZL4dIzQ7_o", "_kff3tGLGH", "gaIVHDDyK-Z", "iclr_2021_bK-rJMKrOsm", "iclr_2021_bK-rJMKrOsm", "NC8EHArn2nJ", "LtQDXO9FCN5", "hMa-yylY1q-", "K4mGNXyqoGD", "iclr_2021_bK-rJMKrOsm" ]
iclr_2021_Ggx8fbKZ1-D
Adaptive Hierarchical Hyper-gradient Descent
Adaptive learning rates can lead to faster convergence and better final performance for deep learning models. There are several widely known human-designed adap- tive optimizers such as Adam and RMSProp, gradient based adaptive methods such as hyper-descent and L4, and meta learning approaches includ...
withdrawn-rejected-submissions
The paper proposes an optimization framework that automatically adapts the learning rates at different levels of a neural network based on hypergradient descent. The AC and reviewers all found the approach interesting and promising and appreciate the author feedback. We strongly encourage the authors to incorporate ...
test
[ "gRC1Bc8271V", "QzF89c7o_sD", "VSSzTR1sV7C", "WZnnyGucsFl", "7s61pCAaV3Y", "_dXvRS35DPV", "It_MHp9uXB", "B83ZcQ6SmTH", "QprWTFcCXaR", "e33-7yHVZUx", "BekG2FjPKDF", "p545ESVG2yE", "Ico3elPoDqc", "X78J1aL2CgW" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Update: I really appreciate the response from the authors. Some of my original concerns have been addressed, and additional experiments help to show the benefits of CAM-HD, so I have increased my score to 5. But, after reading other reviews and responses, I still believe that this work needs to be compared to adva...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2021_Ggx8fbKZ1-D", "iclr_2021_Ggx8fbKZ1-D", "gRC1Bc8271V", "QzF89c7o_sD", "It_MHp9uXB", "X78J1aL2CgW", "BekG2FjPKDF", "Ico3elPoDqc", "iclr_2021_Ggx8fbKZ1-D", "_dXvRS35DPV", "B83ZcQ6SmTH", "VSSzTR1sV7C", "iclr_2021_Ggx8fbKZ1-D", "iclr_2021_Ggx8fbKZ1-D" ]
iclr_2021_bodgPrarPUJ
Lipschitz-Bounded Equilibrium Networks
This paper introduces new parameterizations of equilibrium neural networks, i.e. networks defined by implicit equations. This model class includes standard multilayer and residual networks as special cases. The new parameterization admits a Lipschitz bound during training via unconstrained optimizatio...
withdrawn-rejected-submissions
This paper is an extension of Monotone Operator Equilibrium Networks (MON). It first tries to address a key issue in MON: whether the activation function $\sigma$ can be represented by a proximal operator of some function $f$. Then it derives the constraints on the weight $W$. Connections to neural ODEs and convex opti...
train
[ "1hAgXDNxQLz", "_iFN_Jm2wwM", "Q4zUlFNfYZv", "Ws9q30Lscu", "CXyIakaE55f", "YCr3NkAn1Y4", "PSqlLE62Q1", "9mpBrHBk_g", "C1MrdKsPHbK", "EMf2eSshzv" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "\nSummary: The paper introduces a new condition for showing the existence of the solution of a deep equilibrium model (which defines an implicit mapping via the fixed point). The new formulation also comes with a convenient and accurate Lipschitz bound. The proposed condition can be satisfied via reparameterizing ...
[ 6, 6, -1, -1, -1, -1, -1, -1, 7, 8 ]
[ 3, 2, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_bodgPrarPUJ", "iclr_2021_bodgPrarPUJ", "iclr_2021_bodgPrarPUJ", "C1MrdKsPHbK", "C1MrdKsPHbK", "_iFN_Jm2wwM", "1hAgXDNxQLz", "EMf2eSshzv", "iclr_2021_bodgPrarPUJ", "iclr_2021_bodgPrarPUJ" ]
iclr_2021_ysXk8cCHcQN
Fast 3D Acoustic Scattering via Discrete Laplacian Based Implicit Function Encoders
Acoustic properties of objects corresponding to scattering characteristics are frequently used for 3D audio content creation, environmental acoustic effects, localization and acoustic scene analysis, etc. The numeric solvers used to compute these acoustic properties are too slow for interactive applications. We presen...
withdrawn-rejected-submissions
The authors were responsive to the comments of the reviewers, both in the rebuttal and in the revision to the manuscript. However, the reviewers were still concerned about the lack of clarity of the manuscript, the motivation for the design decisions, and errors, present also in the rebuttal revisions.
train
[ "CZWSu6EENqX", "G4LdNu12y9v", "II4tjyqEjMY", "0-SM0_WBvd8", "eLhCwXvXa4N", "6wB9LLpufGb", "tWPKgjQukA", "fJrQZfoYaO" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to train a neural network to predict the acoustic scattering effects of a 3D object. The input of the system is a 3D point cloud representing the object, and the outputs are 16 coefficients from the spherical harmonic decomposition of the sound field resulting from an incoming planar wave to th...
[ 4, -1, -1, -1, -1, -1, 6, 3 ]
[ 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_ysXk8cCHcQN", "CZWSu6EENqX", "CZWSu6EENqX", "tWPKgjQukA", "iclr_2021_ysXk8cCHcQN", "fJrQZfoYaO", "iclr_2021_ysXk8cCHcQN", "iclr_2021_ysXk8cCHcQN" ]
iclr_2021_cT0jK5VvFuS
Uncertainty in Neural Processes
We explore the effects of architecture and training objective choice on amortized posterior predictive inference in probabilistic conditional generative models. We aim this work to be a counterpoint to a recent trend in the literature that stresses achieving good samples when the amount of conditioning data is large. ...
withdrawn-rejected-submissions
This paper analyzes some design choices for neural processes, paying particular attention to their small-data performance, uncertainty, and posterior contraction. This is certainly a worthwhile project, and R3 found the analysis interesting, giving the paper a score of 8. However, R1, R2, and R4 found the experimenta...
test
[ "57Le8gn2iQj", "rlh7ZGAw2Eg", "92G-M0wTWN", "V_ghkTA9jKB", "y9cAwCx21uR", "mXK41eML6y", "k2MKS4r1uCl", "nZUeu9cKub", "ftdzTUyETGf", "vbuB6iBc4Z6" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: This paper is an empirical investigation into the role of architecture and objective choices in Neural Process (NP) models when the amount of conditioning data is limited. Specifically, they investigate the question of well-calibrated uncertainty. \n\nClarity: The overall quality of the writing is clear, ...
[ 5, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2021_cT0jK5VvFuS", "92G-M0wTWN", "mXK41eML6y", "nZUeu9cKub", "vbuB6iBc4Z6", "ftdzTUyETGf", "57Le8gn2iQj", "iclr_2021_cT0jK5VvFuS", "iclr_2021_cT0jK5VvFuS", "iclr_2021_cT0jK5VvFuS" ]
iclr_2021_4jXnFYaDOuD
Importance-based Multimodal Autoencoder
Integrating information from multiple modalities (e.g., verbal, acoustic and visual data) into meaningful representations has seen great progress in recent years. However, two challenges are not sufficiently addressed by current approaches: (1) computationally efficient training of multimodal autoencoder networks whi...
withdrawn-rejected-submissions
The paper proposes an auto-encoder framework IMA, a scalable model that learns the importance of modalities along with robust multimodal representations through a novel cross-covariance based loss function, in an unsupervised manner. They have compared their approach to SOTA methods via multiple experiments and shown h...
test
[ "iwxcSqVxCU6", "AwjQ_5tabh_", "YuRMFJLKhXH", "HEhN__ejo01", "hC4Zt8Ee_Dy", "C1bJDEU_XHP", "y11X5idzBLs", "qu4MWIC-8Mh", "HtonQQ1koQs", "mW3JD5Z8eN", "h7U0Sk-mlM1" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes the IMA model, a scalable model that learns modality importances and robust multimodal representations through a novel cross-covariance based loss function. The proposed model performs unimodal inference in absence of modalities and also addresses the problem of detecting important subspaces in ...
[ 5, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_4jXnFYaDOuD", "iclr_2021_4jXnFYaDOuD", "hC4Zt8Ee_Dy", "h7U0Sk-mlM1", "mW3JD5Z8eN", "iclr_2021_4jXnFYaDOuD", "iwxcSqVxCU6", "HtonQQ1koQs", "iclr_2021_4jXnFYaDOuD", "iclr_2021_4jXnFYaDOuD", "iclr_2021_4jXnFYaDOuD" ]
iclr_2021_cAvgPMAA3hb
GRF: Learning a General Radiance Field for 3D Scene Representation and Rendering
We present a simple yet powerful implicit neural function that can represent and render arbitrarily complex 3D scenes in a single network only from 2D observations. The function models 3D scenes as a general radiance field, which takes a set of 2D images with camera poses and intrinsics as input, constructs an internal...
withdrawn-rejected-submissions
The paper presents an extension of recent implicit representations for view synthesis, such as NeRF. The presented formulation accepts an image set as input at test time, and can thus in principle be applied to new scenes. The idea is sound, but reviewers had concerns with the presentation and the experimental results....
train
[ "M3pPPfQC6N", "7a5hBLcLg2G", "mcBOcIYzf9r", "20n5tfLbgbT", "7H72zZ-0GC7", "yN0jjbpIF-J", "lqMrVcEZ8sh", "KFNtGeN4M4", "cRvHunFUQhi", "JRVc8laZzV", "Qk0GBDPv5K", "DfG_ZIgUWat", "XtQL1ZtCHx" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Concern 1**\\\nThanks you for your detailed replies. However, I still think some of my concerns are not well addressed. The main reason is still lack or more experimental results to support the argument made in this ... ... and the writing about methodology still needs to be improved. \n\nFor example, it is stil...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 6, 7 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "20n5tfLbgbT", "yN0jjbpIF-J", "7H72zZ-0GC7", "lqMrVcEZ8sh", "iclr_2021_cAvgPMAA3hb", "JRVc8laZzV", "Qk0GBDPv5K", "7H72zZ-0GC7", "DfG_ZIgUWat", "XtQL1ZtCHx", "iclr_2021_cAvgPMAA3hb", "iclr_2021_cAvgPMAA3hb", "iclr_2021_cAvgPMAA3hb" ]
iclr_2021_9nIulvlci5
Neural Random Projection: From the Initial Task To the Input Similarity Problem
The data representation plays an important role in evaluating similarity between objects. In this paper, we propose a novel approach for implicit data representation to evaluate similarity of input data using a trained neural network. In contrast to the previous approach, which uses gradients for representation, we uti...
withdrawn-rejected-submissions
Techniques are introduced for improving representation learning capabilities of neural networks, and the result is interpreted in terms of random projections. In further discussion, even the reviewer with the highest grade said that the paper does not yet have enough clarity to address the reviewers' comments. Particu...
train
[ "qP-Hjb9iFjq", "CImp5Utuq0v", "dE64I-6G7Q-", "zQuphEW0kwT", "AXISW-U9RxY", "PBBk9cCtxFR" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the usage of the representations developed in the last layer of a neural network as a way to measure the similarity between input patterns. The fundamental idea revolves around the concept of orthogonal weight matrices, to decorrelate the activations of the neurons, and which would definitely enr...
[ 4, -1, -1, -1, 7, 3 ]
[ 4, -1, -1, -1, 3, 4 ]
[ "iclr_2021_9nIulvlci5", "AXISW-U9RxY", "qP-Hjb9iFjq", "PBBk9cCtxFR", "iclr_2021_9nIulvlci5", "iclr_2021_9nIulvlci5" ]
iclr_2021_tbwjUvUzQRU
A Communication Efficient Federated Kernel k-Means
A federated kernel k-means algorithm is developed in this paper. This algorithm resolves two challenging issues: 1) how to distributedly solve the optimization problem of kernel k-means under federated settings; 2) how to maintain communication efficiency in the algorithm. To tackle the first challenge, a distributed s...
withdrawn-rejected-submissions
This paper presents an approach to the distributed kernel k-means problem using a combination of random features to efficiently approximate the kernel matrix, a distributed stochastic proximal gradient algorithm which calls a distributed Lanczos algorithm as a primitive to find a low-rank approximation to the kernel mat...
train
[ "-bzpPnS8Vmp", "FG_d-Jy3fm7", "JF3IaHcgkSd", "7qPVijr0R6r", "6VfuZqHQsGD", "xYnrYVSzlYM", "KSlRzgmVvc_", "rm0QbwUdiYW", "w_nEcBQSDwL", "D8AkCtziVcq", "10xrRBZCqTc", "sTZMI9AMvEc", "cFIBNaP_aNg", "VrOOrkjXp4H", "HYQw5gSUZwt", "XJEFrCiFDDP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\n\nThis paper proposes a federated kernel k-means algorithm (FK k-means). The algorithm consists of two parts: a distributed stochastic proximal gradient descent (DSPGD) update rule, and a communication efficient mechanism (CEM) to reduce the communication cost. Instead of solving the original integer pro...
[ 5, 5, 1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 2, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_tbwjUvUzQRU", "iclr_2021_tbwjUvUzQRU", "iclr_2021_tbwjUvUzQRU", "xYnrYVSzlYM", "iclr_2021_tbwjUvUzQRU", "10xrRBZCqTc", "rm0QbwUdiYW", "w_nEcBQSDwL", "FG_d-Jy3fm7", "10xrRBZCqTc", "JF3IaHcgkSd", "cFIBNaP_aNg", "VrOOrkjXp4H", "XJEFrCiFDDP", "-bzpPnS8Vmp", "iclr_2021_tbwjUvUzQR...
iclr_2021_SQfqNwVoWu
Approximate Probabilistic Inference with Composed Flows
We study the problem of probabilistic inference on the joint distribution defined by a normalizing flow model. Given a pre-trained flow model p(x), we wish to estimate p(x2∣x1) for some arbitrary partitioning of the variables x=(x1,x2). We first show that this task is computationally hard for a large class of flow mode...
withdrawn-rejected-submissions
This paper proposes a method for conditional inference with arbitrary conditioning by creating composed flows. The paper provides a hardness result for arbitrary conditional queries. Motivated by the fact that conditional inference is hard, the paper therefore suggests a novel relaxation where the *conditioning* is rela...
train
[ "Ps8inpQVSYq", "sUFP8ex1yOW", "8kECD84zaR0", "C8NVrtaXvIf", "q_uR5ECjeK8", "Ghn-Bje5Np", "sx1Vto5heiA", "w0E_ve-AZ3_", "ZOn9xHK3tP", "uOeclzk31wf", "3-gIYyX1gvd", "TZi-LXY6X1X", "OUOrPyW2k3j", "STKnXsIl2l4" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary: The paper proposes to solve the conditional inference problem by performing a relaxed version of variational inference in the prior space of the flow-based model. The model p(x) is pretrained, and one is interested sampling from p(x|observation). The observation could be some subset of x (inpainting), gra...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4 ]
[ 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_SQfqNwVoWu", "iclr_2021_SQfqNwVoWu", "C8NVrtaXvIf", "uOeclzk31wf", "TZi-LXY6X1X", "sUFP8ex1yOW", "w0E_ve-AZ3_", "STKnXsIl2l4", "uOeclzk31wf", "Ps8inpQVSYq", "TZi-LXY6X1X", "OUOrPyW2k3j", "iclr_2021_SQfqNwVoWu", "iclr_2021_SQfqNwVoWu" ]
iclr_2021_DQpwoZgqyZ
Model information as an analysis tool in deep learning
Information-theoretic perspectives can provide an alternative dimension for analyzing the learning process and complement the usual performance metrics. Recently, several works proposed methods for quantifying information content in a model (which we refer to as "model information"). We demonstrate using model information a...
withdrawn-rejected-submissions
This work presented a broad set of interesting applications of model information toward understanding task difficulty, domain similarity, and more. However, reviewers were concerned about the validity and rigor of the conclusions. Going into more depth in a subset of the areas presented would strengthen the paper, as ...
train
[ "Q8JPfwVIy7J", "U8Mt5KPYUB8", "5IqxPvg48pY", "qhwE7d4QWK9", "FYxtMz05j37", "FJVS-gTR0vN", "6WZhE8bFdYj", "jmzXj92LlxD", "XC0GcjYVC_b" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper examines different use-cases of a quantity proposed in prior works which is said to capture the model information. It shows that this quantity behaves as expected overall. Quantifying the amount of information a deep neural network is a very interesting question for the community with both theoretical an...
[ 4, -1, -1, -1, -1, -1, 4, 6, 4 ]
[ 2, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2021_DQpwoZgqyZ", "XC0GcjYVC_b", "6WZhE8bFdYj", "FYxtMz05j37", "jmzXj92LlxD", "Q8JPfwVIy7J", "iclr_2021_DQpwoZgqyZ", "iclr_2021_DQpwoZgqyZ", "iclr_2021_DQpwoZgqyZ" ]
iclr_2021_JAlqRs9duhz
Straight to the Gradient: Learning to Use Novel Tokens for Neural Text Generation
Advanced large-scale neural language models have led to significant success in many natural language generation tasks. However, the most commonly used training objective, Maximum Likelihood Estimation (MLE), has been shown to be problematic, as the trained model prefers using dull and repetitive phrases. In this wor...
withdrawn-rejected-submissions
This paper proposes ScaleGrad, a simple technique to encourage generating non-repetitive tokens for text generation tasks. The key idea is to modify a language model's token-level distributions by rescaling the softmax probability for certain words (in the novel set) by a factor of $\gamma$. Experiments show that Scale...
train
[ "Y7_SCEsK8og", "3_jEVP1T5t", "hZDHvFAk2G", "84s-GIHdgFU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**I have updated this review after noting the authors’ detailed response.**\n\nThis paper focuses on the problem of “Neural Text Degeneration”—where text sampled from a language model can either be too repetitive and bland or too random and nonsensical. The authors focus largely on the former problem, proposing a ...
[ 4, 6, 5, 6 ]
[ 4, 4, 5, 3 ]
[ "iclr_2021_JAlqRs9duhz", "iclr_2021_JAlqRs9duhz", "iclr_2021_JAlqRs9duhz", "iclr_2021_JAlqRs9duhz" ]
iclr_2021_uJSBC7QCfrX
Differential-Critic GAN: Generating What You Want by a Cue of Preferences
This paper proposes the Differential-Critic Generative Adversarial Network (DiCGAN) to learn the distribution of user-desired data when only part of the dataset, instead of the entire dataset, possesses the desired properties. Existing approaches select the desired samples first and train regular GANs on the selected samples to derive t...
withdrawn-rejected-submissions
The paper presents a method to regularize the discriminator in GAN training with a ranking loss based on the user preference for a desired set within a larger dataset. The tradeoff between the GAN loss and the preference loss depends on the distance of the set to the full dataset, and the authors consider two regimes: "sma...
train
[ "4i0tzCBh1Ia", "IglYO6UxTpv", "MZfvHwkWlHJ", "dHindLVXvDs", "jRsz4kuh3H1", "uECfU5srzM", "4Q-pKsTygTx", "qZfL1z50sJJ", "ZLco_cbTjcO", "jzXpO_L54tP", "EJqG4yUw841", "ZTtxLlQqsLi", "x0SzcNOaYRM", "iufohqZbk3y", "QNWevyBzpf", "ErVMYc36gg" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "- Overview:\n\tThe paper addresses the problem of training a GAN to match the distribution of part of the dataset called the 'desired data distribution', instead of the whole dataset as usually done in the context of GANs. This problem can be of interest when for instance the 'desired training data' is limited an...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_uJSBC7QCfrX", "iclr_2021_uJSBC7QCfrX", "dHindLVXvDs", "jRsz4kuh3H1", "uECfU5srzM", "4Q-pKsTygTx", "ZLco_cbTjcO", "IglYO6UxTpv", "jzXpO_L54tP", "QNWevyBzpf", "iclr_2021_uJSBC7QCfrX", "4i0tzCBh1Ia", "ErVMYc36gg", "ZTtxLlQqsLi", "iclr_2021_uJSBC7QCfrX", "iclr_2021_uJSBC7QCfrX" ...
iclr_2021_uFHwB6YTxXz
Distribution-Based Invariant Deep Networks for Learning Meta-Features
Recent advances in deep learning from probability distributions successfully achieve classification or regression from distribution samples, and are thus invariant under permutation of the samples. The first contribution of the paper is to extend these neural architectures to achieve invariance under permutation of the feature...
withdrawn-rejected-submissions
This paper invariantizes distribution-based deep networks by using pairwise embedding of the set’s elements. The idea is inspired by De Bie et al. (2019), which allows invariance to be incorporated through the interaction functional. Although the paper is well executed with solid theoretical analysis and solid resp...
train
[ "96PKktH_tWk", "4fyzHw-ERM", "u2MVu6O0v8U", "8ChoyCbSYUW", "RO5uAJzVvpp", "Eik0oSLjzBG", "HlGVOUIONP", "dalatKsAdJt", "JJ6bkttrDDV", "QQyk-PXjYp3", "LUAVOIfsWg5", "2cVCd3R59M" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Dear reviewers, we have submitted a new version of our work, with added experiments on the stability of the meta-features with respect to permutations and sampling strategies (Appendix D.5, see detailed answer to #AnonReviewer1). We thank all reviewers for their constructive feedback, which has helped us improve t...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 5, 6, 7 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "iclr_2021_uFHwB6YTxXz", "8ChoyCbSYUW", "iclr_2021_uFHwB6YTxXz", "JJ6bkttrDDV", "iclr_2021_uFHwB6YTxXz", "2cVCd3R59M", "QQyk-PXjYp3", "LUAVOIfsWg5", "u2MVu6O0v8U", "iclr_2021_uFHwB6YTxXz", "iclr_2021_uFHwB6YTxXz", "iclr_2021_uFHwB6YTxXz" ]
iclr_2021_Mub9VkGZoZe
Identifying Informative Latent Variables Learned by GIN via Mutual Information
How to learn a good representation of data is one of the most important topics of machine learning. Disentanglement of representations, though believed to be the core feature of good representations, has caused a lot of debate and discussion recently. Sorrenson et al. (2020), using the techniques developed in nonli...
withdrawn-rejected-submissions
The paper proposes a method to identify informative latent variables by thresholding based on the conditional generative model. While the exposition of the paper has substantially improved during the discussion period, some major concerns remain after the discussion among the reviewers. In particular, the problem consi...
train
[ "Y4BVCW8ivWl", "odthAeSvw_", "ecxvwBCOjSA", "ozUJArn6rU", "g7pCG00q4rm", "ZSqYi-Gdq5L", "LcZwUZ8sL1s", "opF6bGrqkf", "Qbsb3a_pqn", "rMkicHsHbUv", "DiCRXvfcoFd", "1tP-042Vix6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer" ]
[ "This paper builds on top of the paper “Disentanglement by nonlinear ICA with General Incompressible-Flow Networks (GIN)” (Sorrenson, 2020) and argues that that paper’s method of identifying informative latent variables was wrong and instead suggests that informative latent variables can be identified by thresholdi...
[ 6, 5, 5, -1, -1, -1, -1, -1, -1, -1, 6, 4 ]
[ 2, 3, 3, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_Mub9VkGZoZe", "iclr_2021_Mub9VkGZoZe", "iclr_2021_Mub9VkGZoZe", "iclr_2021_Mub9VkGZoZe", "DiCRXvfcoFd", "odthAeSvw_", "1tP-042Vix6", "Y4BVCW8ivWl", "rMkicHsHbUv", "ecxvwBCOjSA", "iclr_2021_Mub9VkGZoZe", "iclr_2021_Mub9VkGZoZe" ]
iclr_2021_1toB0Fo9CZy
Neural Architecture Search of SPD Manifold Networks
In this paper, we propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks. Unlike the conventional NAS problem, our problem requires searching for a unique computational cell called the SPD cell. This SPD cell serves as a basic building block of SPD neural architect...
withdrawn-rejected-submissions
The paper tries to find better symmetric positive definite (SPD) manifold networks using neural architecture search. However, as pointed out by the reviewers, the paper has a few weaknesses: (a) it lacks novelty, (b) it lacks experiments that are mentioned in the SOTA papers, (c) the experiments should be performed wi...
train
[ "UpOIkmysxp_", "Ny7nKJ5shof", "V--p-bG-2B", "QxJ50t-_fdX", "xV8gqsM0AY", "ZIlJ2SAdgr3", "HpWvuiC6Cd2", "o0O11VKtCfw", "XaVmZF4KP6-", "qvXBZksYLRn", "bTyO8kbQfdl", "yZ2Pz4yTPAR", "cf2wYGadh6u", "4-ezZrNk7OR", "HsQLJQq6FtC" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for their constructive feedback. We address the major concerns about the novelty and the significance of the proposed method below, followed by an additional comment on the summary of our major updates in the revision.\n\n- SPD Cell design:\nDifferentiable NAS methods (such as DARTS [1]...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 3, 4 ]
[ "iclr_2021_1toB0Fo9CZy", "4-ezZrNk7OR", "UpOIkmysxp_", "cf2wYGadh6u", "ZIlJ2SAdgr3", "HpWvuiC6Cd2", "QxJ50t-_fdX", "qvXBZksYLRn", "HsQLJQq6FtC", "yZ2Pz4yTPAR", "XaVmZF4KP6-", "iclr_2021_1toB0Fo9CZy", "iclr_2021_1toB0Fo9CZy", "iclr_2021_1toB0Fo9CZy", "iclr_2021_1toB0Fo9CZy" ]
iclr_2021_KiFeuZu24k
Global Self-Attention Networks for Image Recognition
Recently, a series of works in computer vision have shown promising results on various image and video understanding tasks using self-attention. However, due to the quadratic computational and memory complexities of self-attention, these works either apply attention only to low-resolution feature maps in later stages o...
withdrawn-rejected-submissions
The paper was reviewed by four expert reviewers. Unfortunately, all reviewers uniformly felt that the paper fell marginally below the bar and argued for rejection. A number of concerns were identified by the reviewers in the review phase. These included: (1) lack of novelty [Reviewer3, Reviewer4], (2) lack of various ablatio...
train
[ "Cp0FV7T2tGa", "h0oKsquPJsR", "HF9HhpXvkDa", "BPVfoLbNlk5", "PMTMTLNa4T4", "G4TYCLjIlp9", "cdfY4gJi4jc", "ToByRiPtdme", "OF845mZJhT", "Owf4_LNJTGP", "wNLWb_ra-l5", "0uHmEfCwrA8", "WE4k-YC5Go", "aGsWVdUBI3q", "cFVM8LrTCrg", "Eh4x80jL3gI", "JmUlAoF5lEK", "W8IKyehQF13", "kWTqXEnEun8...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "# Post-rebuttal update\n\nI would like to thank the authors for the detailed feedback. I am now convinced about the statistical significance of the results. Regarding the additional study, while it is true that the combination of the changes, in addition to the softmax, was what made the results improve, the chang...
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2021_KiFeuZu24k", "iclr_2021_KiFeuZu24k", "cFVM8LrTCrg", "Eh4x80jL3gI", "W8IKyehQF13", "h0oKsquPJsR", "Cp0FV7T2tGa", "h0oKsquPJsR", "h0oKsquPJsR", "kWTqXEnEun8", "h0oKsquPJsR", "kWTqXEnEun8", "aGsWVdUBI3q", "Eh4x80jL3gI", "W8IKyehQF13", "h0oKsquPJsR", "Cp0FV7T2tGa", "iclr_202...
iclr_2021_4vDf4Qtodh
InstantEmbedding: Efficient Local Node Representations
In this paper, we introduce InstantEmbedding, an efficient method for generating single-node representations using local PageRank computations. We prove that our approach produces globally consistent representations in sublinear time. We demonstrate this empirically by conducting extensive experiments on real-world da...
withdrawn-rejected-submissions
This paper proposes an efficient algorithm to obtain a node embedding based on its local PageRank scores. The proposed approach uses a hashing technique and a local partition approach to make the method more efficient and effective. However, the paper has significant drawbacks and can be further improved in the followin...
train
[ "FoNIg9OoF-r", "4ekzZfYPW0A", "Ymbxb4FxU1t", "LEMk_tS1Q8u", "pG2n4lP1AkF", "8dqZ6gjsKO_", "25ccV8hEPqX", "Xa_XJwFGTq9", "EpXQR8vU083", "xGKBj59tPR", "Q2OD7b87ErJ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "[1] Additional runtime measurements have been added in the revised version.\n\n[3] While our method is strongly related to previous work based on explicit matrix factorisation such as Qiu et al. (2018) and Brochier et al. (2019), we remark that our local PPR formulation is different from the truncated random walks...
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ "4ekzZfYPW0A", "LEMk_tS1Q8u", "Xa_XJwFGTq9", "xGKBj59tPR", "Q2OD7b87ErJ", "25ccV8hEPqX", "EpXQR8vU083", "iclr_2021_4vDf4Qtodh", "iclr_2021_4vDf4Qtodh", "iclr_2021_4vDf4Qtodh", "iclr_2021_4vDf4Qtodh" ]
iclr_2021_xTV-wQ-pMrU
Shuffle to Learn: Self-supervised learning from permutations via differentiable ranking
Self-supervised pre-training using so-called "pretext" tasks has recently shown impressive performance across a wide range of tasks. In this work we advance self-supervised learning from permutations, which consists of shuffling parts of the input and training a model to reorder them, improving downstream performance in cla...
withdrawn-rejected-submissions
There is clear consensus on this submission. Reviewers cite a lack of comparison with recent state-of-the-art methods and experiments on more realistic datasets. Though the reviewers find aspects of the approach interesting, the decision is to reject.
train
[ "7QDcoieE_bF", "-PTkDWBKfm", "z-0dWMsSkki" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper applied differentiable ranking operator on unsupervised learning framework that uses permutation based pretext task. They evaluated the proposed approach in audio, video, and image classification tasks. The results show that the proposed differentiable ranking operator is showing better performance than ...
[ 4, 4, 4 ]
[ 2, 4, 4 ]
[ "iclr_2021_xTV-wQ-pMrU", "iclr_2021_xTV-wQ-pMrU", "iclr_2021_xTV-wQ-pMrU" ]
iclr_2021_WtlM9p1bVAw
Unsupervised Class-Incremental Learning through Confusion
While many works on Continual Learning have shown promising results for mitigating catastrophic forgetting, they have relied on supervised training. To successfully learn in a label-agnostic incremental setting, a model must distinguish between learned and novel classes to properly include samples for training. We intr...
withdrawn-rejected-submissions
This paper presents a continual learning method based on a novelty detection technique. All reviewers are concerned about various issues, especially motivation, experiments, and presentation. One of the reviewers was initially positive about this paper but downgraded his/her score due to unresolved problems in the prop...
test
[ "2uVRZ3cTzD4", "XsDqseYDmis", "Pkl82iRK9Kj", "VM8R-aBLVYZ", "_zxsTe5iLgb", "dgdDV7De7U-", "rgOKd2r-tiS", "oSzYirz_04" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your thorough and detailed review. We appreciate the feedback and will do our best to address your comments. \n\nTerminology for figure 2 has been changed to “repeated” and “non-repeated.” \n\nThe class imbalance ratio represents the proportion of samples used per class from the exemplar versus the n...
[ -1, -1, -1, -1, 3, 3, 4, 6 ]
[ -1, -1, -1, -1, 3, 2, 4, 4 ]
[ "oSzYirz_04", "rgOKd2r-tiS", "dgdDV7De7U-", "_zxsTe5iLgb", "iclr_2021_WtlM9p1bVAw", "iclr_2021_WtlM9p1bVAw", "iclr_2021_WtlM9p1bVAw", "iclr_2021_WtlM9p1bVAw" ]
iclr_2021_f_GA2IU9-K-
Non-decreasing Quantile Function Network with Efficient Exploration for Distributional Reinforcement Learning
Although distributional reinforcement learning (DRL) has been widely examined in the past few years, there are two open questions people are still trying to address. One is how to ensure the validity of the learned quantile function; the other is how to efficiently utilize the distribution information. This paper attem...
withdrawn-rejected-submissions
This work proposes a non-decreasing quantile functional form for distributional RL, and secondly proposes using the distributional error as a means of exploration. The experimental results are very exciting. The paper, however, needs further work before acceptance: the reviewers raised concerns about Theorem 1: a full p...
val
[ "vH5Myh4Gr9Q", "VUHWLnvyHS5", "CQZ4sj6rwwu", "qLzzdyKn0zJ", "DIC72YwbF4K", "61uXm9ygeV", "-75O62cG0lu", "H1O3kw91pnQ", "lFI3BFPosBj", "u_ue0wxoD4k", "LUrOqKbxe6w" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Paper Summary: This paper mainly contributes in two parts: 1). A non-decreasing structure for quantile values in quantile-based distributional RL. 2). A curiosity-based intrinsic reward using distribution disagreement.\n\nClarity: \n- Some mathematical expressions, while correct, are hard to interpret, e.g. G_{i,\...
[ 6, 5, -1, -1, -1, -1, -1, -1, -1, 4, 6 ]
[ 5, 4, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_f_GA2IU9-K-", "iclr_2021_f_GA2IU9-K-", "vH5Myh4Gr9Q", "vH5Myh4Gr9Q", "LUrOqKbxe6w", "u_ue0wxoD4k", "VUHWLnvyHS5", "VUHWLnvyHS5", "LUrOqKbxe6w", "iclr_2021_f_GA2IU9-K-", "iclr_2021_f_GA2IU9-K-" ]
iclr_2021__ojjh-QFiFr
Language-Mediated, Object-Centric Representation Learning
We present Language-mediated, Object-centric Representation Learning (LORL), learning disentangled, object-centric scene representations from vision and language. LORL builds upon recent advances in unsupervised object segmentation, notably MONet and Slot Attention. Just like these algorithms, LORL also learns an objec...
withdrawn-rejected-submissions
While the paper addresses a topic of interest and presents an evaluation on three synthetic datasets (PartNet-Chairs, Shop-VRB-Simple, and CLEVR), several concerns and weaknesses remain after the author response. Main Concerns and Weaknesses: * The main improvement comes from the additional supervision provided...
train
[ "hfib2bHPXhV", "1wdLnWl8bz", "-usPlmaWfw", "X-13oZfSPwi", "KgcyEm6am5E", "7bVAeSNdMC", "W5oI-0K9u4s", "RX6eAaWEIhb", "9txaRnhGQHw", "Zz05effWqk", "ITHu9S3COQl", "SSv6s9v-arO", "Cxt1kmh4xNB", "03TB8ylHLX", "AFf6ay_3ss", "_x506rwOLvD" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "### Summary\n\nThis paper proposes to combine the neuro-symbolic concept learner for visual reasoning from language (NS-CL; Mao et al., 2019) with recent unsupervised approaches to learning object-centric representations such as MONet (Burgess et al., 2019) and Slot-Attention (Locatello et al., 2020). While NS-CL ...
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ 4, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021__ojjh-QFiFr", "iclr_2021__ojjh-QFiFr", "iclr_2021__ojjh-QFiFr", "iclr_2021__ojjh-QFiFr", "hfib2bHPXhV", "KgcyEm6am5E", "-usPlmaWfw", "iclr_2021__ojjh-QFiFr", "Zz05effWqk", "03TB8ylHLX", "W5oI-0K9u4s", "7bVAeSNdMC", "_x506rwOLvD", "1wdLnWl8bz", "X-13oZfSPwi", "iclr_2021__ojjh...
iclr_2021_Oi-Kh379U0
Generalizing and Tensorizing Subgraph Search in the Supernet
Recently, a special kind of graph, i.e., the supernet, which allows two nodes to be connected by multi-choice edges, has exhibited its power in neural architecture search (NAS) by searching for better architectures for computer vision (CV) and natural language processing (NLP) tasks. In this paper, we discover that the design of suc...
withdrawn-rejected-submissions
This paper proposes a new and general formulation for the supernet, which encodes the supernet with a tensor network (TN). The idea is interesting and well motivated. However, the paper is not well presented, and the clarity needs to be further improved. The effectiveness of the algorithm is not well justified and the experimental results are le...
train
[ "UeExtMBTFGb", "4xG41W6lWj2", "6lkd0Jspne_", "_0n8TtbapGe", "PVhsykz5aCc", "pyVuateiBdW", "lYqSf9sLnA", "eB5aHlKM-09" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThis paper proposes a new and general formulation for supernet, which encodes supernet with tensor network(TN). Based on TN, the topology of supernet can be encoded. Besides, this paper proposes a corresponding algorithm to solve the search problem. \n\nReasons for score:\n\nOverall, I vote for accepti...
[ 5, 4, -1, -1, -1, -1, 5, 5 ]
[ 3, 5, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_Oi-Kh379U0", "iclr_2021_Oi-Kh379U0", "lYqSf9sLnA", "eB5aHlKM-09", "4xG41W6lWj2", "UeExtMBTFGb", "iclr_2021_Oi-Kh379U0", "iclr_2021_Oi-Kh379U0" ]
iclr_2021_XOuAOv_-5Fx
Uncertainty Calibration Error: A New Metric for Multi-Class Classification
Various metrics have recently been proposed to measure uncertainty calibration of deep models for classification. However, these metrics either fail to capture miscalibration correctly or lack interpretability. We propose to use the normalized entropy as a measure of uncertainty and derive the Uncertainty Calibration E...
withdrawn-rejected-submissions
This work proposes a novel metric for measuring calibration error in classification models. Pros: * Novel calibration metric addressing limitations of previously used metrics such as ECE Cons: * Limited experimental validation on CIFAR-10/CIFAR-100 only * Unclear impact beyond proposing a new calibration metric * Unc...
val
[ "C7VCzjPHm2j", "8nTzC1CtA26", "Fd2OJAEd-AD", "4pFBsGON1Yp", "qQ0uD5TVX7A", "CvMv16ohWIe", "b9qljoluQPv", "UHyloZDTaMf", "kX4DvwuE8D", "rFTTGLW8FiE", "TzHBni0BBH", "4wLgZuv5QVo", "cXlTFYNwq4S", "iymdlltTViD", "H6kLa6Z5xi2", "ml__1UKzm7g", "n75AOuEvCDu", "jYJLKBQhSn_", "jyaNcLhjGjG...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "author", ...
[ "The work addresses an important problem in the study of uncertainty estimation: how does one compare model uncertainty at differing accuracy levels? The work proposes a novel uncertainty metric, relates this to existing methods and provides robust evaluation of the various merits of this approach. The paper is eas...
[ 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_XOuAOv_-5Fx", "iclr_2021_XOuAOv_-5Fx", "cXlTFYNwq4S", "iclr_2021_XOuAOv_-5Fx", "CvMv16ohWIe", "b9qljoluQPv", "UHyloZDTaMf", "wd8-Yb9VujY", "rFTTGLW8FiE", "4wLgZuv5QVo", "cXlTFYNwq4S", "iymdlltTViD", "ml__1UKzm7g", "ml__1UKzm7g", "2nu6qeuQS_B", "C7VCzjPHm2j", "jyaNcLhjGjG",...
iclr_2021_b4ach0lGuYO
Iterative Image Inpainting with Structural Similarity Mask for Anomaly Detection
Autoencoders have emerged as popular methods for unsupervised anomaly detection. Autoencoders trained on the normal data are expected to reconstruct only the normal features, allowing anomaly detection by thresholding reconstruction errors. However, in practice, autoencoders fail to model small detail and yield blurry ...
withdrawn-rejected-submissions
The initial reviews were a bit split. R4 was slightly positive, R3 was slightly negative, and both R1 and R2 voted for rejection. The main issue was lack of proper comparisons with the SOTA methods and missing references. In the rebuttal, the authors added additional experiments as requested, but R1 and R2 were not con...
train
[ "_8wfZwiCKkM", "XFqzzkVRwt", "OiWUzvYyp4_", "J18t4x2XxFT", "kB1MEM7vjsc", "j1qiOvZkjb", "SOKoPizrwwH", "R457kGhcb8V" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your time assessing our paper and your valuable feedback. We revised our papers following your feedback. We appreciate if we can have your review again.\n\nPlease let us explain unclear points here.\n\n> The numbers given in table 1 for the baseline SOT methods are not from the literature. The author...
[ -1, -1, -1, -1, 4, 2, 5, 6 ]
[ -1, -1, -1, -1, 4, 5, 3, 1 ]
[ "kB1MEM7vjsc", "R457kGhcb8V", "j1qiOvZkjb", "SOKoPizrwwH", "iclr_2021_b4ach0lGuYO", "iclr_2021_b4ach0lGuYO", "iclr_2021_b4ach0lGuYO", "iclr_2021_b4ach0lGuYO" ]
iclr_2021_n4IMHNb8_f
Differentiable Spatial Planning using Transformers
We consider the problem of spatial path planning. In contrast to the classical solutions which optimize a new plan from scratch and assume access to the full map with ground truth obstacle locations, we learn a planner from the data in a differentiable manner that allows us to leverage statistical regularities from pas...
withdrawn-rejected-submissions
This paper proposes to jointly learn a mapper and planner for navigation or manipulation in a 2D space represented by an MxM grid, with the mapper taking raw observations as inputs and producing a 2D MxM occupancy and goal location map, and the planner -- pretrained on generic 2D maps of the same size MxM -- produces an Mx...
train
[ "_vKPJUUXHXs", "6SgACq6B9np", "J0wH2wfUnT2", "zqQB33s43ql", "oUIqQoi0chW", "8Vdz3pa3Uxu", "qsSYJDqEkJc", "c4nQtsTj6aA", "-imnRSc--9x" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "\nThis paper presents Spatial Planning Transformers (SPTs); neural network modules that perform spatial planning over grid-like state spaces. The paper also goes on to present the idea that differentiable mapping and differntiable planning modules could be trained end-to-end, for better performance. This is evalua...
[ 5, 4, 6, -1, -1, -1, -1, -1, 7 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_n4IMHNb8_f", "iclr_2021_n4IMHNb8_f", "iclr_2021_n4IMHNb8_f", "J0wH2wfUnT2", "-imnRSc--9x", "qsSYJDqEkJc", "_vKPJUUXHXs", "6SgACq6B9np", "iclr_2021_n4IMHNb8_f" ]
iclr_2021_muu0gF6BW-
Cubic Spline Smoothing Compensation for Irregularly Sampled Sequences
The marriage of recurrent neural networks and neural ordinary differential equations (ODE-RNN) is effective in modeling irregularly sampled sequences. While ODE produces the smooth hidden states between observation intervals, the RNN will trigger a hidden state jump when a new observation arrives and thus cause th...
withdrawn-rejected-submissions
This paper introduces a form of cubic smoothing for use with ODE-RNNs, to remove the jump when new observations occur. I think this paper's motivation is based on a misunderstanding of what the hidden state of an RNN represents. Specifically, an RNN hidden state is a belief state, not the estimated state of the syste...
train
[ "aJBdyXZ4zex", "myAOjmPs-c", "IdbH70rLEIy", "6oG4E7qNwQ", "-46oJ3Lqi7", "wgB98vzl_fk", "125tSkkH_Eq", "Kg09i2hsr39" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper builds on ODE-RNN model that allows to represent a time series as a continuous trajectory. The authors address the limitation of the ODE-RNN model that the trajectory is continuous everywhere except the observation points. They introduce a compensation term based on cubic splines that transforms the outp...
[ 7, -1, -1, -1, -1, 7, 5, 5 ]
[ 5, -1, -1, -1, -1, 3, 5, 3 ]
[ "iclr_2021_muu0gF6BW-", "125tSkkH_Eq", "Kg09i2hsr39", "aJBdyXZ4zex", "wgB98vzl_fk", "iclr_2021_muu0gF6BW-", "iclr_2021_muu0gF6BW-", "iclr_2021_muu0gF6BW-" ]
iclr_2021_QjINdYOfq0b
ABS: Automatic Bit Sharing for Model Compression
We present Automatic Bit Sharing (ABS) to automatically search for optimal model compression configurations (e.g., pruning ratio and bitwidth). Unlike previous works that consider model pruning and quantization separately, we seek to optimize them jointly. To deal with the resultant large designing space, we propose a ...
withdrawn-rejected-submissions
The paper proposes to integrate multiple bit configurations (including pruning) into a single architecture, and then automatically select bit resolution through binary gates. The overall approach can be differentiable and optimized with parameters. However, as pointed out by the reviewers, the novelty of this paper can...
train
[ "6hQ7hscFGUA", "llViRpE_lgm", "wIuO1pH5xMd", "i4hNJI7A3B7", "QpgLuS162qP", "u2jORPnFfz", "-hrzH16qpf", "U_K4jyhHpK7", "zmSsNI1hbK" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your constructive comments and suggestions.\n\n**Q3.1.** Discussions with concurrent work (van Baalen, 2020).\n\n**A3.1.** Our ABS and Bayesian Bits (van Baalen, 2020) are developed concurrently that share a similar idea of quantization decomposition. Critically, our ABS differs from Bayesian Bits in se...
[ -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "zmSsNI1hbK", "U_K4jyhHpK7", "-hrzH16qpf", "-hrzH16qpf", "iclr_2021_QjINdYOfq0b", "U_K4jyhHpK7", "iclr_2021_QjINdYOfq0b", "iclr_2021_QjINdYOfq0b", "iclr_2021_QjINdYOfq0b" ]
iclr_2021_sHSzfA4J7p
Transferable Recognition-Aware Image Processing
Recent progress in image recognition has stimulated the deployment of vision systems at an unprecedented scale. As a result, visual data are now often consumed not only by humans but also by machines. Existing image processing methods only optimize for better human perception, yet the resulting images may not be accura...
withdrawn-rejected-submissions
Three reviewers have reviewed this paper and they maintain their findings after the rebuttal. The reviewers are mainly concerned about the novelty (several highly-related papers exist) and well as the technical contribution (more theoretical developments are needed). Therefore, this paper in its current form cannot be ...
val
[ "Z66Qhybsfk", "WJg3LLG9QWh", "ia45P90nMM5", "RqAOapT30J5", "r6h3elII_t", "5Co74PxjWE", "kujfbUb-rHk", "khPyrXqyU0b", "4FP0CijP9IZ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a setting called \"recognition-aware image processing.\" The key idea is to make the images output by image processing methods still be readily recognized by image recognition methods. Realizing this will help to better meet the requirement from both human observers and machines. Formally, this...
[ 5, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2021_sHSzfA4J7p", "5Co74PxjWE", "iclr_2021_sHSzfA4J7p", "khPyrXqyU0b", "4FP0CijP9IZ", "Z66Qhybsfk", "RqAOapT30J5", "iclr_2021_sHSzfA4J7p", "iclr_2021_sHSzfA4J7p" ]
iclr_2021_cbtV7xGO9pS
TEAC: Intergrating Trust Region and Max Entropy Actor Critic for Continuous Control
Trust region methods and maximum entropy methods are two state-of-the-art branches used in reinforcement learning (RL) for the benefits of stability and exploration in continuous environments, respectively. This paper proposes to integrate both branches in a unified framework, thus benefiting from both sides. We first ...
withdrawn-rejected-submissions
The paper proposes a reinforcement learning algorithm that combines trust region policy optimization and entropy maximization. The starting point is the Lagrangian of a constrained optimization problem that upper bounds the change in the policy and lower bounds the entropy of the policy. The paper proves that the algor...
test
[ "HHNr4zcdQvc", "nVfPqhpmdx_", "JXf2_a0OUfB", "8-bPab5y27u", "SULS327Q2aK", "-oH8oOtANUX", "vdcLs8qra70", "DDmxdFb7opB", "KYV1i8hVfs3", "ZfSH3aYx7fn", "bFICniWkaiQ", "xN-q_LKU8mh", "zdjzn4DTdu" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes Trust Entropy Actor Critic (TEAC), a novel algorithm for reinforcement learning (RL) combining the idea of TRPO/PPO and max-entropy RL, together with the corresponding critic, actor and dual updates. The high level idea is that trust region methods ensure stability by constraining the KL diverg...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 1 ]
[ "iclr_2021_cbtV7xGO9pS", "iclr_2021_cbtV7xGO9pS", "-oH8oOtANUX", "SULS327Q2aK", "bFICniWkaiQ", "KYV1i8hVfs3", "zdjzn4DTdu", "xN-q_LKU8mh", "nVfPqhpmdx_", "bFICniWkaiQ", "HHNr4zcdQvc", "iclr_2021_cbtV7xGO9pS", "iclr_2021_cbtV7xGO9pS" ]
iclr_2021_xyGFYKIPTDJ
Learning Causal Semantic Representation for Out-of-Distribution Prediction
Conventional supervised learning methods, especially deep ones, are found to be sensitive to out-of-distribution (OOD) examples, largely because the learned representation mixes the semantic factor with the variation factor due to their domain-specific correlation, while only the semantic factor causes the output. To a...
withdrawn-rejected-submissions
The paper formalizes domain adaptation by taking the causal (generative) direction of dependencies p(image | class, domain). They evaluate an ELBO surrogate loss by fitting a reverse q function that is new for this setup, and add a term to the loss that induces independence between class and domain. The paper also pr...
train
[ "vkOkjAaNV_q", "O3Fg28CmLZw", "RbXHnouEkO", "pa_uEDRYste", "5kOirtrggZi", "DOZBr3QfkR", "O84MiVZIfbV", "n8An03Lahjr" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "**Summary**\nThe paper focuses on the causal perspective of domain-generalization and domain adaptation setup for images. I.e. classifying an image under some distribution shift at test time. Similar to previous work [1-4], it assumes that some latent semantic-object representation (s) and semantic-domain represen...
[ 6, -1, -1, -1, -1, 5, -1, 7 ]
[ 3, -1, -1, -1, -1, 3, -1, 3 ]
[ "iclr_2021_xyGFYKIPTDJ", "iclr_2021_xyGFYKIPTDJ", "n8An03Lahjr", "vkOkjAaNV_q", "vkOkjAaNV_q", "iclr_2021_xyGFYKIPTDJ", "DOZBr3QfkR", "iclr_2021_xyGFYKIPTDJ" ]
iclr_2021_DM6KlL7GeB
Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bitwise Regularization
Network quantization, which aims to reduce the bit-lengths of the network weights and activations, has emerged as one of the key ingredients to reduce the size of neural networks for their deployments to resource-limited devices. In order to overcome the nature of transforming continuous activations and weights to disc...
withdrawn-rejected-submissions
# Summary The paper was initially well received by reviewers, remarking the new gradient estimator, a new dropbits technique and an interesting observation of better performance when the bitwidth is learned. The experimental results also look promising: showing improved training performance and test performance (inclu...
train
[ "m8ZrJJS6Bm-", "sBIZYYLFXow", "BwaXFb5BEQ", "BEPONgN0k17", "wJxrUn-mz6v", "oKFA94TcLi3", "1DFqPsD2riD", "YV-Z0KUGT6b", "995LJg65nh-", "0KamHEzeQ4", "dBQ7quauTPB", "mrVoH3SQnEA", "5zHkIPd9oTT", "VAC6wtWF_Rn", "Giqhv7i8MV0", "evY69mJMXv6", "CyykC-6V3P_", "uIy2QxyAxif" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper deals with network quantization. It proposes Semi-Relaxed Quantization (SRQ) that uses a multi-class straight-through estimator to effectively reduce the bias and variance, along with a new regularization technique, DropBits that replaces dropout regularization to randomly drop the bits. Extensive expe...
[ 5, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2 ]
[ "iclr_2021_DM6KlL7GeB", "YV-Z0KUGT6b", "BEPONgN0k17", "wJxrUn-mz6v", "oKFA94TcLi3", "Giqhv7i8MV0", "iclr_2021_DM6KlL7GeB", "mrVoH3SQnEA", "iclr_2021_DM6KlL7GeB", "iclr_2021_DM6KlL7GeB", "m8ZrJJS6Bm-", "5zHkIPd9oTT", "1DFqPsD2riD", "uIy2QxyAxif", "evY69mJMXv6", "CyykC-6V3P_", "iclr_20...
iclr_2021_YD792AFzt4o
Revisiting Explicit Regularization in Neural Networks for Reliable Predictive Probability
From the statistical learning perspective, complexity control via explicit regularization is a necessity for improving the generalization of over-parameterized models, which deters the memorization of intricate patterns existing only in the training data. However, the impressive generalization performance of over-param...
withdrawn-rejected-submissions
Summary of reviews and discussions: Reviewers were overwhelmingly negative on this paper due to a variety of factors: unclear writing, heuristic motivation, overpromising in the title while underdelivering on results. Although the authors responded to the reviewers' feedback, and some reviewers increased their score to...
train
[ "-b7cJnuUZAX", "4koJmpQ62l", "MSeIIanbANB", "zr2Sm1ympgX", "t5_v8MlCW35", "zE5tUVsvzG", "d7H5SDs54om", "MkjbYx7_t4p", "wySQifgFACp", "2yfWfAHjwz6", "sxz4vgMOuim" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Update: After reading the other reviews/responses, I think there are persistent concerns with the breadth of experiments and the substantiveness of the contribution; although the manuscript is somewhat improved by the authors' updates, I'm keeping my score at 5.\n\nThis paper examines the effect of explicit regula...
[ 5, 4, -1, -1, -1, -1, -1, -1, 5, 5, 3 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2021_YD792AFzt4o", "iclr_2021_YD792AFzt4o", "wySQifgFACp", "4koJmpQ62l", "2yfWfAHjwz6", "sxz4vgMOuim", "-b7cJnuUZAX", "iclr_2021_YD792AFzt4o", "iclr_2021_YD792AFzt4o", "iclr_2021_YD792AFzt4o", "iclr_2021_YD792AFzt4o" ]
iclr_2021_9_J4DrgC_db
Deep Coherent Exploration For Continuous Control
In policy search methods for reinforcement learning (RL), exploration is often performed by injecting noise either in action space at each step independently or in parameter space over each full trajectory. In prior work, it has been shown that with linear policies, a more balanced trade-off between these two explorati...
withdrawn-rejected-submissions
Unfortunately some of the reviewers' reactions to the author feedback won't be visible to the authors. The reviewers highly appreciated the replies and revision of the paper. Pros: - The paper renders Generalized Exploration tractable for deep RL. - The idea is applicable to many DRL methods and is potentially very val...
test
[ "ygNJxL40MKO", "LHMXslwAuQf", "Cc_XUWx5kc6", "4d5lKJ1CMlX", "Lu4nLzPL7p9", "3aruvU45Jh8", "QEkFnN7VACb", "fD09cgHo5Zx", "nscZAmpeQGQ", "jeF5t42RPFd" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to note that we have now updated the manuscript. In particular, we describe a way to use coherent exploration with SAC that is more consistent with the approach for the on-policy methods, and yields slightly better results. The approach also allows us to update the precision $\\Lambda$ of the search ...
[ -1, -1, -1, -1, -1, -1, 4, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "3aruvU45Jh8", "QEkFnN7VACb", "fD09cgHo5Zx", "nscZAmpeQGQ", "jeF5t42RPFd", "iclr_2021_9_J4DrgC_db", "iclr_2021_9_J4DrgC_db", "iclr_2021_9_J4DrgC_db", "iclr_2021_9_J4DrgC_db", "iclr_2021_9_J4DrgC_db" ]
iclr_2021_V8YXffoDUSa
Iterative convergent computation is not a useful inductive bias for ResNets
Recent work has suggested that feedforward residual neural networks (ResNets) approximate iterative recurrent computations. Iterative computations are useful in many domains, so they might provide good solutions for neural networks to learn. Here we quantify the degree to which ResNets learn iterative solutions and int...
withdrawn-rejected-submissions
This work provides evidence against the hypothesis that ResNets implement iterative inference, or that iterative convergent computation is a good inductive bias to have in these models. The reviewers indicate that they think this hypothesis is interesting and relevant to the ICLR community, but they do not find the cur...
train
[ "JFD0ijnlZ03", "vGVFGJbwqw9", "pItLZM4dMtZ", "o90xOiujU78", "DwHC9Q_2dgu", "N0sFnTaGOac", "pqzPqbPBdjn", "HJLFt1Xn3Iv", "71zuAXFpnY3", "zOvjC5Ka7t", "BvQG62n-sFh", "dqBAFda_34p", "qhYGNIWFmb", "RZZc865Tn9v", "aHAvnTpjGpA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "## Paper Summary\n\nThis paper studies the correspondence between residual networks and iterative algorithms that repeat computations and converge to a solution. The authors suggest that residual networks can in principle implement such iterative algorithms and experimentally show that networks trained in practice...
[ 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2021_V8YXffoDUSa", "iclr_2021_V8YXffoDUSa", "o90xOiujU78", "RZZc865Tn9v", "N0sFnTaGOac", "pqzPqbPBdjn", "vGVFGJbwqw9", "71zuAXFpnY3", "zOvjC5Ka7t", "aHAvnTpjGpA", "dqBAFda_34p", "JFD0ijnlZ03", "iclr_2021_V8YXffoDUSa", "iclr_2021_V8YXffoDUSa", "iclr_2021_V8YXffoDUSa" ]
iclr_2021_McYsRk9-rso
Reducing Implicit Bias in Latent Domain Learning
A fundamental shortcoming of deep neural networks is their specialization to a single task and domain. While recent techniques in multi-domain learning enable the learning of more domain-agnostic features, their success relies firmly on the presence of domain labels, typically requiring manual annotation and careful cu...
withdrawn-rejected-submissions
While all reviewers agree that the topic is interesting and the work has merit, several issues have been pointed out, especially by R1 and R3, that indicate that the work is not ready for acceptance at this stage. The authors are strongly encouraged to continue to work on this topic, taking into account the feedback r...
train
[ "qJ1d_DXvR5A", "sjOvgkXDGI4", "La7gHAJsd_1", "mvfSOYshVdv", "Ws7Md-BeYRw", "WZFzaLot_ws", "olSXfr8QRq", "AXtR7i6mlV", "4oRzT6tXXzv", "3KM9LNsF5T2", "5j3JXy0LiYh", "Kl_QO5GxJqm", "LN5mj9TH940" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "------ Update after discussion with authors ---------\n\nI would like to thanks the author for their efforts by adding additional experiments, which surely enhances the significance of the proposed approach. Based on these, I increased my score to 5.\n\nI have re-checked the final revised version, I think the curr...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "iclr_2021_McYsRk9-rso", "La7gHAJsd_1", "AXtR7i6mlV", "iclr_2021_McYsRk9-rso", "WZFzaLot_ws", "3KM9LNsF5T2", "5j3JXy0LiYh", "Kl_QO5GxJqm", "LN5mj9TH940", "qJ1d_DXvR5A", "iclr_2021_McYsRk9-rso", "iclr_2021_McYsRk9-rso", "iclr_2021_McYsRk9-rso" ]
iclr_2021_UfJn-cstSF
Learned ISTA with Error-based Thresholding for Adaptive Sparse Coding
The learned iterative shrinkage thresholding algorithm (LISTA) introduces deep unfolding models with learnable thresholds in the shrinkage function for sparse coding. Drawing on some theoretical insights, we advocate an error-based thresholding (EBT) mechanism for LISTA, which leverages a function of the layer-wise rec...
withdrawn-rejected-submissions
The paper received mixed reviews, with one review voting for acceptance, one strongly opposed, and two borderline ones. The discussion essentially involved R1 and R2, who gave the most informative reviews. After discussion, they did not update their score, even though they appreciated the work and effort done by the au...
train
[ "LeFq8XDkxgW", "U8tTGrX0GUA", "XlJIqubq3Ba", "ulm9BAFVOL7", "lsMb7BH-r8g", "VORfIfTRfVD", "89nTa4NuIm", "JFUykbgLHYK", "RN8NdxtN1bu", "N29QmkKSEO", "tT8GftLKBU1", "dESHMouLwP2", "dOelrnEa4Kc" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Strengh\n\n- The idea makes sense to adapt the thresholding mechanism to an input distribution with various reconstruction error. It might bring a much better empirical performance compared to thresholds fixed globally and it seems to be adapted in a denoising setting.\n\n\n### Weakness\n\n- The motivation for...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 5, 3 ]
[ "iclr_2021_UfJn-cstSF", "VORfIfTRfVD", "lsMb7BH-r8g", "LeFq8XDkxgW", "89nTa4NuIm", "ulm9BAFVOL7", "dESHMouLwP2", "LeFq8XDkxgW", "dOelrnEa4Kc", "tT8GftLKBU1", "iclr_2021_UfJn-cstSF", "iclr_2021_UfJn-cstSF", "iclr_2021_UfJn-cstSF" ]
iclr_2021_WPO0vDYLXem
Hyperparameter Transfer Across Developer Adjustments
After developer adjustments to a machine learning (ML) algorithm, how can the results of an old hyperparameter optimization (HPO) automatically be used to speedup a new HPO? This question poses a challenging problem, as developer adjustments can change which hyperparameter settings perform well, or even the hyperparame...
withdrawn-rejected-submissions
The paper has been actively discussed, both during and after the rebuttal phase. I enjoyed, and I am thankful for, the active communication that took place between the authors and the reviewers. On the one hand, the reviewers agreed on several pros of the paper, e.g., * Clear, well presented manuscript * The presentat...
train
[ "GFNirA-GMNP", "_Aujm7Xunbp", "Jjb3tdGr2sV", "HEaCtKK0OZG", "kpq4q43rjnj", "2y2NSo1hl6", "6H4FaCc8ozb", "MR_WqhSu4k", "kKh9Lx0o0_q", "ViaG_ysLD-y", "a9QwURyFqST", "ws7UWASYsUu", "y2eiE_6aNNq", "d_KfjzqNwcN", "vcFDS9191YH", "GPNWkoPdV1", "oCmcJqC3rK", "y2uiQnLUfvn", "_H1rl53ao9f",...
[ "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ "Thank you for reading our rebuttal very carefully and for responding quickly with another round of helpful comments.\n\n\n9. “It is nice that you replaced TPEs with the more widely adopted (or state of the art?) GPs, but merely replacing one baseline by the another does not qualify as \"increasing the number of ba...
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "2y2NSo1hl6", "kpq4q43rjnj", "kKh9Lx0o0_q", "ws7UWASYsUu", "iclr_2021_WPO0vDYLXem", "X79cX66p_X", "ws7UWASYsUu", "iclr_2021_WPO0vDYLXem", "MR_WqhSu4k", "d_KfjzqNwcN", "y2eiE_6aNNq", "OKYKA1MoYap", "1R0_Z-XNzSF", "y2uiQnLUfvn", "o8JKdk9Y_U", "MR_WqhSu4k", "DqhIHj9Qt9", "o8JKdk9Y_U",...
iclr_2021_jWXBUsWP7N
A Distributional Perspective on Actor-Critic Framework
Recent distributional reinforcement learning methods, despite their successes, still contain fundamental problems that can lead to inaccurate representations of value distributions, such as distributional instability, action type restriction, and conflation between samples and statistics. In this paper, we present a no...
withdrawn-rejected-submissions
The paper proposes a distributional perspective on the value function and uses it to modify PPO for both discrete and continuous control reinforcement learning tasks. The referees had noticed a number of wrong/misleading statements in the initial version of the submission, and the AC had also pointed out several proble...
train
[ "jX5Gw5D5ycB", "RCVGbazoq7e", "a3Clmt4sfxW", "Yz5fYqH52-X", "EET8zQoU7va", "Yyu-o9k02Tc", "m-IHCqRNzh", "H8PIpMmPsXm", "qPnFstnYTUM", "8SwflzlpJGy", "6XRbCHarjJm", "TEZjuxiWEH6", "VLLa0m6AJJu", "OYnDkNb2zMl", "Jg3Yvvpbkdr", "bF0IbGSvCoD", "TcFcFf1ay__", "kepmqyALuT0", "ucOTcD4zjW...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ "This paper proposes a distributional Actor critic framework (GMAC) based on GMM, Actor critic and Cramer distance.\nAuthors introduce SR(λ) a distributional version of the λ-return algorithm and to minimize the Cramer distance - as opposed to minimizing the Wasserstein distance using Huber quantile regression- bet...
[ 6, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 2, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_jWXBUsWP7N", "iclr_2021_jWXBUsWP7N", "iclr_2021_jWXBUsWP7N", "m-IHCqRNzh", "OYnDkNb2zMl", "H8PIpMmPsXm", "qPnFstnYTUM", "mOg3H5a24rB", "8SwflzlpJGy", "bF0IbGSvCoD", "TEZjuxiWEH6", "VLLa0m6AJJu", "iclr_2021_jWXBUsWP7N", "ucOTcD4zjWJ", "iclr_2021_jWXBUsWP7N", "TcFcFf1ay__", ...
iclr_2021_QB7FkNVAfxa
On the Explicit Role of Initialization on the Convergence and Generalization Properties of Overparametrized Linear Networks
Neural networks trained via gradient descent with random initialization and without any regularization enjoy good generalization performance in practice despite being highly overparametrized. A promising direction to explain this phenomenon is the \emph{Neural Tangent Kernel} (NTK), which characterizes the implicit reg...
withdrawn-rejected-submissions
The authors provide a new analysis of learning of two-layer linear networks with gradient flow, leading to some novel optimization and generalization guarantees incorporating a notion of the imbalance in the weights. While there was some diversity of opinion, the prevailing view was that the results were not sufficien...
test
[ "1WATnh7UIOg", "d-J1FoJHzVf", "oNV9B6-aseS", "LyVbn7-fAd", "2n9gZpZeRwt", "gWg7rTnG3KF", "aKEiI83kB_j", "G4Sfk3Vvbgo", "GtYYaSorf9" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the optimization and generalization properties of a two-layer linear network. The considered setting is over-parameterized linear regression where the input dimension is D, number of samples is n<D, and the target dimension is m. The hidden width is h. The paper has two main results. The first r...
[ 3, 6, -1, -1, -1, -1, -1, 5, 9 ]
[ 5, 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_QB7FkNVAfxa", "iclr_2021_QB7FkNVAfxa", "d-J1FoJHzVf", "GtYYaSorf9", "gWg7rTnG3KF", "1WATnh7UIOg", "G4Sfk3Vvbgo", "iclr_2021_QB7FkNVAfxa", "iclr_2021_QB7FkNVAfxa" ]
iclr_2021_vlcVTDaufN
Differentiable Combinatorial Losses through Generalized Gradients of Linear Programs
Combinatorial problems with linear objective function play a central role in many computer science applications, and efficient algorithms for solving them are well known. However, the solutions to these problems are not differentiable with respect to the parameters specifying the problem instance – for example, shortes...
withdrawn-rejected-submissions
This paper received high variance in the reviews. I personally agree with AnonReviewer4 that the theoretical results presented in this paper are well-known results on the sensitivity analysis of linear programs. See for instance "Introduction to linear optimization" by Bertsimas and Tsitsiklis, Chapter 5. More genera...
test
[ "hmI4mUbQyEt", "7jSyT4CsZ1V", "pk32YXrcDhD", "yVhPBDFXBxj", "7qSezy0tElK", "MOJj-92FseJ", "sOfvPLz93I", "a3fg_-gk_oR", "O1D80eB4gwi", "Y_4QitrxrTY" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for your comments and suggestions for improvements. Below, we address each of them:\n\n>> ... practitioners that do not have a background in combinatorial optimization may want to use it. The paper does not provide enough details to do so. I'd replace Algorithm 1 with a box specific to bipartite matching...
[ -1, -1, -1, -1, -1, 3, 7, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, 5, 4, 3, 3, 4 ]
[ "Y_4QitrxrTY", "MOJj-92FseJ", "sOfvPLz93I", "a3fg_-gk_oR", "O1D80eB4gwi", "iclr_2021_vlcVTDaufN", "iclr_2021_vlcVTDaufN", "iclr_2021_vlcVTDaufN", "iclr_2021_vlcVTDaufN", "iclr_2021_vlcVTDaufN" ]
iclr_2021_in2qzBZ-Vwr
Cooperating RPN's Improve Few-Shot Object Detection
Learning to detect an object in an image from very few training examples - few-shot object detection - is challenging, because the classifier that sees proposal boxes has very little training data. A particularly challenging training regime occurs when there are one or two training examples. In this case, if the regio...
withdrawn-rejected-submissions
The reviewers have not supported the acceptance of this paper where the key weakness is that the study of the proposal neglect effect is not sufficient (see the reviews for the details). I agree with the assessment of the reviewers and recommend rejecting the paper in its current form.
test
[ "wjaaFt0igpF", "wQEzPV6jaet", "8OtwBPUA1l", "xcjt9LaQdAo", "2TvYBYxi83-", "O_ckt6KUdEI", "FDuFze1NXtk", "2uAInEkPVhw", "sK914J9isLx", "Lw1KK5ymUXz", "vhF4GoWLrs", "9gmCP_Hvivd", "P0PCz1qiJHm" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "Post-review comments:\n\nAfter reading the reply I decided I will keep my low score though it hurts to do so for a paper into which the authors definitely invested a lot of energy. Here is the reason why:\n\nI think there are usually two ways in which a paper can make an important contribution: Through a new insig...
[ 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
[ "iclr_2021_in2qzBZ-Vwr", "iclr_2021_in2qzBZ-Vwr", "wQEzPV6jaet", "P0PCz1qiJHm", "9gmCP_Hvivd", "sK914J9isLx", "vhF4GoWLrs", "wjaaFt0igpF", "2uAInEkPVhw", "iclr_2021_in2qzBZ-Vwr", "iclr_2021_in2qzBZ-Vwr", "iclr_2021_in2qzBZ-Vwr", "iclr_2021_in2qzBZ-Vwr" ]
iclr_2021_PuG6vCSbrV9
Density estimation on low-dimensional manifolds: an inflation-deflation approach
Normalizing Flows (NFs) are universal density estimators based on Neural Networks. However, this universality is limited: the density's support needs to be diffeomorphic to a Euclidean space. In this paper, we propose a novel method to overcome this limitation without sacrificing the universality. The proposed method...
withdrawn-rejected-submissions
The paper provides an interesting set of theoretical ideas to improve the estimation of normalizing flows on datasets that fail to be fully dimensional. Although the method is appealing, I believe the paper falls a bit short of acceptance at the conference. Too many practical issues are left out, as discussed by review...
test
[ "KAVQY9omkaT", "K004vVZC-v5", "EoQkD50qeul", "vsjGeLWcBE", "Rooj6vKrGUx", "PHwy6LtjTrh", "1ZPdkqRIYqP", "l4KXKqe9FV8" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewer for reading our manuscript carefully and attentively. The reviewer's comments helped us a lot to better pigeonhole our method into the literature.\n\nIndeed, the scalability of our method is the same as for NFs. We agree that the description of scalability as it was stated at the end of Secti...
[ -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, 2, 3, 3, 4 ]
[ "PHwy6LtjTrh", "1ZPdkqRIYqP", "l4KXKqe9FV8", "Rooj6vKrGUx", "iclr_2021_PuG6vCSbrV9", "iclr_2021_PuG6vCSbrV9", "iclr_2021_PuG6vCSbrV9", "iclr_2021_PuG6vCSbrV9" ]
iclr_2021_7Yhok3vJpU
High-Likelihood Area Matters --- Rewarding Correct, Rare Predictions Under Imbalanced Distributions
Learning from natural datasets poses significant challenges for traditional classification methods based on the cross-entropy objective due to imbalanced class distributions. It is intuitive to assume that the examples from rare classes are harder to learn so that the classifier is uncertain of the prediction, which es...
withdrawn-rejected-submissions
This submission got 1 reject and 3 marginally below the threshold. The concerns in the original reviews include (1) lack of theoretical justification. The motivation and claim are from empirical observation; (2) the performance improvement is minor compared with the existing methods; (3) some experiment settings and de...
train
[ "QRQRZxS-51N", "opqKNRDTtVc", "HYso_InX05", "Y4JMBmx-y_s", "w5Y6Y9KDXDk", "ng8ix4KoOT-", "0YjSrnCxtZ2", "mInilwyz0eT", "3WcGM2zccmw", "BEAS7F3O-4" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The submission makes an intriguing claim that retaining focus on correctly predicted rare classes can improve performance for training with class-imbalanced datasets.\n\nTo illustrate this claim, the paper shows that one can find improvements at overall accuracy if a combination of the Focal Loss (which weights do...
[ 5, 5, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ 3, 5, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_7Yhok3vJpU", "iclr_2021_7Yhok3vJpU", "opqKNRDTtVc", "QRQRZxS-51N", "QRQRZxS-51N", "3WcGM2zccmw", "iclr_2021_7Yhok3vJpU", "BEAS7F3O-4", "iclr_2021_7Yhok3vJpU", "iclr_2021_7Yhok3vJpU" ]
iclr_2021_0WWj8muw_rj
Adaptive Gradient Methods Can Be Provably Faster than SGD with Random Shuffling
Adaptive gradient methods have been shown to outperform SGD in many tasks of training neural networks. However, the acceleration effect is yet to be explained in the non-convex setting since the best convergence rate of adaptive gradient methods is worse than that of SGD in literature. In this paper, we prove that adap...
withdrawn-rejected-submissions
Dear authors, Improving the theoretical understanding of powerful algorithms is an important contribution to our field. Nevertheless, most of the reviewers are inclined to reject the paper. I somehow have to agree with them as e.g., adding more restrictive assumptions can allow deriving better bounds, but the question...
train
[ "95XxJ08yglD", "joNRu0M2-Ty", "AGCUpo_bw_w", "KlUBn0ZbINY", "lcO5Vtvdi_i", "WFaelv3Z_Ie", "fogGaQY1LRs", "fVIG-9B_3i", "sP-18U8WXcL", "o_6GmL2DkbE", "b0vCyC8Pb3I", "uUk4t4Qah8Y", "BnGejgEdYD2", "eBQQBZ5q7pZ", "nT5eXGi5E4n", "eq0oDax7Kbu", "pPiF0DLYxmX", "UmTpW-gc7Md", "AWrt-8TCi1...
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ "I will initially provide a summary of the paper and list overall strengths and weakness of the paper. Then, I present my additional comments which are related to specific expressions in the main text, proof steps in the appendix etc. I would appreciate it very much if authors could address my questions/concerns un...
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2021_0WWj8muw_rj", "AGCUpo_bw_w", "KlUBn0ZbINY", "WFaelv3Z_Ie", "fVIG-9B_3i", "fogGaQY1LRs", "fVIG-9B_3i", "sP-18U8WXcL", "nT5eXGi5E4n", "b0vCyC8Pb3I", "eq0oDax7Kbu", "pPiF0DLYxmX", "UmTpW-gc7Md", "iclr_2021_0WWj8muw_rj", "95XxJ08yglD", "AWrt-8TCi1I", "iclr_2021_0WWj8muw_rj", ...
iclr_2021_7Z29QbHxIL
FTSO: Effective NAS via First Topology Second Operator
Existing one-shot neural architecture search (NAS) methods generally contain a giant supernet, which leads to heavy computational cost. Our method, named FTSO, separates the whole architecture search into two sub-steps. In the first step, we only search for the topology, and in the second step, we only search for the o...
withdrawn-rejected-submissions
Three reviewers have reviewed this manuscript, and they had severe reservations regarding the presentation quality and the lack of sufficient theoretical support behind empirical observations. Even after rebuttal, the reviewers maintained that the above issues are not fully resolved. Unfortunately, this paper cannot be...
train
[ "kBrCZZJrlbT", "8LQWLpm7HBV", "V6cp8a6Mw8x", "U45Gc-26b1", "gHQh332A_E", "DFUIymuQCm", "j15zneoTybm", "iYQgLp4WIrr", "oyBFIvnpNM" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This work researches the issue of neural architecture search (NAS), which is of significance for practical applications of deep neural networks and has become an active research topic in the past several years. Many methods on NAS have been developed recently. The computational efficiency of search has been one of...
[ 5, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "iclr_2021_7Z29QbHxIL", "kBrCZZJrlbT", "iYQgLp4WIrr", "kBrCZZJrlbT", "oyBFIvnpNM", "iYQgLp4WIrr", "iYQgLp4WIrr", "iclr_2021_7Z29QbHxIL", "iclr_2021_7Z29QbHxIL" ]
iclr_2021_9GUTgHZgKCH
Reducing the number of neurons of Deep ReLU Networks based on the current theory of Regularization
We introduce a new Reduction Algorithm which makes use of the properties of ReLU neurons to reduce significantly the number of neurons in a trained Deep Neural Network. This algorithm is based on the recent theory of implicit and explicit regularization in Deep ReLU Networks from (Maennel et al., 2018) and the authors. ...
withdrawn-rejected-submissions
This is a clear reject. None of the reviewers supports publication of this work. The concerns of the reviewers are largely valid.
test
[ "o21IhEgqHbT", "-n3wWLD40m8", "6hRBXxwJpH", "FPiAsxyr-ld", "9hVzFgU-ea", "cYukXIXuRm", "PiskuPL0h13", "Pupl2qoZ_Pq", "oteCBNhqIh", "AwyoiDE5W8", "3bjb6sh6Isp", "jcmv8fjSYTQ", "Sv_CdJ-pRHz", "GvVHhZXZkx2", "nFLnVR93Bld" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I thank the authors for the clarification of the notation and correction of typos. I do now see how the equations work (mathematically), i.e., I retract my concerns on the function g and clusters containing a single neuron. The motivation for this pruning step still requires detailed explanations.\nI am not able t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 3, 2, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5, 4 ]
[ "Pupl2qoZ_Pq", "3bjb6sh6Isp", "GvVHhZXZkx2", "9hVzFgU-ea", "nFLnVR93Bld", "nFLnVR93Bld", "Pupl2qoZ_Pq", "jcmv8fjSYTQ", "Sv_CdJ-pRHz", "Sv_CdJ-pRHz", "iclr_2021_9GUTgHZgKCH", "iclr_2021_9GUTgHZgKCH", "iclr_2021_9GUTgHZgKCH", "iclr_2021_9GUTgHZgKCH", "iclr_2021_9GUTgHZgKCH" ]
iclr_2021_9WlOIHve8dU
Learning Binary Trees via Sparse Relaxation
One of the most classical problems in machine learning is how to learn binary trees that split data into meaningful partitions. From classification/regression via decision trees to hierarchical clustering, binary trees are useful because they (a) are often easy to visualize; (b) make computationally-efficient predictio...
withdrawn-rejected-submissions
The main problem as flagged by reviewers is the lack of formal evidence that the approach is the right one to carry out. Decision tree induction was an early subject of formal study in ML, whether in statistics (Friedman et al.) or ML (Kearns et al.). It is a bit sad that a new approach that relies on a much diff...
train
[ "eEz4z4bpf-9", "G2ocuQIZxr", "SyewztWrr4C", "kW10XGAxSHQ", "Vb6pN6NHBxL", "2eoxBG94jbt", "Q90UWro3RZ", "erqHQuLwMYN", "YsYqIvhv_L", "aTNHdmykDIT", "sa6giDZLUDO", "YiG1x_10a8q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We submitted a revised version of the paper with the following updates:\n1. We revised the related work as suggested by reviewers;\n2. We clarified our formulation and its derivation;\n3. We added a pseudocode for the overall optimization procedure in Algorithm 2, following reviewer 2’s suggestion;\n4. We reported...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "iclr_2021_9WlOIHve8dU", "iclr_2021_9WlOIHve8dU", "Vb6pN6NHBxL", "iclr_2021_9WlOIHve8dU", "YsYqIvhv_L", "aTNHdmykDIT", "sa6giDZLUDO", "YiG1x_10a8q", "iclr_2021_9WlOIHve8dU", "iclr_2021_9WlOIHve8dU", "iclr_2021_9WlOIHve8dU", "iclr_2021_9WlOIHve8dU" ]
iclr_2021_sgnp-qFYtN
Sparsifying Networks via Subdifferential Inclusion
Sparsifying deep neural networks is of paramount interest in many areas, especially when those networks have to be implemented on low-memory devices. In this article, we propose a new formulation of the problem of generating sparse weights for a neural network. By leveraging the properties of standard nonlinear activat...
withdrawn-rejected-submissions
I have serious concerns about how experiments are reported in this paper. Most methods are compared at an iteration budget of roughly 100 epochs because it is known that more computation improves performance very significantly, but computational resources are limited for many researchers, especially in academia. ...
train
[ "f1TE2IKKj3j", "x2-eo0T2S2v", "BXlOub-nd0F", "yE68Pl20qX", "Uogc9ddyUJ", "P7CJus0HcCs", "aXt4gvA3puI", "VxTDcltfegt", "rBN5QqNMOS", "c5orApcaLtc" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe authors propose a new algorithm for inducing sparsity in the weights of neural networks after training. The proposed algorithm exploits the properties of commonly used activation functions to cast the sparsification problem as the minimization of a sparsity measure subject to approximation accuracy...
[ 7, -1, -1, -1, -1, -1, -1, 9, 5, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "iclr_2021_sgnp-qFYtN", "BXlOub-nd0F", "yE68Pl20qX", "f1TE2IKKj3j", "VxTDcltfegt", "rBN5QqNMOS", "c5orApcaLtc", "iclr_2021_sgnp-qFYtN", "iclr_2021_sgnp-qFYtN", "iclr_2021_sgnp-qFYtN" ]
iclr_2021_yvuk0RsLoP7
Improving Model Robustness with Latent Distribution Locally and Globally
We propose a novel adversarial training method which leverages both the local and global information to defend adversarial attacks. Existing adversarial training methods usually generate adversarial perturbations locally in a supervised manner and fail to consider the data manifold information in a global way. Conseq...
withdrawn-rejected-submissions
This paper presents a framework for adversarial robustness by incorporating local and global structures of the data manifold. In particular, the authors use a discriminator-classifier model, where the discriminator tries to differentiate between the original and adversarial spaces and the classifier aims to classify be...
train
[ "hwK5B9Ky7GH", "Kq7ImLsK2OO", "PEioL3rjHTL", "augRWyagFY", "UGYxXi0Lhx7", "NeaS-KvOm0t", "jbcDz14wAMt", "lQavxP8XHuH", "AIQRrsf5c0e", "vKNSD7gK_U", "cIp99ASc2ES", "EIlDln-C5Ne", "pEPBJlVSWN5" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. Thanks for your further comments. \n\n It is observed from Table 1 and Table 3 that this phenomenon can also be observed from the results of FS, i.e. the white-box attacks appear weaker than the black-box attacks. Note that the results of FS were obtained directly by running the source codes implemented by the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4, 3 ]
[ "Kq7ImLsK2OO", "augRWyagFY", "vKNSD7gK_U", "pEPBJlVSWN5", "vKNSD7gK_U", "vKNSD7gK_U", "EIlDln-C5Ne", "cIp99ASc2ES", "iclr_2021_yvuk0RsLoP7", "iclr_2021_yvuk0RsLoP7", "iclr_2021_yvuk0RsLoP7", "iclr_2021_yvuk0RsLoP7", "iclr_2021_yvuk0RsLoP7" ]
iclr_2021_D9pSaTGUemb
Implicit Acceleration of Gradient Flow in Overparameterized Linear Models
We study the implicit acceleration of gradient flow in over-parameterized two-layer linear models. We show that implicit acceleration emerges from a conservation law that constrains the dynamics to follow certain trajectories. More precisely, gradient flow preserves the difference of the Gramian matrices of the input a...
withdrawn-rejected-submissions
This paper studies the implicit acceleration of gradient flow in over-parameterized two-layer linear models. The authors show that the amount of acceleration depends on the spectrum of the data without assuming small, balanced, or spectral initialization for the weights, and establish interesting connections between ma...
train
[ "3XfTUXgqATh", "zsCNV3qUz1u", "7ChSa8QLkAx", "Rd68ELY8T6", "e4UBdIHTy1v", "EzZ0Ro8mEN7", "9bhtVOMNk4Q", "b7S7YqsqdUO" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the implicit acceleration of gradient flow for training a two-layer linear model. Compared with the one-layer linear model, the authors show that gradient flow over an overparameterized two-layer linear model may achieve a faster convergence rate, given a nice data spectrum and proper initializa...
[ 5, -1, -1, -1, -1, 6, 7, 6 ]
[ 4, -1, -1, -1, -1, 3, 5, 4 ]
[ "iclr_2021_D9pSaTGUemb", "3XfTUXgqATh", "EzZ0Ro8mEN7", "9bhtVOMNk4Q", "b7S7YqsqdUO", "iclr_2021_D9pSaTGUemb", "iclr_2021_D9pSaTGUemb", "iclr_2021_D9pSaTGUemb" ]
iclr_2021_7IDIy7Jb00l
Offline Meta Learning of Exploration
Consider the following problem: given the complete training histories of N conventional RL agents, trained on N different tasks, design a meta-agent that can quickly maximize reward in a new, unseen task from the same task distribution. In particular, while each conventional RL agent explored and exploited its own diff...
withdrawn-rejected-submissions
The paper studies offline meta reinforcement learning. Overall the scope of this contribution seems limited. Reviewers have raised concerns about the significance of the presented results given the assumptions, and that the experimental environments are not extensive and do not fully support the claimed advances.
train
[ "Zg7Cip3ImEm", "ClbklHjRi0L", "1BXxkkXGAP1", "ukTwTdT-W9E", "btBt627CMMq", "OdDnWFf0VUw", "EW11Lmf3Wg3", "Kfl8muFfcQe", "n0-U71BaHaV", "eNTWnmqzzyF", "SfGg5K7OxO5", "7dxH7BC0Lvh", "jz-Ikw-TJxT", "Btrex-_ReSf" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Summary:\nThis submission studies the meta-learning problem in RL under offline settings. A new algorithm is proposed to address this problem by extending the recent VariBAD algorithm designed for online meta-RL. The key modifications to adapt the original VariBAD to offline settings are the state re-labelling and...
[ 6, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_7IDIy7Jb00l", "iclr_2021_7IDIy7Jb00l", "iclr_2021_7IDIy7Jb00l", "btBt627CMMq", "OdDnWFf0VUw", "EW11Lmf3Wg3", "eNTWnmqzzyF", "ClbklHjRi0L", "Zg7Cip3ImEm", "jz-Ikw-TJxT", "Btrex-_ReSf", "iclr_2021_7IDIy7Jb00l", "iclr_2021_7IDIy7Jb00l", "iclr_2021_7IDIy7Jb00l" ]
iclr_2021_1hkYtDXAgOZ
Feature Integration and Group Transformers for Action Proposal Generation
The task of temporal action proposal generation (TAPG) aims to provide high-quality video segments, i.e., proposals that potentially contain action events. The performance of tackling the TAPG task heavily depends on two key issues, feature representation and scoring mechanism. To simultaneously take account of both as...
withdrawn-rejected-submissions
The paper focuses on the task of finding higher fidelity action proposals for temporal action proposal detection. As the reviewers mentioned, this task is a pre-task to temporal activity localization/detection in video, which is the main task to be solved. The paper may be perceived differently if it were presented as ...
train
[ "DDt6tNPAed4", "oGDECSBRvyv", "PzAPzZloP4p", "ywXfrHXoU7", "uMJyCYOxGx3", "tz_WmGbbJBb", "w-5PFwDrBFf", "cj3Fb5tkvmC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper tackles the problem of temporal action proposal generation (TAPG). The authors address the problem from two perspectives: features wise and score fusion wise. They use non-local blocks to integrate appearance features and motion features together. For score fusion, they propose transformer based module ...
[ 5, 5, 5, -1, -1, -1, -1, 6 ]
[ 3, 3, 5, -1, -1, -1, -1, 4 ]
[ "iclr_2021_1hkYtDXAgOZ", "iclr_2021_1hkYtDXAgOZ", "iclr_2021_1hkYtDXAgOZ", "DDt6tNPAed4", "oGDECSBRvyv", "PzAPzZloP4p", "cj3Fb5tkvmC", "iclr_2021_1hkYtDXAgOZ" ]
iclr_2021_Oecm1tBcguW
Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory
Bayesian deep learning is a promising approach towards improved uncertainty quantification and sample efficiency. Due to their complex parameter space, choosing informative priors for Bayesian Neural Networks (BNNs) is challenging. Thus, often a naive, zero-centered Gaussian is used, resulting both in bad general...
withdrawn-rejected-submissions
The paper addresses the problem of prior selection in Bayesian neural networks by proposing a meta-learning framework based on PAC-Bayesian theory. The authors optimize a PAC bound called PACOH in the space of possible posterior distributions of BNN weights. The method does not rely on nested optimization schemes, inst...
train
[ "Ie1WzXJCzB", "UxRQDNN2edq", "kAyPKctFfo", "4BNKK-Llspc", "JHOCuQKc8uA", "_YbIz8hGsoq", "o87y8Guq5a1", "CfyXrvzLIW6", "8iIskaRhQhy", "JPtRJzQlJ27", "BmQha9LMPcS", "mbx-2Ozp4x", "kwcSlote-9", "I8-dlxiWKO6" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In response to the suggestion of AnonReviewer2, we conducted experiments with the GP-based method of Rothfuss et al. (2020) on the five regression environments. The respective results have been added to Table 1 & 2 and the experiment description / discussion has been modified accordingly.", "> I believe it is fi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "8iIskaRhQhy", "kAyPKctFfo", "JHOCuQKc8uA", "8iIskaRhQhy", "mbx-2Ozp4x", "BmQha9LMPcS", "kwcSlote-9", "I8-dlxiWKO6", "iclr_2021_Oecm1tBcguW", "kwcSlote-9", "iclr_2021_Oecm1tBcguW", "iclr_2021_Oecm1tBcguW", "iclr_2021_Oecm1tBcguW", "iclr_2021_Oecm1tBcguW" ]
iclr_2021_uFA24r7v4wL
BDS-GCN: Efficient Full-Graph Training of Graph Convolutional Nets with Partition-Parallelism and Boundary Sampling
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art model for graph-based learning tasks. However, it is still challenging to train GCNs at scale, limiting their applications to real-world large graphs and hindering the exploration of deeper and more sophisticated GCN architectures. While it can be...
withdrawn-rejected-submissions
The paper is concerned with improving the scalability of GCNs which is an important problem and relevant to the ICLR community. For this purpose, the authors propose a new distributed training method for GCNs which uses a boundary sampling strategy to reduce the number of boundary nodes. The paper is written well and, ...
train
[ "hX-RQ3clZck", "VBQivZj0zHj", "LHkG3VytslQ", "JJsTEkyG1M6", "GJQUJaGT9c6", "3GO0yJRgDKx", "Y8tvm06WUmB", "LcRu734gLE", "BNqeJKHx0Jk", "cmKI2iJ-Itv" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "#### **3. Comparison with p=0**\nWe evaluate BDS-GCN with p=0 corresponding to settings of Table 2 under different numbers of partitions for a thorough analysis. The new Table 2 can be found in our updated manuscript. As a quick summary, we provide the test accuracy comparison below. \n\n| Sampling rate \t| Reddit...
[ -1, -1, -1, -1, -1, -1, 4, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "GJQUJaGT9c6", "cmKI2iJ-Itv", "Y8tvm06WUmB", "BNqeJKHx0Jk", "iclr_2021_uFA24r7v4wL", "LcRu734gLE", "iclr_2021_uFA24r7v4wL", "iclr_2021_uFA24r7v4wL", "iclr_2021_uFA24r7v4wL", "iclr_2021_uFA24r7v4wL" ]
iclr_2021_0NQdxInFWT_
Active Deep Probabilistic Subsampling
Subsampling a signal of interest can reduce costly data transfer, battery drain, radiation exposure and acquisition time in a wide range of problems. The recently proposed Deep Probabilistic Subsampling (DPS) method effectively integrates subsampling in an end-to-end deep learning model, but learns a static pattern for...
withdrawn-rejected-submissions
The review phase was very constructive, where reviewers raised several opportunities for improvements. The authors did a very good job in their rebuttal, which led some reviewers to change their opinion in a positive direction. Overall, reviewers agree that this is a borderline paper with remaining concerns about the...
test
[ "gYuamGOTJMa", "Im23lluzCeN", "KI_Z-Ca98_b", "IeYgK74BQ87", "ZiStn5QgWsR", "lFo0kbISg_e", "mKlj7AvGiYV", "XEoejfOTsWD", "9X5Rx_U6rJa", "AKFqqdsHpeD", "AXUszoKRCFL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "SUMMARY:\nThe paper at hand deals with compressed sensing (CS) and introduces an extension to deep probabilistic subsampling (DPS) called active deep probabilistic subsampling (A-DPS): instead of learning a sampling pattern that is equal for each element of the dataset, A-DPS adaptively selects entries (of each el...
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_0NQdxInFWT_", "iclr_2021_0NQdxInFWT_", "iclr_2021_0NQdxInFWT_", "Im23lluzCeN", "KI_Z-Ca98_b", "gYuamGOTJMa", "iclr_2021_0NQdxInFWT_", "iclr_2021_0NQdxInFWT_", "Im23lluzCeN", "KI_Z-Ca98_b", "gYuamGOTJMa" ]
iclr_2021_GIeGTl8EYx
Deep Graph Neural Networks with Shallow Subgraph Samplers
While Graph Neural Networks (GNNs) are powerful models for learning representations on graphs, most state-of-the-art models do not have significant accuracy gain beyond two to three layers. Deep GNNs fundamentally need to address: 1). expressivity challenge due to oversmoothing, and 2). computation challenge due to nei...
withdrawn-rejected-submissions
In this paper, the authors propose a simple yet interesting new graph sampling method for graph neural networks. It addresses the two main problems that have previously prevented GNNs from being extended to deep architectures: expressivity and computational cost. Through experiments, the authors show the effectiveness of the proposed alg...
train
[ "NRo6CiTm1zt", "9V1UmhbbO_4", "SrcHEVdiZhy", "fmf-yfAWch1", "h142oZ8jmmI", "7k8y1DjCRzr", "PGDmp6dLPy1", "sw3IYhzSnqS", "GtMiZDdkZf", "pv7_D0oopQw", "OuL7vHP94_", "Fu1oAlpDEHM", "S9_iSvR9PzC", "W4DMsNYXAqi", "exXM2FCaCMa", "rFBTVwIa3ua", "thd5yZa83t6", "0HFiG4lNsPg" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\nTo address the oversmoothing problem and reduce the computational cost of GNNs, this paper proposes to train deep GNNs with shallow subgraph samplers. The following two theoretical proofs provide insightful motivations of Shadow-GNN: (1 )Obtaining node embeddings within shallow subgraphs can avoid oversmoothing;...
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "iclr_2021_GIeGTl8EYx", "iclr_2021_GIeGTl8EYx", "S9_iSvR9PzC", "W4DMsNYXAqi", "exXM2FCaCMa", "Fu1oAlpDEHM", "NRo6CiTm1zt", "PGDmp6dLPy1", "rFBTVwIa3ua", "thd5yZa83t6", "0HFiG4lNsPg", "OuL7vHP94_", "pv7_D0oopQw", "sw3IYhzSnqS", "GtMiZDdkZf", "iclr_2021_GIeGTl8EYx", "iclr_2021_GIeGTl8E...
iclr_2021_u8APpiJX3u
ItNet: iterative neural networks for fast and efficient anytime prediction
Deep neural networks usually have to be compressed and accelerated for their use in low-power, e.g. mobile, devices. Common requirements are high accuracy, high throughput, low latency, and a small memory footprint. A good trade-off between accuracy and latency has been shown by networks comprising multiple intermedi...
withdrawn-rejected-submissions
All reviewers agree that the paper is well written and some of the experiments are interesting. However, the paper did not clearly highlight how this work fits in with prior research, neither did it show what the advantages of the presented homogeneous network are. The authors addressed some of these concerns in the re...
test
[ "G9PixfODBM", "mjsktBdmcfx", "2oHxRdFogC-", "Z9JkRRTt9e5", "xlgp-OYL40N", "9PzDS-6ul-e", "G0Slri0jYDZ", "fh3jAD0VzZ4", "hlJWiMl2W9c", "AkEJA-oRuie", "vQVsRe3AXqx", "RWuCgXzCnX7", "7SSzHDP8S2O", "mFw8UNvlYBW", "XguDW6xFPmi", "BG6ospN2E1o" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We now also compare our numbers to Recurrent U-Nets and ENets. In both cases, ItNets outperform these networks in mIoU over the size of the computational graph and in mIoU over MACs.", "I added the reference numbers for recurrent U-Nets (Wang et al) that are significantly outperformed by ItNets.", "> While I a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 4 ]
[ "AkEJA-oRuie", "2oHxRdFogC-", "9PzDS-6ul-e", "vQVsRe3AXqx", "AkEJA-oRuie", "BG6ospN2E1o", "mFw8UNvlYBW", "XguDW6xFPmi", "7SSzHDP8S2O", "iclr_2021_u8APpiJX3u", "RWuCgXzCnX7", "XguDW6xFPmi", "iclr_2021_u8APpiJX3u", "iclr_2021_u8APpiJX3u", "iclr_2021_u8APpiJX3u", "iclr_2021_u8APpiJX3u" ]
iclr_2021_hypDstHla7
Neuron Activation Analysis for Multi-Joint Robot Reinforcement Learning
Recent experiments indicate that pre-training of end-to-end Reinforcement Learning neural networks on general tasks can speed up the training process for specific robotic applications. However, it remains open if these networks form general feature extractors and a hierarchical organization that are reused as apparent ...
withdrawn-rejected-submissions
The paper analyzes neuron activations for neural networks trained via RL to perform reaching with planar robot arms. This analysis includes an evaluation of the correlation between neurons of different models trained to control arms with different degrees-of-freedom. In performing these evaluations, the paper proposes ...
train
[ "qpRRH9UvWqu", "yxS0PGeDbRI", "SQ9AiSN-LuA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "### Summary\n\nThe authors investigate individual neuron activations over time, and compare the neuron activations within individual networks all-to-all and layer wise. \n\nA distance metric is introduced and utilized to set up a pruning procedure to maximize the information density in learned neural networks and...
[ 5, 4, 5 ]
[ 4, 4, 3 ]
[ "iclr_2021_hypDstHla7", "iclr_2021_hypDstHla7", "iclr_2021_hypDstHla7" ]
iclr_2021_trj4iYJpIvy
Approximation Algorithms for Sparse Principal Component Analysis
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and multivariate statistics. To improve the interpretability of PCA, various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis (SPCA). In ...
withdrawn-rejected-submissions
The paper proposes three algorithms for the sparse PCA problem, where one imposes the additional constraint that the vectors have a small number of non-zero entries. The proposed algorithms run in polynomial time and achieve provable approximation guarantees on the accuracy and sparsity. The reviewers identified the fo...
train
[ "f2jh9fM2V41", "Fmn0aFGFYeY", "SNFRBnJv6pS", "qMiqFmXG0Qr", "xAXBrLyyvks", "wuR1dDSpbhK", "h0Oa1Nl8dz5", "PDimeKVgz7F" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the third reviewer for their detailed feedback. We agree that more thorough comparison to the works of [d’Aspremont et al., 2014] and [Papailiopoulos et al., 2013] would have been helpful in distinguishing our results. Namely, [d’Aspremont et al., 2014] evaluates the SDP relaxation of the problem without...
[ -1, -1, -1, -1, 4, 5, 4, 7 ]
[ -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "xAXBrLyyvks", "PDimeKVgz7F", "wuR1dDSpbhK", "h0Oa1Nl8dz5", "iclr_2021_trj4iYJpIvy", "iclr_2021_trj4iYJpIvy", "iclr_2021_trj4iYJpIvy", "iclr_2021_trj4iYJpIvy" ]
iclr_2021_xyEx4_lHqvB
Ensemble-based Adversarial Defense Using Diversified Distance Mapping
We propose an ensemble-based defense against adversarial examples using distance map layers (DMLs). Similar to fully connected layers, DMLs can be used to output logits for a multi-class classification model. We show in this paper how DMLs can be deployed to prevent transferability of attacks across ensemble members b...
withdrawn-rejected-submissions
The paper proposes a method to improve adversarial robustness by diversifying the ensemble. Novelty: As pointed out by several reviewers, promoting diversity of ensembles has been done in the literature, but there is still moderate novelty in proposing the DML layer. Empirical validations: The original submission ...
train
[ "4A2QeTBWkVi", "wjhdxxCsNE", "Qq9eBM2HjUT", "EJdofBpS6rJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Post-rebuttal:\nMy concerns are not addressed, so I would like to keep my original score.\n-------------------------------\nThis paper proposes to use the distance map layers for ensemble-based defense. By randomly choosing the centers to vary over classifiers, and imposing the I-covariance matrices to be dissimil...
[ 4, 5, 5, 5 ]
[ 3, 3, 3, 4 ]
[ "iclr_2021_xyEx4_lHqvB", "iclr_2021_xyEx4_lHqvB", "iclr_2021_xyEx4_lHqvB", "iclr_2021_xyEx4_lHqvB" ]
iclr_2021_tqc8n6oHCtZ
Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search
Although transformers have achieved impressive accuracies in various tasks in natural language processing, they often come with a prohibitive computational cost, that prevents their use in scenarios with limited computational resources for inference. This need for computational efficiency in inference has been addresse...
withdrawn-rejected-submissions
The paper attempts to reduce the computational cost of Transformer models. To this end, the authors generalize PoWER-BERT by proposing a variant of dropout that reduces training cost by randomly sampling a fraction of the length of a sequence to use at each layer. Further, a sandwich training method is used which trains a ...
val
[ "edTeUw-WAh", "ufF2ydueJe_", "vIQwxM6BT0-", "0Dn3PPqhQd4", "LtnvMDcG5tr", "iOVWTuidoIa", "erF4GmtY4Ln", "c8HNRTz3Fa", "yKDcZ1zqZx3", "hfSpA-BICvP" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper aims to make inference more efficient for finetuned contextual representation models such as BERT. The authors extend PowerBERT, a method introduced recently to perform efficient inference by dynamically reducing input tokens as the model goes deeper. The authors address two limitations of PowerBERT: t...
[ 5, -1, -1, -1, -1, -1, -1, 5, 4, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2021_tqc8n6oHCtZ", "iclr_2021_tqc8n6oHCtZ", "hfSpA-BICvP", "yKDcZ1zqZx3", "c8HNRTz3Fa", "edTeUw-WAh", "iclr_2021_tqc8n6oHCtZ", "iclr_2021_tqc8n6oHCtZ", "iclr_2021_tqc8n6oHCtZ", "iclr_2021_tqc8n6oHCtZ" ]
iclr_2021__TGlfdZOHY3
On Episodes, Prototypical Networks, and Few-Shot Learning
Episodic learning is a popular practice among researchers and practitioners interested in few-shot learning. It consists of organising training in a series of learning problems, each relying on small “support” and “query” sets to mimic the few-shot circumstances encountered during evaluation. In this paper, we in...
withdrawn-rejected-submissions
This paper is right on the borderline. It questions the utility of episodic training from a novel perspective, driven by a comparison to NCA, with thorough experiments. The hypothesis that more pairwise comparisons per batch/episode benefit learning is also quite interesting, but some reviewers didn’t feel this was con...
train
[ "3r0gbxU-Dfd", "FgXSA1gvj_B", "9NHkZZJK-Oo", "1N85PZYIy5u", "ugIVEZg1Cpa", "bvSgT0ilRNP", "Rn6JpS-8Wn6", "Zbu5UKkWm9s", "_QYoOFQo_Yv", "lt-Ql_h2js", "A_g1zDJJWzi", "UVTuwdBzdAe", "lczoUYgg_fq", "KoBxsNgB3Qy", "GtQqX7QbymE", "GJu6ea7LFis", "zd5p5m0VIcE", "tvrcW8Eyefp", "oVtw72_1b6...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary: This paper proposes to use neighborhood component analysis in lieu of prototype loss to train embedding functions of few-shot learning. This method takes full advantage of relations between all sampled points in an episode to facilitate learning, and it removes the distinction between support and query sa...
[ 5, 4, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 5, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021__TGlfdZOHY3", "iclr_2021__TGlfdZOHY3", "Rn6JpS-8Wn6", "GtQqX7QbymE", "iclr_2021__TGlfdZOHY3", "iclr_2021__TGlfdZOHY3", "Zbu5UKkWm9s", "lczoUYgg_fq", "3r0gbxU-Dfd", "A_g1zDJJWzi", "oVtw72_1b6S", "ugIVEZg1Cpa", "KoBxsNgB3Qy", "_QYoOFQo_Yv", "tvrcW8Eyefp", "zd5p5m0VIcE", "FgX...
iclr_2021_jAJrc-kzVd0
Revisiting Prioritized Experience Replay: A Value Perspective
Reinforcement learning (RL) agents need to learn from past experiences. Prioritized experience replay that weighs experiences by their surprise (the magnitude of the temporal-difference error) significantly improves the learning efficiency for RL algorithms. Intuitively, surprise quantifies the unexpectedness of an exp...
withdrawn-rejected-submissions
This paper is certainly on the way to be a solid contribution: it's an interesting research question, and we need more understanding papers (rather than yet another algorithmic trick paper). The reviewers thought the paper was not yet ready. The reviewers suggested: (1) more motivation of why the proposed metrics were...
train
[ "bMlgP87rVrx", "oUHor_0p6UT", "-Mi41C_o_ke", "gZJWWUPL5YL", "EFYsYF9U5RF", "ybQrU_KUh9", "LzGCgEsfVk", "QxkRd5ozRRy", "DB3Atcpj9_" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank all the reviewers for the insightful comments. Based on them, we have substantially improved the manuscript by *(1)* better motivation: a new paragraph in introduction as well as a motivating example (Section 2.4), *(2)* deriving the lower bounds for EVB and EIV for soft Q-learning (Theorem 4.2) an...
[ -1, -1, -1, -1, -1, 4, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "iclr_2021_jAJrc-kzVd0", "ybQrU_KUh9", "LzGCgEsfVk", "QxkRd5ozRRy", "DB3Atcpj9_", "iclr_2021_jAJrc-kzVd0", "iclr_2021_jAJrc-kzVd0", "iclr_2021_jAJrc-kzVd0", "iclr_2021_jAJrc-kzVd0" ]
iclr_2021_R6tNszN_QfA
Adversarial Problems for Generative Networks
We are interested in the design of generative networks. The training of these mathematical structures is mostly performed with the help of adversarial (min-max) optimization problems. We propose a simple methodology for constructing such problems assuring, at the same time, consistency of the corresponding solution. We...
withdrawn-rejected-submissions
This paper proposed a new family of losses for GANs and showed that this family is quite general and encompasses a number of existing losses as well as some new loss functions. The paper experimentally compared the existing losses and the newly proposed losses. But the benefit of this family is not clear theoretically, a...
train
[ "Tv8JIuD3bl3", "D2U3b-rEVP", "2m_ljw3Eq_B", "aW5UkCZQWQc", "Axc493PfvM4", "QBIqYE8PKkw", "wCy49lcuPyo", "QssmiSKeZhV" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Overall, this paper has an impact on understanding the core of generative models with adversarial optimization problems.\nThis paper shows diverse possibilities for formulating generative model optimization problems that researchers can further investigate for better performance. \nAlso, this paper ...
[ 7, -1, -1, -1, -1, 4, 6, 4 ]
[ 3, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_R6tNszN_QfA", "Tv8JIuD3bl3", "QssmiSKeZhV", "wCy49lcuPyo", "QBIqYE8PKkw", "iclr_2021_R6tNszN_QfA", "iclr_2021_R6tNszN_QfA", "iclr_2021_R6tNszN_QfA" ]
iclr_2021_FPpZrRfz6Ss
To Learn Effective Features: Understanding the Task-Specific Adaptation of MAML
Meta learning, an effective way of learning unseen tasks with few samples, is an important research area in machine learning. Model Agnostic Meta-Learning (MAML) (Finn et al., 2017) is one of the most well-known gradient-based meta learning algorithms, which learns the meta-initialization through t...
withdrawn-rejected-submissions
MAML is a well-known gradient-based bi-level optimization method for learning a good initialization over a set of relevant tasks. This paper investigates different variants of MAML, providing empirical analysis of two new algorithms (RDP and MCL). Reviewers agree that it is interesting to see what the change of optimization mechan...
train
[ "vsXDm2uaW8", "Bloq7TJggN6", "R7cXz4sCTjN", "epKM0dLB9Fv", "Qvzke5sJH_7", "DZJFyO0tTGe", "UGH212E0Vyn", "FOzn_m5LnuI", "uvRdnjX-495", "SVDp2_-8Cn0" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper adds to a series of papers that look at understanding how and why MAML works as well as it does, and what exactly is going on in the adaptation process. There are many open questions in this regard, and so I think the topic of the paper is interesting and of interest to the community. However I found th...
[ 5, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ 3, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2021_FPpZrRfz6Ss", "vsXDm2uaW8", "vsXDm2uaW8", "vsXDm2uaW8", "SVDp2_-8Cn0", "uvRdnjX-495", "FOzn_m5LnuI", "iclr_2021_FPpZrRfz6Ss", "iclr_2021_FPpZrRfz6Ss", "iclr_2021_FPpZrRfz6Ss" ]
iclr_2021_yvzMA5im3h
Graph Joint Attention Networks
Graph attention networks (GATs) have been recognized as powerful tools for learning in graph structured data. However, how to enable the attention mechanisms in GATs to smoothly consider both structural and feature information is still very challenging. In this paper, we propose Graph Joint Attention Networks (JATs) to...
withdrawn-rejected-submissions
One referee recommends acceptance, while three referees recommend rejection. All referees agree that augmenting GAT with structural information is an interesting direction to explore; however, they raised concerns about the empirical validation of the method, the related work covered, as well as the discussion of insig...
train
[ "F_xK-MU9wn9", "RZEebxFHjbR", "45ILCcPqu2", "4Raf578gtzC", "uJ-qpTt8g7", "sNklQ3TGL4", "L-a4aKzcjbb", "-h13_USbSNt", "f6GxQ-VXjrG", "AO5-nAQ6rJl", "ZGR-EAJHMc7", "TZtTIeZfNhu", "lESlAN8wzD4", "oKNnKFaDadh", "l7mzZG7gyep" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper discusses how to introduce some form of non-local structural information into the node representations by collating cluster similarity information from a topological subspace clustering with a classical spatial graph convolution. Both kinds of information are mediated by attentional mechanisms which also take car...
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7 ]
[ 5, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2021_yvzMA5im3h", "iclr_2021_yvzMA5im3h", "iclr_2021_yvzMA5im3h", "ZGR-EAJHMc7", "iclr_2021_yvzMA5im3h", "lESlAN8wzD4", "-h13_USbSNt", "AO5-nAQ6rJl", "l7mzZG7gyep", "F_xK-MU9wn9", "TZtTIeZfNhu", "45ILCcPqu2", "RZEebxFHjbR", "iclr_2021_yvzMA5im3h", "iclr_2021_yvzMA5im3h" ]
iclr_2021_lJuOUWlAC8i
Learning Contextualized Knowledge Graph Structures for Commonsense Reasoning
Recently, neural-symbolic architectures have achieved success on commonsense reasoning through effectively encoding relational structures retrieved from external knowledge graphs (KGs) and obtained state-of-the-art results in tasks such as (commonsense) question answering and natural language inference. However, curren...
withdrawn-rejected-submissions
The paper proposes an interesting step in the direction of neuro-symbolic reasoning. While there is no consensus among reviewers about the key novelty of the method, all acknowledge the interest of the direction. All of them also recognize that the submission improved greatly during the discussion phase: clarification ...
train
[ "55ZmquMImnm", "q9h9xwYPmS1", "26h38yh3hAK", "HCRGfkeNLf", "DaksNRwDid", "bxYn5ektF5E", "RDL_XNdchnc", "DrFIkRqacfn", "Z8UFagFl1dW", "PCGBH7odeI-", "UdpljQPZXK", "HfH-ZxDS723", "bDcEr4opJ5g", "DHwaMj0U8BT", "VPqWeqgASAp", "1p6wG2vjvv", "kJxGp7rAsTW", "JzSfOkUXAkc" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "=== Summary ===\n\nIn this paper, the authors propose a new approach towards incorporating knowledge graphs (KG) into commonsense QA frameworks. KGs are helpful for adding structured \"world\" information, which neural-symbolic architectures can leverage to do commonsense reasoning, e.g., \"what is the expensive r...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2021_lJuOUWlAC8i", "iclr_2021_lJuOUWlAC8i", "UdpljQPZXK", "26h38yh3hAK", "HfH-ZxDS723", "HCRGfkeNLf", "55ZmquMImnm", "Z8UFagFl1dW", "PCGBH7odeI-", "JzSfOkUXAkc", "iclr_2021_lJuOUWlAC8i", "HCRGfkeNLf", "55ZmquMImnm", "JzSfOkUXAkc", "kJxGp7rAsTW", "kJxGp7rAsTW", "26h38yh3hAK", ...
iclr_2021_uIc4W6MtbDA
ERMAS: Learning Policies Robust to Reality Gaps in Multi-Agent Simulations
Policies for real-world multi-agent problems, such as optimal taxation, can be learned in multi-agent simulations with AI agents that emulate humans. However, simulations can suffer from reality gaps as humans often act suboptimally or optimize for different objectives (i.e., bounded rationality). We introduce ϵ-Robust...
withdrawn-rejected-submissions
The paper presents a multi-agent RL algorithm where the rewards of the other agents are only known up to some accuracy. The setting is somewhat restrictive, in the sense that the transition is assumed to be known. It would perhaps have been more interesting for the paper to also consider unknown transitions, so as to b...
train
[ "ahC5kuaM5Ro", "Odgj86dEWyG", "W40J-v6V6BD", "Wn-L440Zdu6", "gdFyjDBRGtn", "Pzos_xIApLT", "8gglch0PQYH", "IlHbHhL63Q3", "2uhs6R15_Xn", "dIMpqeQAldl", "Vu7tfyAcRNg", "pwRMUOdv_Od", "U_HgesTKfCw", "HnN8ueVOnEI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "Summary:\n\nThis paper tackles robust RL under the multi-agent setting. They formulate the multi-agent adversarial robustness problem as a nested optimization problem and propose a practical algorithm (ERMAS) to solve it. Theoretical proof and empirical study on two environments are provided to demonstrate the eff...
[ 6, 6, 6, -1, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 1, 3, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_uIc4W6MtbDA", "iclr_2021_uIc4W6MtbDA", "iclr_2021_uIc4W6MtbDA", "gdFyjDBRGtn", "2uhs6R15_Xn", "iclr_2021_uIc4W6MtbDA", "U_HgesTKfCw", "HnN8ueVOnEI", "Vu7tfyAcRNg", "Odgj86dEWyG", "W40J-v6V6BD", "ahC5kuaM5Ro", "Pzos_xIApLT", "iclr_2021_uIc4W6MtbDA" ]
iclr_2021_TMUR2ovJfjE
Co-complexity: An Extended Perspective on Generalization Error
It is well known that the complexity of a classifier's function space controls its generalization gap, with two important examples being VC-dimension and Rademacher complexity (R-Complexity). We note that these traditional generalization error bounds consider the ground truth label generating function (LGF) to be fixed...
withdrawn-rejected-submissions
The paper considers generalization in setups in which the training sample may be generated by a different distribution than the one generating the test data. This sounds much like transfer learning, and similarly sounding considerations, of a space of possible generating distributions, ways of measuring the statistica...
train
[ "n1cQvpurLQ1", "Dn74CUPVmrX", "MfOCbTQyR_8", "oJXKaj_auKv", "bhL1IkVV3MJ", "nWd0cxQOYum", "COWs8TUj__f", "Q6oryK0UcEb", "TQ7y_O6pETh", "n4VWg7Vi6kY" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all reviewers for their critical reviews and valuable suggestions. Here we summarize the changes made in the revised paper.\n\n**Motivation**: Changed the write-up of section 2, now focusing on how the unknowability of the label generating function can affect the true generalization gap. Imp...
[ -1, -1, -1, -1, -1, -1, 4, 5, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "iclr_2021_TMUR2ovJfjE", "MfOCbTQyR_8", "COWs8TUj__f", "Q6oryK0UcEb", "TQ7y_O6pETh", "n4VWg7Vi6kY", "iclr_2021_TMUR2ovJfjE", "iclr_2021_TMUR2ovJfjE", "iclr_2021_TMUR2ovJfjE", "iclr_2021_TMUR2ovJfjE" ]
iclr_2021_t4hNn7IvNZX
Certified Distributional Robustness via Smoothed Classifiers
The robustness of deep neural networks against adversarial example attacks has received much attention recently. We focus on certified robustness of smoothed classifiers in this work, and propose to use the worst-case population loss over noisy inputs as a robustness metric. Under this metric, we provide a tractable up...
withdrawn-rejected-submissions
The authors present a framework for deriving distributional robustness certificates for smoothed classifiers under perturbations of the input distribution bounded under the Wasserstein metric. Several reviewers raised concerns regarding the correctness of results presented in the initial version of the paper. While the...
train
[ "yK85iVt5xY", "pFpZLW2hKS", "rdFHST7UED_", "BFKUAth0zj", "H-pXBMhcCUz", "4JZ9-Doot9A", "0ri6Jv6GeRB", "SuYEd0S-tdM" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors use some of the theory from optimal transport to certify the degree of robustness of an image classifier to adversarial perturbation. The theoretical contributions of the work, however, are largely already present, or easily deduced from results that are either already published or available online in ...
[ 2, -1, -1, -1, -1, 2, 3, 6 ]
[ 5, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_t4hNn7IvNZX", "4JZ9-Doot9A", "yK85iVt5xY", "0ri6Jv6GeRB", "SuYEd0S-tdM", "iclr_2021_t4hNn7IvNZX", "iclr_2021_t4hNn7IvNZX", "iclr_2021_t4hNn7IvNZX" ]
iclr_2021_wiSgdeJ29ee
Fine-Tuning Offline Reinforcement Learning with Model-Based Policy Optimization
In offline reinforcement learning (RL), we attempt to learn a control policy from a fixed dataset of environment interactions. This setting has the potential benefit of allowing us to learn effective policies without needing to collect additional interactive data, which can be expensive or dangerous in real-world syste...
withdrawn-rejected-submissions
This paper proposes a method for offline reinforcement learning methods with model-based policy optimization where they first learn a model of the environment to learn the transition dynamics, a critic and the policy in an offline manner. They basically learn the model by training an ensemble of probabilistic dynamics ...
train
[ "FhS5g2QUXeQ", "EiXORgsKHPC", "Miz-bBb_TbF", "JqCY4mdsPD", "PZo5VAWLhfN", "p8AxY4SHUdH", "fhz3WK-usXK", "1JDVGBbykH1", "kY_l2uOooiH", "Trd5ruDLsuE" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "**Final recommendation**\nI do not recommend accepting the paper. The results have been greatly improved. They now look decent and I have improved my score as a result. However I think the contributions are still not clearly highlighted. I however encourage the authors to improve their paper. \n\n**Summary**\nThis...
[ 5, 4, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ 4, 4, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "iclr_2021_wiSgdeJ29ee", "iclr_2021_wiSgdeJ29ee", "1JDVGBbykH1", "FhS5g2QUXeQ", "kY_l2uOooiH", "Trd5ruDLsuE", "EiXORgsKHPC", "iclr_2021_wiSgdeJ29ee", "iclr_2021_wiSgdeJ29ee", "iclr_2021_wiSgdeJ29ee" ]
iclr_2021_DE0MSwKv32y
Trust, but verify: model-based exploration in sparse reward environments
We propose the trust-but-verify (TBV) mechanism, a new method which uses model uncertainty estimates to guide exploration. The mechanism augments graph search planning algorithms with the capacity to deal with the learned model's imperfections. We identify a certain type of frequent model errors, which we dub false loops, and whi...
withdrawn-rejected-submissions
After reading the reviews and the authors' comments, the meta-reviewer thinks the paper is not ready for publication in a high-impact conference like ICLR. The paper is not well positioned with respect to the literature, and the proposed techniques are not well discussed in relation to predominant paradigms like o...
train
[ "B886q-c_X2K", "u09GJAWGzG", "H_YmCztnVvz", "6Sq6Npq2CYt", "nAKAovmTOnF", "l5v-joks3-l", "AdWpIwRGmV", "ytdyrcImd0L" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thank you for the review. We have submitted a new version of the paper with several improvements. In particular we added more references and described the formalism for the method (Section 3.2). The answers to your questions can be found below:\nAd 1. We found out that the choice of threshold for RANDOM does not s...
[ -1, -1, -1, -1, 4, 2, 6, 4 ]
[ -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "AdWpIwRGmV", "l5v-joks3-l", "nAKAovmTOnF", "ytdyrcImd0L", "iclr_2021_DE0MSwKv32y", "iclr_2021_DE0MSwKv32y", "iclr_2021_DE0MSwKv32y", "iclr_2021_DE0MSwKv32y" ]
iclr_2021_V4AVDoFtVM
What About Taking Policy as Input of Value Function: Policy-extended Value Function Approximator
The value function lies at the heart of Reinforcement Learning (RL), defining the long-term evaluation of a policy in a given state. In this paper, we propose the Policy-extended Value Function Approximator (PeVFA), which extends the conventional value function to be not only a function of state but also an explicit policy repr...
withdrawn-rejected-submissions
This paper proposes to consider value functions as explicit functions of policies, in order to allow generalization not only on the state(action) space, but also on the policy space. The initial reviews assessed that the paper was dealing with an important RL topic, but also raised many concerns about the position to p...
train
[ "FU1AfFHCYt", "BTm8BSd4DZg", "_UzIEV2fmUq", "6oKP1lRqxf", "Y0Vs4UvYQ4j", "zG5FfclLQFX", "E5pz7DcQ5it", "seapvb1sRX7", "pKyZjPh59P0", "Dax3UooqeXP", "q8MUjBjSa4n", "Rtyz7MejdAP", "fEMvfksrdvp", "wQ1ahVdRJcI", "A1wjenoBbRU", "iUi_71E0ASb", "-baeGM-WyJQ" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe authors propose PeVFA: a value function able to evaluate the expected return of multiple policies. They do so by extending the conventional value function, allowing it to receive as input the parameter (or a representation) of the policy. The authors study the local generalization property of PeVFA...
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2021_V4AVDoFtVM", "6oKP1lRqxf", "iclr_2021_V4AVDoFtVM", "q8MUjBjSa4n", "zG5FfclLQFX", "E5pz7DcQ5it", "FU1AfFHCYt", "fEMvfksrdvp", "Dax3UooqeXP", "-baeGM-WyJQ", "Rtyz7MejdAP", "iUi_71E0ASb", "A1wjenoBbRU", "iclr_2021_V4AVDoFtVM", "iclr_2021_V4AVDoFtVM", "iclr_2021_V4AVDoFtVM", "...
iclr_2021_48goXfYCVFX
Interpretable Relational Representations for Food Ingredient Recommendation Systems
Supporting chefs with ingredient recommender systems to create new recipes is challenging, as good ingredient combinations depend on many factors like taste, smell, cuisine style, texture among others. There have been few attempts to address these issues using machine learning. Importantly, useful models do obvious...
withdrawn-rejected-submissions
There is a broad consensus that this paper explores an interesting and novel problem space. Nonetheless, in their initial assessment, the reviewers pointed to a few limitations of the paper including lack of strong baselines, lack of an ablation study, and weaker results according to the HIT@10 metric. The authors pr...
val
[ "wx_asc3GUk2", "88PQ9ekLtnT", "nnMaIcPjDZ", "5L9tBzINyVP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Reject.\nSummary\nThis paper works on the problem of creating new recipes. It applies recommendation approaches to it. The paper uses two approaches for explainability and conducts experiments on two real-world datasets.\n\nStrengths\n1. Explainability in recommendation is an important research problem.\n2. The re...
[ 3, 5, 7, 5 ]
[ 4, 5, 3, 4 ]
[ "iclr_2021_48goXfYCVFX", "iclr_2021_48goXfYCVFX", "iclr_2021_48goXfYCVFX", "iclr_2021_48goXfYCVFX" ]
iclr_2021_EVV259WQuFG
Machine Reading Comprehension with Enhanced Linguistic Verifiers
We propose two linguistic verifiers for span-extraction style machine reading comprehension to respectively tackle two challenges: how to evaluate the syntactic completeness of predicted answers and how to utilize the rich context of long documents. Our first verifier rewrites a question through replacing its interroga...
withdrawn-rejected-submissions
The authors propose two linguistic verifiers for improving extractive question answering when the question is answerable. The first replaces the interrogative in the question with candidate answers and evaluates the result both in isolation and in combination with the answer-containing sentence to do answer verificatio...
train
[ "aKQHVTKpOcR", "Jb4GJKX84a2", "jGPnPss_p6z", "nJeLT-aK9mF", "p1RyhcvKbpY", "gpu8qbM_jZk", "jMeAGbTm_D3", "1qjgU3df10" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "MACHINE READING COMPREHENSION WITH ENHANCED LINGUISTIC VERIFIERS\n\nThe authors propose two linguistic verifiers for improving extractive question answering performance when the question is answerable. The first replaces interrogatives in the question (who etc.) with candidate answers and evaluates this both in is...
[ 5, 7, -1, -1, -1, -1, 6, 5 ]
[ 3, 5, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2021_EVV259WQuFG", "iclr_2021_EVV259WQuFG", "jMeAGbTm_D3", "aKQHVTKpOcR", "Jb4GJKX84a2", "1qjgU3df10", "iclr_2021_EVV259WQuFG", "iclr_2021_EVV259WQuFG" ]
iclr_2021_Gc4MQq-JIgj
Reconnaissance for reinforcement learning with safety constraints
Practical reinforcement learning problems are often formulated as constrained Markov decision process (CMDP) problems, in which the agent has to maximize the expected return while satisfying a set of prescribed safety constraints. In this study, we consider a situation in which the agent has access to the generative mo...
withdrawn-rejected-submissions
This paper investigates safe reinforcement learning with distinct reward and safety functions. The authors present theoretical analysis and simulation results. The representation of safety is a critical step. The authors define the safety function values based on various events and use a linear combination of the...
train
[ "RT0Hqb3LVaJ", "pSXaWYF164B", "0DZ0etVVVV1", "opVELNioL6y", "xo99iv6FYdv", "2iFPAYIWXXi", "kh79oFo2cs_", "iHDiffAL3FY", "WLlDpCjhDTA", "iepV_udRuYn" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We thank the reviewers for taking their time to read our paper and making comments and suggestions. \n\nOur modification is summarized as follows:\n- We emphasize the crash rate in Table 1 and Table 2 to better convey our intention. Also, we added the tables with more penalties in Appendix H.\n- We stated the dime...
[ -1, -1, -1, -1, -1, -1, -1, 4, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "iclr_2021_Gc4MQq-JIgj", "iHDiffAL3FY", "iHDiffAL3FY", "WLlDpCjhDTA", "WLlDpCjhDTA", "iHDiffAL3FY", "iepV_udRuYn", "iclr_2021_Gc4MQq-JIgj", "iclr_2021_Gc4MQq-JIgj", "iclr_2021_Gc4MQq-JIgj" ]
iclr_2021_0gfSzsRDZFw
Ablation Path Saliency
We consider the saliency problem for black-box classification. In image classification, this means highlighting the part of the image that is most relevant for the current decision. We cast the saliency problem as finding an optimal ablation path between two images. An ablation path consists of a sequence of ever...
withdrawn-rejected-submissions
While the reviewers in general liked the ideas proposed in the paper, the experimental evaluation has several issues that need fixing before the paper can be accepted.
train
[ "vnj-8b0Z7VX", "OJuHSWfh4R4", "GoGQMMpqf4K", "9VliPTdnxNA", "hRjsZ5VG3J", "BiCPDWtmVR7", "MYefZVtrAwU", "Ln02wIJR2HD" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The changes in this revision are:\n- A quantitative evaluation using the pointing game as metric on a few hundred images (we also compare to the meaningful perturbation approach by Fong & Vedaldi, although the implementation of their method that we use is probably suboptimal). (Section 7 and Appendix D)\n- Clearer...
[ -1, -1, -1, -1, -1, 4, 4, 6 ]
[ -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2021_0gfSzsRDZFw", "9VliPTdnxNA", "BiCPDWtmVR7", "MYefZVtrAwU", "Ln02wIJR2HD", "iclr_2021_0gfSzsRDZFw", "iclr_2021_0gfSzsRDZFw", "iclr_2021_0gfSzsRDZFw" ]
iclr_2021_k2Hm5Szfl5Z
A new framework for tensor PCA based on trace invariants
We consider the Principal Component Analysis (PCA) problem for tensors T ∈ (R^n)^{⊗k} of large dimension n and of arbitrary order k ≥ 3. It consists in recovering a spike v_0^{⊗k} (related to a signal vector v_0 ∈ R^n) corrupted by a Gaussian noise tensor Z ∈ (R^n)^{⊗k} such that T = β v_0^{⊗k} + Z, where β is the signal-to-noise ratio. In this paper...
withdrawn-rejected-submissions
This paper studies the tensor principal component analysis problem, where we observe a tensor T = \beta v^{\otimes k} + Z where v is a spike and Z is a Gaussian noise tensor. The goal is to recover an accurate estimate to the spike for as small a signal-to-noise ratio \beta as possible. There has been considerable inte...
train
[ "x7qLBJhL7VD", "zlSqZaj0Irw", "EyrylnDj4To", "bRqAc8z_ky4", "zhKZTsZCE2Z", "oMQmeSamrxp", "ARRUyGmhYcy", "NvIwXUMyrB", "pRAsk8Oa1Tk" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We would like to thank all reviewers for the valuable comments and constructive feedback, which helped us to significantly improve the quality of the presentation of this work.\n\nWe have uploaded a revised version, in order to take into account the reviewers' comments and to clarify the notations and experimental details....
[ -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "iclr_2021_k2Hm5Szfl5Z", "NvIwXUMyrB", "NvIwXUMyrB", "NvIwXUMyrB", "pRAsk8Oa1Tk", "ARRUyGmhYcy", "iclr_2021_k2Hm5Szfl5Z", "iclr_2021_k2Hm5Szfl5Z", "iclr_2021_k2Hm5Szfl5Z" ]
iclr_2021__77KiX2VIEg
On the Effectiveness of Deep Ensembles for Small Data Tasks
Deep neural networks represent the gold standard for image classification. However, they usually need large amounts of data to reach superior performance. In this work, we focus on image classification problems with a few labeled examples per class and improve sample efficiency in the low data regime by usi...
withdrawn-rejected-submissions
The paper received negative and borderline reviews. The reviewers have raised several concerns about the novelty of the approach and the lack of convincing experiments. The rebuttal only partially addresses these concerns. Overall, the area chair agrees with the reviewers' assessment and follows their recommendation.
train
[ "997dEeb_37E", "5EYcPPkoc0S", "1CiuqKDN9Np", "ZR9ckL_z2m", "DP1vRSICQat", "mYmtEmU3n7T", "70xDscr5jNM", "PpkbaEjVXM", "q9RX8UD3EJY" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\nIn this paper, the authors provide a series of experiments in which they show that when dealing with a very small dataset, a single very deep network is outperformed by an ensemble of multiple shallower networks. More specifically, the authors artificially create training sets from CIFAR10 and CIFAR100 da...
[ 5, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ 5, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2021__77KiX2VIEg", "q9RX8UD3EJY", "PpkbaEjVXM", "70xDscr5jNM", "997dEeb_37E", "iclr_2021__77KiX2VIEg", "iclr_2021__77KiX2VIEg", "iclr_2021__77KiX2VIEg", "iclr_2021__77KiX2VIEg" ]
iclr_2021_7IElVSrNm54
Zero-shot Fairness with Invisible Demographics
In a statistical notion of algorithmic fairness, we partition individuals into groups based on some key demographic factors such as race and gender, and require that some statistics of a classifier be approximately equalized across those groups. Current approaches require complete annotations for demographic factors, o...
withdrawn-rejected-submissions
The paper studies the problem of satisfying group-based fairness constraints in the situation where some demographics are not available in the training dataset. The paper proposes to disentangle the predictions from the demographic groups using adversarial distribution-matching on a "perfect batch" generated by a clust...
train
[ "2mc2Ye6j8Al", "ulj8lebEWJF", "mS_4BZlGl5y", "0oE3uEFYWGV", "cXXGTBbh_U0", "g_80V50W0qA", "5zdHJslW8GH", "A78XEV5J7WM", "cHnzSo244NH" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "############# Summary of contributions ##############\n\nThis paper introduces the problem of enforcing group-based fairness for “invisible demographics,” which they define to be demographic categories that are not present in the training dataset. They assume access to a “context set,” which is an additional unlab...
[ 6, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ 5, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2021_7IElVSrNm54", "2mc2Ye6j8Al", "iclr_2021_7IElVSrNm54", "A78XEV5J7WM", "5zdHJslW8GH", "cHnzSo244NH", "iclr_2021_7IElVSrNm54", "iclr_2021_7IElVSrNm54", "iclr_2021_7IElVSrNm54" ]
iclr_2021_Qpik5XBv_1-
Language Controls More Than Top-Down Attention: Modulating Bottom-Up Visual Processing with Referring Expressions
How to best integrate linguistic and perceptual processing in multimodal tasks is an important open problem. In this work we argue that the common technique of using language to direct visual attention over high-level visual features may not be optimal. Using language throughout the bottom-up visual pathway, going from...
withdrawn-rejected-submissions
The paper proposes to improve image segmentation from referring expressions by integrating visual and language features using a UNet architecture and experimenting with top-down, bottom-up, and combined (dual) modulation. Review Summary: The submission received divergent reviews with scores spanning from 2 (R2) to ...
train
[ "bxS1-55Cv35", "YpGeltReJ7D", "OgJh8j0_RbH", "nd9FRVs5_w1", "D0WF1fjAEIM", "X02BGSPzzR3", "CiQc9XzOPRR", "vr4cjmbkui" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper presents a model for image segmentation from referring expressions which integrates linguistic representations of the referring expressions both at low-level and high-level stages of visual processing. They argue that this model is both more cognitively plausible and more successful than models which on...
[ 2, 4, -1, -1, -1, -1, 10, 5 ]
[ 4, 3, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2021_Qpik5XBv_1-", "iclr_2021_Qpik5XBv_1-", "bxS1-55Cv35", "YpGeltReJ7D", "CiQc9XzOPRR", "vr4cjmbkui", "iclr_2021_Qpik5XBv_1-", "iclr_2021_Qpik5XBv_1-" ]
iclr_2021_yOkSW62hqq2
Explicit Connection Distillation
One effective way to ease the deployment of deep neural networks on resource constrained devices is Knowledge Distillation (KD), which boosts the accuracy of a low-capacity student model by mimicking the learnt information of a high-capacity teacher (either a single model or a multi-model ensemble). Although great prog...
withdrawn-rejected-submissions
Knowledge distillation (KD) has been widely used in practice for deployment. In this paper, a variant of KD is proposed: given a student network, an auxiliary teacher architecture is temporarily generated via dynamic additive convolutions; dense feature connections are introduced to co-train the teacher and student mo...
train
[ "QzNeo0wQJ6y", "nxjS8cFWSBM", "A8MbY6y0luz", "N1_upMHchDN", "UWU45eT5cKt", "V1sRp83Xva", "redJDMG3qtf", "Kc2lqZjZERY", "HHONcP6jeRV", "gUpinnHZQcS", "PWYkzlvhjrg", "my3h4BMpsb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "Summary:\n\nThe paper proposes a new KD framework, i.e., Explicit Connection Distillation (ECD), which, unlike existing methods, designs a teacher network that is well aligned with the student architecture and trains both networks simultaneously using explicit dense feature connections. The proposed method is evalu...
[ 7, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2021_yOkSW62hqq2", "iclr_2021_yOkSW62hqq2", "iclr_2021_yOkSW62hqq2", "iclr_2021_yOkSW62hqq2", "iclr_2021_yOkSW62hqq2", "nxjS8cFWSBM", "QzNeo0wQJ6y", "QzNeo0wQJ6y", "A8MbY6y0luz", "my3h4BMpsb", "iclr_2021_yOkSW62hqq2", "iclr_2021_yOkSW62hqq2" ]
iclr_2021_A-Sp6CR9-AA
Sandwich Batch Normalization
We present Sandwich Batch Normalization (SaBN), a frustratingly easy improvement of Batch Normalization (BN) with only a few lines of code changes. SaBN is motivated by addressing the inherent feature distribution heterogeneity that can be identified in many tasks, which can arise from model heterogeneity (dynamic ...
withdrawn-rejected-submissions
This work proposes a novel reparameterization of batch normalization that is hypothesized to give a better inductive bias for learning several tasks, including neural architecture search, conditional image generation, adversarial robustness and neural style transfer. The reviewers indicate that this is useful and is of...
train
[ "d0kC6aS0PwN", "LHdhA2EXW26", "Eym6hzwDftS", "y2r-7HHR9gc", "2e6Mleg6t7r", "mvqMvy_0sZJ", "HxjJDsfHioc", "nXGSSHqYVf", "0rJiEndhMJ0", "13Sba6kaUko", "xEP-bw8P_f", "PQwL-QlslB", "PGI9ROlatYR", "4pBc6e8_UNd", "k6jUJz4rT3Y", "C_iybo7L4h" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Thanks for your constructive comments. \n\n1. *Why does SaBN benefit training? Better optimization or better generalization?*\n\nWe’ve added experiments and confirmed that the benefit of SaBN comes from both optimization and generalization aspects. In summary, we observe SaBN benefits the optimization in all four...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "y2r-7HHR9gc", "mvqMvy_0sZJ", "k6jUJz4rT3Y", "nXGSSHqYVf", "iclr_2021_A-Sp6CR9-AA", "0rJiEndhMJ0", "PGI9ROlatYR", "4pBc6e8_UNd", "2e6Mleg6t7r", "2e6Mleg6t7r", "C_iybo7L4h", "iclr_2021_A-Sp6CR9-AA", "iclr_2021_A-Sp6CR9-AA", "iclr_2021_A-Sp6CR9-AA", "iclr_2021_A-Sp6CR9-AA", "iclr_2021_A-...
iclr_2021_D2Fp_qheYu
Max-sliced Bures Distance for Interpreting Discrepancies
We propose the max-sliced Bures distance, a lower bound on the max-sliced Wasserstein-2 distance, to identify the instances associated with the maximum discrepancy between two samples. The max-slicing can be decomposed into two asymmetric divergences each expressed in terms of an optimal slice or equivalently a witness...
withdrawn-rejected-submissions
The goal in this submission is to find interpretable samples discriminating two probability distributions. In order to tackle this task, the authors propose to use a sliced variant of the Bures distance (where the slicing is implemented via a rank-one tensor) and the associated witness function, and illustrate the idea ...
train
[ "Y99PGewzVnB", "pRzPQ0qFaME", "aQc76zYMzc_", "tMEVkAoVJAK", "Uh1Fil5I7CP", "c0k8VtYr3_6" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "##########################################################################\n\nSummary: This work proposes the max-sliced Bures (MSB) distance, a distance metric for comparing probability distributions. This work adds to the existing literature on transport based slicing techniques for comparing probability distrib...
[ 7, -1, -1, -1, 5, 6 ]
[ 3, -1, -1, -1, 3, 2 ]
[ "iclr_2021_D2Fp_qheYu", "Y99PGewzVnB", "Uh1Fil5I7CP", "c0k8VtYr3_6", "iclr_2021_D2Fp_qheYu", "iclr_2021_D2Fp_qheYu" ]
iclr_2021_lE1AB4stmX
A Transformer-based Framework for Multivariate Time Series Representation Learning
In this work we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series. Pre-trained models can be potentially used for downstream tasks such as regression and classification, forecasting and missing value imputation. We evaluate our models on severa...
withdrawn-rejected-submissions
The authors extend the transformer to multivariate time series. The proposed extension is simple and lacks novelty. Some design decisions of the proposed method should be better justified. Similar works that also use the transformer for time series are not compared. Experimental results are not convincing. The settin...
train
[ "Ze39CB_SKh", "ACdCX68L8ct", "OXWK2_Z2rVF", "juzS56BfbYM", "lpkAXxyNWr", "OlSLOOBGka", "l_or0Qcein4", "maBr8Plcz79", "jd-P7rRbdMH", "rOIxA2nCnWm", "m9rdKI5mzDA", "19oOzHemDWU", "NUQ3azYYmH6", "zRGhPUAk9ft", "aW3yl4l1WAt", "bbo688fxyWQ", "kXkFZ6qDNcL", "P7KF2SGF-UZ", "hQ39VouyhS3"...
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "We have taken into account the feedback that we have received to significantly improve our manuscript.\n\nSpecifically, the revised manuscript version includes the following additions, updates and changes:\n\n- We have added a Table in the Appendix comparing performance between different masking schemes used as an...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "iclr_2021_lE1AB4stmX", "P7KF2SGF-UZ", "P7KF2SGF-UZ", "hQ39VouyhS3", "kXkFZ6qDNcL", "kXkFZ6qDNcL", "kXkFZ6qDNcL", "P7KF2SGF-UZ", "P7KF2SGF-UZ", "P7KF2SGF-UZ", "hQ39VouyhS3", "hQ39VouyhS3", "hQ39VouyhS3", "hQ39VouyhS3", "hyo7KA8V_jT", "aW3yl4l1WAt", "iclr_2021_lE1AB4stmX", "iclr_202...
iclr_2021_RHY_9ZVcTa_
On Linear Identifiability of Learned Representations
Identifiability is a desirable property of a statistical model: it implies that the true model parameters may be estimated to any desired precision, given sufficient computational resources and data. We study identifiability in the context of representation learning: discovering nonlinear data representations that are ...
withdrawn-rejected-submissions
This paper presents novel results on linear identifiability in discriminative models, with three of the four reviewers arguing for acceptance. The paper went through an extensive round of edits, which incorporated detailed responses to issues raised by the reviewers. While this paper would be a nice contribution to t...
train
[ "2rgCejHx-c", "bmLCUaeLMH", "MZkb6CMLpal", "cdNaOtZC2Kj", "mO2Xv1NWMK", "fIZoFRAaQa", "6aVabw0cViR", "X-ae7ARFSH7", "W8uqbZ5OFl", "OVQxprPC8Ec", "SqOtakzoYcb", "JwsivZGcCuB", "IiJ4fWRXFUn" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "%%% post-rebuttal %%\n\nThe authors replied to my comments related to the diversity condition in 3.2 and Theorem 1. Their answers did not fully clarify my concerns or misunderstandings, and it seems the authors didn't make any changes in that regard in the revised version. Except if I missed something in the revie...
[ 4, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2021_RHY_9ZVcTa_", "iclr_2021_RHY_9ZVcTa_", "iclr_2021_RHY_9ZVcTa_", "2rgCejHx-c", "X-ae7ARFSH7", "W8uqbZ5OFl", "2rgCejHx-c", "JwsivZGcCuB", "bmLCUaeLMH", "IiJ4fWRXFUn", "bmLCUaeLMH", "iclr_2021_RHY_9ZVcTa_", "iclr_2021_RHY_9ZVcTa_" ]
iclr_2021_2d34y5bRWxB
Regularization Cocktails for Tabular Datasets
The regularization of prediction models is arguably the most crucial ingredient that allows Machine Learning solutions to generalize well on unseen data. Several types of regularization are popular in the Deep Learning community (e.g., weight decay, drop-out, early stopping, etc.), but so far these are selected on an a...
withdrawn-rejected-submissions
The paper in its most recent version claims that deep neural networks, when very carefully regularized, outperform methods such as Gradient Boosting Trees on tabular data. This is genuinely surprising to me (in a good way), and I suppose it is to the community as well. The paper initially received negative reviews wit...
train
[ "PBQ0PlpdH-y", "3SUQZNxMehf", "0a51bbW8kl-", "pZfcPFeZbLx", "IwoKYRfXof5", "8chFtEnJi2y", "p_2byJ0IY_0", "M1oSEfxpdN", "X_PNN7sOyRF", "_8Jd0UH3hTQ", "wgDedykKZ8S", "xBn_qmqCYJp", "LerYO0J7jJ8", "TDS8-zHKz9P", "6o7lyTw6-G1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer" ]
[ "Summary: This work takes a step towards understanding the effect of automated selection of regularisation techniques and analyses the results across 42 structured datasets. It defines a search space over 13 regularisation techniques and employs one flavour of Bayesian Optimisation + Hyperband approach to find an o...
[ 6, 6, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2021_2d34y5bRWxB", "iclr_2021_2d34y5bRWxB", "iclr_2021_2d34y5bRWxB", "3SUQZNxMehf", "M1oSEfxpdN", "xBn_qmqCYJp", "iclr_2021_2d34y5bRWxB", "TDS8-zHKz9P", "iclr_2021_2d34y5bRWxB", "PBQ0PlpdH-y", "6o7lyTw6-G1", "LerYO0J7jJ8", "0a51bbW8kl-", "p_2byJ0IY_0", "iclr_2021_2d34y5bRWxB" ]
iclr_2021_3zaVN0M0BIb
Learning and Generalization in Univariate Overparameterized Normalizing Flows
In supervised learning, it is known that overparameterized neural networks with one hidden layer provably and efficiently learn and generalize, when trained using Stochastic Gradient Descent (SGD). In contrast, the benefit of overparameterization in unsupervised learning is not well understood. Normalizing flows (NFs) ...
withdrawn-rejected-submissions
Motivated by the fact that the benefit of overparameterization is less well understood in unsupervised learning than in supervised learning, this paper analyzes normalizing flows (NF) when the underlying neural network is a one-hidden-layer overparameterized network and proves that, for a certain class of NFs, one can efficient...
train
[ "6pmniE70go3", "b21X8Q7gkZe", "M7GtHerSJd8", "pr6JIfRFm9M", "Jy0ts0Xgpa2", "jwlsNb7O_Bv", "xuxKAfhQSi8", "taIB6QWOT6E", "6UvrYpaoWWq", "zF84QKnepqX", "nrtPqwNFnlE", "2K-q-chykAZ" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "*The authors may want to discuss existing results about optimization and generalization of overparameterized deep neural networks [1-3], which are related to this work. Besides, this work relies on the idea of the existence of a pseudo network which approximates the target function well, which may be related to [4...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "zF84QKnepqX", "zF84QKnepqX", "6UvrYpaoWWq", "6UvrYpaoWWq", "nrtPqwNFnlE", "nrtPqwNFnlE", "iclr_2021_3zaVN0M0BIb", "2K-q-chykAZ", "iclr_2021_3zaVN0M0BIb", "iclr_2021_3zaVN0M0BIb", "iclr_2021_3zaVN0M0BIb", "iclr_2021_3zaVN0M0BIb" ]