Schema (one record per submission):

  paper_id             string (length 19–21)
  paper_title          string (length 8–170)
  paper_abstract       string (length 8–5.01k)
  paper_acceptance     categorical (18 classes)
  meta_review          string (length 29–10k)
  label                categorical (3 classes)
  review_ids           list
  review_writers       list
  review_contents      list
  review_ratings       list
  review_confidences   list
  review_reply_tos     list
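The schema above can be mirrored as a record type. This is a minimal sketch, not an official loader: the class name is my own, and the field types are inferred from the column descriptors (string columns, categorical string columns, and parallel lists per discussion post).

```python
from dataclasses import dataclass
from typing import List

# Hypothetical record type mirroring the schema above. The six list fields
# are parallel: index i of each list describes the same discussion post.
@dataclass
class ReviewRecord:
    paper_id: str
    paper_title: str
    paper_abstract: str
    paper_acceptance: str        # one of 18 decision classes, e.g. "Accept (Poster)"
    meta_review: str
    label: str                   # dataset split label
    review_ids: List[str]
    review_writers: List[str]    # "author" or "official_reviewer"
    review_contents: List[str]
    review_ratings: List[int]    # -1 marks posts that carry no rating
    review_confidences: List[int]
    review_reply_tos: List[str]  # parent id: another review_id, or the paper_id
```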
iclr_2022_YeShU5mLfLt
On the Convergence of Certified Robust Training with Interval Bound Propagation
Interval Bound Propagation (IBP) is so far the base of state-of-the-art methods for training neural networks with certifiable robustness guarantees when potential adversarial perturbations present, while the convergence of IBP training remains unknown in existing literature. In this paper, we present a theoretical anal...
Accept (Poster)
Verifying robustness of neural networks is an important application in machine learning. The submission takes on this challenge via the interval bound propagation (IBP) framework and provides a theoretical analysis on the training procedure. They establish, in the large network width case, that the certification via IBP...
val
[ "qP_FCpVREqF", "vjXz-Ft8DAS", "6gnQA0gg1LV", "aUyUlaB0Vy", "FrjhGCNwJJg", "KaV7AV5RN-x", "nUDfwvIEtq", "YZoAeBdqjPW", "PLUeKVnvkB8", "f8PKE6JCaPR", "r6N1m1PQRw_", "-bxHa4Iiv-w" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " \nWe thank the reviewer for clarifying the comments and acknowledging our improvements for the presentation. The suggestions are helpful for us to make the paper more clear and easier to understand.\n\nFor Theorem 1, we want to further clarify that our theorem contains two folds:\n* Given an $\\epsilon$, with th...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "6gnQA0gg1LV", "iclr_2022_YeShU5mLfLt", "FrjhGCNwJJg", "PLUeKVnvkB8", "vjXz-Ft8DAS", "vjXz-Ft8DAS", "vjXz-Ft8DAS", "vjXz-Ft8DAS", "r6N1m1PQRw_", "-bxHa4Iiv-w", "iclr_2022_YeShU5mLfLt", "iclr_2022_YeShU5mLfLt" ]
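The parallel lists in a record encode a discussion forest: a review_reply_tos entry equal to the paper_id marks a top-level post (an official review), while any other value points at the parent post. A minimal sketch using the ids from the first record above (the variable names are my own):

```python
from collections import defaultdict

paper_id = "iclr_2022_YeShU5mLfLt"
review_ids = ["qP_FCpVREqF", "vjXz-Ft8DAS", "6gnQA0gg1LV", "aUyUlaB0Vy",
              "FrjhGCNwJJg", "KaV7AV5RN-x", "nUDfwvIEtq", "YZoAeBdqjPW",
              "PLUeKVnvkB8", "f8PKE6JCaPR", "r6N1m1PQRw_", "-bxHa4Iiv-w"]
review_reply_tos = ["6gnQA0gg1LV", paper_id, "FrjhGCNwJJg", "PLUeKVnvkB8",
                    "vjXz-Ft8DAS", "vjXz-Ft8DAS", "vjXz-Ft8DAS", "vjXz-Ft8DAS",
                    "r6N1m1PQRw_", "-bxHa4Iiv-w", paper_id, paper_id]
review_ratings = [-1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6]

# Group each post under its parent id.
children = defaultdict(list)
for rid, parent in zip(review_ids, review_reply_tos):
    children[parent].append(rid)

# Posts replying directly to the paper are the official reviews;
# their indices line up with the non-(-1) entries of review_ratings.
top_level = children[paper_id]
ratings = dict(zip(review_ids, review_ratings))
scores = [ratings[rid] for rid in top_level]
```

Everything else (author responses, reviewer follow-ups) hangs under one of these three threads.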
iclr_2022_X_hByk2-5je
Lossless Compression with Probabilistic Circuits
Despite extensive progress on image generation, common deep generative model architectures are not easily applied to lossless compression. For example, VAEs suffer from a compression cost overhead due to their latent variables. This overhead can only be partially eliminated with elaborate schemes such as bits-back codi...
Accept (Spotlight)
The paper revisits lossless compression using deep architecture. In contrast to main stream approaches, it suggests to make use of probabilistic circuits, introducing a novel class of tractable lossless compression models. Overall, the reviews agree that this is an interesting direction and a novel approach. I fully ag...
train
[ "K40uDq_2kRH", "4nBgU_3zn9N", "aS2zgIPBgv4", "tmZXCJaAuXw", "EqrpET_oNEH", "qj32rF8QmM4", "e1z8-NllNF", "qF7eUAc3-u", "ccI_GZ5Opk6", "32Ts3HNjidm", "NloZcdCHKNW", "9o6JYz6OfEm", "ZFU8TFz7HhT", "5a3byG6_yvb" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the helpful comments. We will rephrase the corresponding sentences according to your suggestions and discuss runtime vs. bitrate in more detail to better position the proposed compressor.", " Thank you for recognizing the efficiency of the PC-based compressor compared to autoregressive models. We ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "tmZXCJaAuXw", "EqrpET_oNEH", "iclr_2022_X_hByk2-5je", "ccI_GZ5Opk6", "qF7eUAc3-u", "5a3byG6_yvb", "ZFU8TFz7HhT", "9o6JYz6OfEm", "32Ts3HNjidm", "aS2zgIPBgv4", "iclr_2022_X_hByk2-5je", "iclr_2022_X_hByk2-5je", "iclr_2022_X_hByk2-5je", "iclr_2022_X_hByk2-5je" ]
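Because review_ratings uses -1 as a sentinel for non-review posts, aggregate statistics have to filter it out first. A sketch using the ratings of the second record above:

```python
# review_ratings from the record above; -1 means "this post is not an
# official review and carries no rating".
review_ratings = [-1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6]

valid = [r for r in review_ratings if r != -1]
mean_rating = sum(valid) / len(valid)
```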
iclr_2022_PQQp7AJwz3
Particle Stochastic Dual Coordinate Ascent: Exponential convergent algorithm for mean field neural network optimization
We introduce Particle-SDCA, a gradient-based optimization algorithm for two-layer neural networks in the mean field regime that achieves exponential convergence rate in regularized empirical risk minimization. The proposed algorithm can be regarded as an infinite dimensional extension of Stochastic Dual Coordinate Asce...
Accept (Poster)
This is a solid paper and considers the problem of training a wide neural network with a single hidden layer. This can be framed as an optimization problem in the space of probability distributions with a suitable entropy regularization, where each atom in the distribution corresponds to a hidden neuron. The dual of th...
train
[ "nNPGk-txkK", "34S-he_wHGy", "YI45TZHIjk", "Ba0yyNaBulz", "tzQXgyZtPt", "q43fSOv9ieF", "WL_v_hk0zc", "oM_GXKLeKKX", "GHRygN7U8D", "2ymXKLc90mE", "ZdyggwKGG7H" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper considers optimization of MF two-layer neural nets (the infinite-width version with entropic regularization). Specifically the paper proposes to do dual coordinate ascent, which allows for exponential convergence rate, together with a particle approximation scheme. The paper shows that in the discrete-ti...
[ 6, -1, 8, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2022_PQQp7AJwz3", "Ba0yyNaBulz", "iclr_2022_PQQp7AJwz3", "q43fSOv9ieF", "iclr_2022_PQQp7AJwz3", "YI45TZHIjk", "nNPGk-txkK", "ZdyggwKGG7H", "2ymXKLc90mE", "iclr_2022_PQQp7AJwz3", "iclr_2022_PQQp7AJwz3" ]
iclr_2022_SaKO6z6Hl0c
Unsupervised Semantic Segmentation by Distilling Feature Correspondences
Unsupervised semantic segmentation aims to discover and localize semantically meaningful categories within image corpora without any form of annotation. To solve this task, algorithms must produce features for every pixel that are both semantically meaningful and compact enough to form distinct clusters. Unlike previou...
Accept (Poster)
The paper received two accept and two marginally accept recommendations. All reviewers find value in the proposed unsupervised semantic segmentation methodology (making self-supervised representation learning towards dense prediction tasks like segmentation or clustering without explicit manual supervision) and appreciat...
train
[ "jUm2vpMDYdx", "KGwDmMIwpR", "Xyv3skVFX8g", "svbxEn4OifZ", "BYd4CvWRkY", "Dp6tN7xMTZy", "JEB4VQF5nei", "WhfrKjHmedJ", "SPZVUQoJpBt", "2IMTyVtvhUk" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you to all the reviewers for the helpful comments and feedback. Please let us know if there is anything else that we can address prior to the review period closing. We appreciate your help to make this work better.", "In this paper, the author proposes an unsupervised semantic segmentation approach STEGO....
[ -1, 6, -1, -1, -1, -1, -1, 8, 6, 8 ]
[ -1, 2, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_SaKO6z6Hl0c", "iclr_2022_SaKO6z6Hl0c", "KGwDmMIwpR", "2IMTyVtvhUk", "SPZVUQoJpBt", "WhfrKjHmedJ", "iclr_2022_SaKO6z6Hl0c", "iclr_2022_SaKO6z6Hl0c", "iclr_2022_SaKO6z6Hl0c", "iclr_2022_SaKO6z6Hl0c" ]
iclr_2022_xP3cPq2hQC
Cross-Domain Imitation Learning via Optimal Transport
Cross-domain imitation learning studies how to leverage expert demonstrations of one agent to train an imitation agent with a different embodiment or morphology. Comparing trajectories and stationary distributions between the expert and imitation agents is challenging because they live on different systems that may not...
Accept (Poster)
All reviewers suggested acceptance of the paper based on that the paper addresses an important problem and presents and validates interesting ideas for approaching it. There are some concerns regarding limited experiments - I'd like to encourage the authors to make an effort to address these concerns and also a few ot...
train
[ "QzYzv906Qcv", "Xwi87_F9Dz0", "coaTafIGBl", "Y7MXL6zWmx9", "rcWXEh4NNJv", "O4iexOrcnRj", "cQIDwEwtuR", "Y0Q9ccgX5q4", "mAQYI74jZJn", "KwvPUai4SNc", "lGDbacyJv51", "QYsnXEkGxka", "zVNVnyoB57p", "SFjpluUIc9", "5it4_nsZE8", "36KKzDWw7x", "kXVR0DG0agn", "MBtcP97wYYV" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The coupling experiments as well as the experiments with less obviously connected environments are not yet complete, but we will make sure to add the results to the final version!", "This paper frames cross-domain imitation learning as an optimal transport problem using the Gromov-Wasserstein distance. This pro...
[ -1, 8, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "O4iexOrcnRj", "iclr_2022_xP3cPq2hQC", "lGDbacyJv51", "iclr_2022_xP3cPq2hQC", "36KKzDWw7x", "cQIDwEwtuR", "mAQYI74jZJn", "SFjpluUIc9", "KwvPUai4SNc", "MBtcP97wYYV", "kXVR0DG0agn", "Y7MXL6zWmx9", "MBtcP97wYYV", "Xwi87_F9Dz0", "kXVR0DG0agn", "Y7MXL6zWmx9", "iclr_2022_xP3cPq2hQC", "ic...
iclr_2022_9otKVlgrpZG
Multi-Task Processes
Neural Processes (NPs) consider a task as a function realized from a stochastic process and flexibly adapt to unseen tasks through inference on functions. However, naive NPs can model data from only a single stochastic process and are designed to infer each task independently. Since many real-world data represent a set...
Accept (Poster)
Three knowledgeable referees recommend Accept. Reviewer eyrZ's concerns have been addressed by the authors in the rebuttal, in my opinion. Therefore I recommend Accept. I ask the authors to 1) rename the title of their paper and their model to the more specific name Multi-task Neural Processes (MTNP). I agree with bot...
train
[ "Q5Boc-mdZOi", "Mwj-Gx3Ajw4", "RP7oq0G0TA", "3BYQyBQulsT", "bNMVj8WgWFB", "8mk7AMGCwfR", "0PRkCt3tTbK", "ihb1tCi0GQS", "1MDAF0UDwlS", "RjB8d5Mo3DB", "0H1boaL9X4", "AGG72wGNj5k", "oGzZGLzr81", "dkBPmWyBkqO", "wH_vfH-rrDS", "I1evkto8zU4", "rHLY6cJ8fNs", "liAf8Yhr1F", "L9ZQDzfOfl", ...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " I thank the authors for their response, and most of my concerns are addressed by the rebuttal. After reading the other reviews and the corresponding responses, I share the same impression with Reviewer eyrZ that several techniques used in the model lack motivations, e.g., attentive neural processes, learnable tas...
[ -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "rHLY6cJ8fNs", "3BYQyBQulsT", "3BYQyBQulsT", "BreRkLJ6Tgr", "8mk7AMGCwfR", "RjB8d5Mo3DB", "Fujor7fNmsU", "iclr_2022_9otKVlgrpZG", "LdRO0fD2oWG", "3BHxG1-jn1M", "A4SJuX0AJr", "1K2rxxA7TKA", "dkBPmWyBkqO", "yaQxzSOE43Z", "yaQxzSOE43Z", "yaQxzSOE43Z", "yaQxzSOE43Z", "BreRkLJ6Tgr", "...
iclr_2022_OXRZeMmOI7a
Topological Experience Replay
State-of-the-art deep Q-learning methods update Q-values using state transition tuples sampled from the experience replay buffer. This strategy often randomly samples or prioritizes data sampling based on measures such as the temporal difference (TD) error. Such sampling strategies can be inefficient at learning Q-func...
Accept (Poster)
A new sampling strategy for experience is proposed and compared with alternative sampling strategies. The main weakness of the paper is the limited applicability of the strategy as it only works well goal-oriented tasks, and stochasticity reduces the effectiveness. And within this setting, only good performance is show...
train
[ "lsT2_w9ibki", "uRlrISTGL7G", "wyUZk8mT6Rk", "LnYbUB196vb", "Gab5pw2N7LG", "WqwqiBnUzw", "K5fxk6uC75k", "mX2jxXnnc22", "1LNfdT34o5", "ZavkspE_Xv", "xnKOIdP-H3", "zfI_INi1x--", "CSfQp6Fk4t", "91NOZQ7eVzB", "kgf2v70bieW", "A0BUp3oQxuH", "sOzS546J8J", "0zUoX5rGyfo", "B4TUNAItL5Q", ...
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " I've gone over the authors' rebuttal, in fact we had a couple of iterations by now. I had been harsher than I should have been in my initial assessment, and I revised my score. Nevertheless, even after the back and forth, I still think the paper is not that exciting. I'm not comfortable with the somewhat limited ...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, 8, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "zfI_INi1x--", "kgf2v70bieW", "CSfQp6Fk4t", "CSfQp6Fk4t", "CSfQp6Fk4t", "CSfQp6Fk4t", "iclr_2022_OXRZeMmOI7a", "1LNfdT34o5", "xnKOIdP-H3", "iclr_2022_OXRZeMmOI7a", "0zUoX5rGyfo", "iclr_2022_OXRZeMmOI7a", "A0BUp3oQxuH", "A0BUp3oQxuH", "A0BUp3oQxuH", "zfI_INi1x--", "ZavkspE_Xv", "K5f...
iclr_2022_7MV6uLzOChW
Conditional Image Generation by Conditioning Variational Auto-Encoders
We present a conditional variational auto-encoder (VAE) which, to avoid the substantial cost of training from scratch, uses an architecture and training objective capable of leveraging a foundation model in the form of a pretrained unconditional VAE. To train the conditional VAE, we only need to train an artifact to pe...
Accept (Poster)
This paper presents a method to turn a pretrained unconditional VAE into a conditional VAE by training an encoder to predict the unconditional VAE latents given conditional input. On a variety of image tasks, the method is shown to perform competitively with GANs, yielding good sample quality and diversity, and resulti...
test
[ "jXprQaamGky", "qz-rtzzligF", "WZUPnaSAi0l", "76qw1pTXFgx", "yYvyu7VpQK", "RTEHBoDrIJQ", "oyeD6ghhJ6_", "LhW596bcbSp", "FupiXt7_ZB", "yth5CQxsQ-X", "gDWHR_p9GZT", "rm6cKe7u83", "BEc_lYZMHry", "DhGHeCP1Gn9", "VJ2L3zAPGD" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "In this paper, the authors focus on training conditional variational autoencoders. They propose an architecture and training objective which leverages pretraining an initial unconditional VAE. The approach effectively infers the latent variables of the original unconditional VAE given the new conditioning input. T...
[ 6, -1, -1, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, -1, 3, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_7MV6uLzOChW", "iclr_2022_7MV6uLzOChW", "oyeD6ghhJ6_", "iclr_2022_7MV6uLzOChW", "iclr_2022_7MV6uLzOChW", "LhW596bcbSp", "jXprQaamGky", "FupiXt7_ZB", "rm6cKe7u83", "VJ2L3zAPGD", "76qw1pTXFgx", "yYvyu7VpQK", "yYvyu7VpQK", "yYvyu7VpQK", "iclr_2022_7MV6uLzOChW" ]
iclr_2022__l_QjPGN5ye
The Boltzmann Policy Distribution: Accounting for Systematic Suboptimality in Human Models
Models of human behavior for prediction and collaboration tend to fall into two categories: ones that learn from large amounts of data via imitation learning, and ones that assume human behavior to be noisily-optimal for some reward function. The former are very useful, but only when it is possible to gather a lot of h...
Accept (Poster)
The paper presents a new approach to learning human behavior by observing a small number of interactions. To this end, it proposes a Bayesian learning framework where a Boltzmann-type prior over human policies, based on an available reward function, governs default behavior. The prior is updated by observing actual tra...
train
[ "MungH13s3Fk", "sNnIMRR_mko", "Kx43Qsk4xn", "LNyze1eKHTP", "Z3eXd0B74Lr", "N-5nO3R2BZE", "6mLxesB0B1S", "nfGYl5Oadkz", "wNu-1Ntrzxi", "hIu9kDh4hrk", "gyH7wCnLX4", "4e1SjqjDE9G" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The goal of this paper is to train models that imitate human behavior using limited samples. A human reward function is given but human behavior may be suboptimal according to this reward function and therefore we cannot use the standard assumption, Boltzmann rationality. The authors propose an imitation learning ...
[ 8, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2022__l_QjPGN5ye", "LNyze1eKHTP", "iclr_2022__l_QjPGN5ye", "Z3eXd0B74Lr", "N-5nO3R2BZE", "hIu9kDh4hrk", "Kx43Qsk4xn", "4e1SjqjDE9G", "gyH7wCnLX4", "MungH13s3Fk", "iclr_2022__l_QjPGN5ye", "iclr_2022__l_QjPGN5ye" ]
iclr_2022_5MLb3cLCJY
Adaptive Wavelet Transformer Network for 3D Shape Representation Learning
We present a novel method for 3D shape representation learning using multi-scale wavelet decomposition. Distinct from previous works that either decompose 3D shapes into complementary components at a single scale, or naively adopt up-/down-sampling to build hierarchies and treat all points or local regions equally, we ...
Accept (Poster)
The submission initially received mixed reviews. The authors presented convincing answers during the author response period, after which all reviewers recommended weak accepts. The AC has carefully read the reviews, responses, and discussions, and agreed with the reviewers' recommendation. Despite the marginal performa...
train
[ "KwfEgss4Ls", "D5MuW8Gu_HI", "xhUQpbMA3s7", "7rMqVMm11ox", "FsE5wIMzb0I", "QHhkuGQ8_k-", "Wf8SuACbBz", "iR3PsRkDRgb", "Sh629QvxQX", "H0BgWzqnH3j", "HHsxAA3tgB2", "jBixFE8mFQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a novel 3D point cloud representation learning framework. At the core of this method, is a lifting scheme inspired by wavelet decomposition. The proposed method roughly splits the input data in half at each stage, producing a down-sampled approximation C and detail d. Then C is further processe...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iclr_2022_5MLb3cLCJY", "jBixFE8mFQ", "jBixFE8mFQ", "HHsxAA3tgB2", "KwfEgss4Ls", "KwfEgss4Ls", "H0BgWzqnH3j", "H0BgWzqnH3j", "iclr_2022_5MLb3cLCJY", "iclr_2022_5MLb3cLCJY", "iclr_2022_5MLb3cLCJY", "iclr_2022_5MLb3cLCJY" ]
iclr_2022_l3SDgUh7qZO
SphereFace2: Binary Classification is All You Need for Deep Face Recognition
State-of-the-art deep face recognition methods are mostly trained with a softmax-based multi-class classification framework. Despite being popular and effective, these methods still have a few shortcomings that limit empirical performance. In this paper, we start by identifying the discrepancy between training and eval...
Accept (Spotlight)
All reviewers agree that the proposed SphereFace2 approach - training face recognition models by using multiple binary classification losses - is interesting and innovative. The reviewers agree that the paper is well written and are satisfied with the presented experimental study. The rebuttal addressed all additionall...
train
[ "8vO6sAHvhYd", "PVbqbeEdb9H", "akOde7jsoTi", "BFfat2_26Em", "TUj82Yd0wpd", "Ms3lkY5LkS", "QDkX3p_FR4D", "dnfXtMOeuz-", "S0oa9DLhGt4", "zIo0BfEVq1D", "_wVDY6JhEdV", "olXRLr-xr67", "vecbEfF9H0f", "u_kDknWCbo1", "fBfVQIkAhCD" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper has proposed SphereFace2, a new face recognition training scheme that applies a set of binary classification losses to each identities in the dataset, to learn better face embeddings. The proposed method has addressed some limitations of existing training schemes, such as discrepancy between the softmax...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_l3SDgUh7qZO", "akOde7jsoTi", "BFfat2_26Em", "vecbEfF9H0f", "_wVDY6JhEdV", "QDkX3p_FR4D", "dnfXtMOeuz-", "olXRLr-xr67", "8vO6sAHvhYd", "iclr_2022_l3SDgUh7qZO", "fBfVQIkAhCD", "u_kDknWCbo1", "8vO6sAHvhYd", "iclr_2022_l3SDgUh7qZO", "iclr_2022_l3SDgUh7qZO" ]
iclr_2022_dgxFTxuJ50e
Learnability of convolutional neural networks for infinite dimensional input via mixed and anisotropic smoothness
Among a wide range of success of deep learning, convolutional neural networks have been extensively utilized in several tasks such as speech recognition, image processing, and natural language processing, which require inputs with large dimensions. Several studies have investigated function estimation capability of dee...
Accept (Spotlight)
This work studies the approximation and estimation errors of using neural networks (NNs) to fit functions on infinite-dimensional inputs that admit smoothness constraints. By considering a certain notion of anisotropic smoothness, the authors show that convolutional neural networks avoid the curse of dimensionality. ...
train
[ "1krJpFnJOwR", "lGsBWiV8kR", "1k0Ko8WVSIp", "IeThswvoLb", "uPJyzaXayr6", "elGRF4kP4uO", "h5F3_d3pkF", "LIjm7OrQ-0e" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your suggestive comments. Please find our answer in the following. \n\n**1. Are the obtained rates also minimax optimal when $1 \\leq p < 2$?** \n\nThank you very much for noticing an important point. Indeed, we believe that it is *not* minimax optimal and it can be improved. More precisely, the t...
[ -1, -1, -1, -1, 6, 8, 8, 8 ]
[ -1, -1, -1, -1, 3, 3, 3, 5 ]
[ "elGRF4kP4uO", "uPJyzaXayr6", "LIjm7OrQ-0e", "h5F3_d3pkF", "iclr_2022_dgxFTxuJ50e", "iclr_2022_dgxFTxuJ50e", "iclr_2022_dgxFTxuJ50e", "iclr_2022_dgxFTxuJ50e" ]
iclr_2022_0EXmFzUn5I
Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting
Accurate prediction of the future given the past based on time series data is of paramount importance, since it opens the door for decision making and risk management ahead of time. In practice, the challenge is to build a flexible but parsimonious model that can capture a wide range of temporal dependencies. In this p...
Accept (Oral)
The authors propose a multi-resolution pyramidal attention mechanism to capture long-range dependencies in time series forecasting, achieving linear time and space complexity. The authors conduced an extensive set of experiments and ablation studies demonstrating that the proposed method consistently outperforms the s...
train
[ "JLoiiEDgTbL", "Gd1MadiCQ9E", "V5fdKXpgPgZ", "TV3bPGHR2F8", "CKJIoK4PE-P", "5yfE4drbghP", "2MzmUVRpdpq", "0rN6MM9aeo", "6U-JdDeL4CC", "-h6Cofx8qK6", "4vptmbgsdMr", "p37Ls0Brohq", "wiJbSXNtKRV", "2tGuVdp6Ypv", "2gvRi1JxKVS", "PPaNYRMu_eI", "IRoCU2928lZ", "M90iHKf04Ns" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Many thanks to Reviewer 17as for providing an impressively insightful pre-rebuttal review. Your detailed suggestions help us a lot in paper revision.\n\nWe'd also thank your dedication for carefully judging our feedback and raising the score! Your constructive suggestions are very helpful for us to improve the pa...
[ -1, -1, -1, -1, 8, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, -1, -1, 3, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "TV3bPGHR2F8", "6U-JdDeL4CC", "2MzmUVRpdpq", "4vptmbgsdMr", "iclr_2022_0EXmFzUn5I", "iclr_2022_0EXmFzUn5I", "5yfE4drbghP", "iclr_2022_0EXmFzUn5I", "2tGuVdp6Ypv", "5yfE4drbghP", "CKJIoK4PE-P", "5yfE4drbghP", "M90iHKf04Ns", "0rN6MM9aeo", "0rN6MM9aeo", "CKJIoK4PE-P", "CKJIoK4PE-P", "i...
iclr_2022_TZeArecH2Nf
Bridging Recommendation and Marketing via Recurrent Intensity Modeling
This paper studies some unexplored connections between personalized recommendation and marketing systems. Obviously, the two systems are different, in two main ways. Firstly, personalized item-recommendation (ItemRec) is user-centric, whereas marketing recommends the best user-state segments (UserRec) on behalf of its ...
Accept (Poster)
Between a reject, an accept, and a borderline-accept, this is truly a borderline paper, though I'd lean slightly on the side of accepting it. The most negative review raises issues of weak baselines, along with several more minor issues. The authors rebut this reasonably well, arguing several differences from the setti...
train
[ "nAlhS3U-mv8", "zQ7AZTjyJGg", "CHoanicn8E7", "5COFANM-498", "LMz6D5xtav", "kVWCDMUkYOx", "7shk84ddhz", "Q_pxMrqFqqs", "wepYKEfbS_-", "i1XzDYCyaj", "Y6uDn20QZD" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewer’s agreement with our contributions of intensity-based recommender models, choices of baselines, and experimental designs. We’d like to provide further clarification regarding diversity (and accuracy) as this is the only remaining concern.\n\nFirst of all, we showed that our RIM proposal...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "CHoanicn8E7", "iclr_2022_TZeArecH2Nf", "7shk84ddhz", "wepYKEfbS_-", "Y6uDn20QZD", "i1XzDYCyaj", "Q_pxMrqFqqs", "zQ7AZTjyJGg", "iclr_2022_TZeArecH2Nf", "iclr_2022_TZeArecH2Nf", "iclr_2022_TZeArecH2Nf" ]
iclr_2022_zf_Ll3HZWgy
How Much Can CLIP Benefit Vision-and-Language Tasks?
Most existing Vision-and-Language (V&L) models rely on pre-trained visual encoders, using a relatively small set of manually-annotated data (as compared to web-crawled data), to perceive the visual world. However, it has been observed that large-scale pretraining usually can result in better generalization performance,...
Accept (Poster)
Reviewers are in agreement that this work is a useful, clear, documentary piece of work that shows the utility of CLIP on a number of popular V+L tasks. There is a somewhat persistent concern that simply demonstrating that a stronger visual encoder leads to improvements downstream is not an insightful result on which ...
train
[ "QvjkDkZn-2", "HPkeV55znf6", "v8_ecERXcu1", "i8E-CQGmRZk", "vbBuZE42ZFs", "hcRqsg77YNt", "BV3SI5qREbY", "1DmrJVOqcOz", "2Td7wH2eG1C", "goW9aCwyBsi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors for answering my questions and providing the additional results.\n\nI have no doubt that CLIP can serve as a strong visual backbone for V&L applications. My concerns are more about the underlying reasons behind the improvements of CLIP. The new results on BUTD with V&L pretraining seem t...
[ -1, -1, 5, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, 5, -1, -1, -1, -1, 4, 4, 4 ]
[ "hcRqsg77YNt", "i8E-CQGmRZk", "iclr_2022_zf_Ll3HZWgy", "v8_ecERXcu1", "goW9aCwyBsi", "2Td7wH2eG1C", "1DmrJVOqcOz", "iclr_2022_zf_Ll3HZWgy", "iclr_2022_zf_Ll3HZWgy", "iclr_2022_zf_Ll3HZWgy" ]
iclr_2022_ChMLTGRjFcU
How many degrees of freedom do we need to train deep networks: a loss landscape perspective
A variety of recent works, spanning pruning, lottery tickets, and training within random subspaces, have shown that deep neural networks can be trained using far fewer degrees of freedom than the total number of parameters. We analyze this phenomenon for random subspaces by first examining the success probability of hi...
Accept (Poster)
This paper presents new insights for training on random subspaces of low dimension, with several theoretical and experimental contributions. This is a paper that would be interesting to many people doing research in deep learning, both from the theoretical and practical side.
train
[ "ii4Xjd25Lf4", "nzJqGQQqVY", "ceoqtToRs-E", "YGRdIE8ibfm", "am53iicVSWP", "iUyUiS-RJIT", "PXVN5mXmrtx", "gcsIxNigYWS", "q0SMpLxi8u", "lEGIQnXDdI_", "HwEMjr568PP" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response to each question. After the responses, I think the authors have clarified most questions and I think the paper explores some interesting theoretical directions, that may guide the understanding of Deep neural networks, hopefully for more practical settings, ReLU functions and ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "am53iicVSWP", "PXVN5mXmrtx", "iUyUiS-RJIT", "HwEMjr568PP", "lEGIQnXDdI_", "q0SMpLxi8u", "gcsIxNigYWS", "iclr_2022_ChMLTGRjFcU", "iclr_2022_ChMLTGRjFcU", "iclr_2022_ChMLTGRjFcU", "iclr_2022_ChMLTGRjFcU" ]
iclr_2022_GWQWAeE9EpB
DictFormer: Tiny Transformer with Shared Dictionary
We introduce DictFormer with the efficient shared dictionary to provide a compact, fast, and accurate transformer model. DictFormer significantly reduces the redundancy in the transformer's parameters by replacing the prior transformer's parameters with a compact, shared dictionary, few unshared coefficients, and indic...
Accept (Poster)
DictFormer is a method to reduce the redundancy in transformers so they can deployed on edge devices. In the method, a shared dictionary across layers and unshared coefficients are used in place of weight multiplications. The author proposed a l1 relaxation to train the non-differentiable objective to achieve both hig...
train
[ "HZ45cf3Ecs", "_Ah44QJvAs", "aUyFQkT0ty", "HV0zROvFP7I", "u9JjYojMMm", "02O_1kbtwLO", "XX6u2R7ivtX", "Ek0R2mkwJav", "XruOVcMK4YZ", "k41r-jspR1", "vBMaWM2bPz_", "ZE9U60vs6-M", "43Od4EhGldQ", "nqC9e76Z1hp", "slq_TOS5oY6", "nGCmZaiztJp", "G7wBchti2ze", "dlkVOVZ0gNc", "XMxIVQN2LI", ...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_rev...
[ "This paper proposes a compression method for Transformer-based encoder-decoder or language models.\nThe key idea of the proposed method is to decompose the standard parameters into a much smaller shared parameter matrix and independent parameters for each original matrix.\nThen, the method can approximately recove...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_GWQWAeE9EpB", "aUyFQkT0ty", "u9JjYojMMm", "a7tl_b85or5", "XX6u2R7ivtX", "k41r-jspR1", "Ek0R2mkwJav", "XruOVcMK4YZ", "X1FhRTTeLID", "efhib8jgs3X", "nqC9e76Z1hp", "XMxIVQN2LI", "nGCmZaiztJp", "1Yc64_37dnI", "HZ45cf3Ecs", "mzJsEcarbcn", "iclr_2022_GWQWAeE9EpB", "iclr_2022_G...
iclr_2022_qhC8mr2LEKq
CrossBeam: Learning to Search in Bottom-Up Program Synthesis
Many approaches to program synthesis perform a search within an enormous space of programs to find one that satisfies a given specification. Prior works have used neural models to guide combinatorial search algorithms, but such approaches still explore a huge portion of the search space and quickly become intractable a...
Accept (Poster)
This paper addresses the problem of program synthesis given input/output examples and a domain-specific language using a bottom-up approach. The paper proposes the use of a neural architecture that exploits the search context (all the programs considered so far and their execution results) to decide which program to ev...
train
[ "JT6oWWmqIVi", "9A2TIAVHiW-", "HFjWRq3kiIZ", "iBWaI5pRU6r", "7-r0uX86uCa", "SLWxQWDVay8", "aJ3NNwn_NpH", "HWRZ3Horeco", "gOS7Gv06jwz", "4zaB_kUyyg1", "n-nzA00Fft", "zoaHGRBkub9", "08FWLkYSlE" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and for clarifying all the details!", " With a budget of 50,000 candidates:\n* BUSTLE solves 15 / 38 of their new tasks, and 44 / 89 of the SyGuS tasks, or 59 / 127 tasks total.\n* CrossBeam solves 28.8 / 38 of their new tasks, and 66.6 / 89 of the SyGuS tasks, or 95.4 / 127 tasks tot...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "gOS7Gv06jwz", "HFjWRq3kiIZ", "SLWxQWDVay8", "iclr_2022_qhC8mr2LEKq", "iclr_2022_qhC8mr2LEKq", "iBWaI5pRU6r", "08FWLkYSlE", "08FWLkYSlE", "zoaHGRBkub9", "n-nzA00Fft", "iclr_2022_qhC8mr2LEKq", "iclr_2022_qhC8mr2LEKq", "iclr_2022_qhC8mr2LEKq" ]
iclr_2022_izj68lUcBpt
TAda! Temporally-Adaptive Convolutions for Video Understanding
Spatial convolutions are widely used in numerous deep video models. They fundamentally assume spatio-temporal invariance, i.e., using shared weights for every location in different frames. This work presents Temporally-Adaptive Convolutions (TAdaConv) for video understanding, which shows that adaptive weight calibration...
Accept (Poster)
The authors study the problem of video classification and propose a new module which promises to increase accuracy while keeping the computational overhead low. The main idea is not to share the spatial convolution weights over different time steps, but allow some modulation based on pooled local and global frame descr...
train
[ "XSejPKYyGYj", "Grm31Ae9R8j", "_zXAAqkPVa9", "_4d3swuZYD", "WogwGGozpWg", "s9Kc5IdUQdl", "gBaOIPZaHdF", "BZXDXmi8c4", "xum3_o8sXB", "7YrKpWabHcq", "S0WbihDF2YU", "i1U0bdo3Sg5", "FJvnlHqFouE", "agrcOQ6HrpR", "IaYlisQV1Ar", "5dE2LEIv_tH", "urq61qw8T4e", "B4vPWr1V4LI", "YjNA9BaHgA" ...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer jnMz,\n\nWe would like to thank you again for your time and effort in reviewing our manuscript. We have carefully revised our submission according to your insightful comments, and put in our best efforts to clarify the questions and concerns. It would be greatly appreciated if you could check our re...
[ -1, -1, -1, 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "YjNA9BaHgA", "WogwGGozpWg", "s9Kc5IdUQdl", "iclr_2022_izj68lUcBpt", "IaYlisQV1Ar", "urq61qw8T4e", "iclr_2022_izj68lUcBpt", "iclr_2022_izj68lUcBpt", "iclr_2022_izj68lUcBpt", "B4vPWr1V4LI", "YjNA9BaHgA", "YjNA9BaHgA", "YjNA9BaHgA", "_4d3swuZYD", "_4d3swuZYD", "gBaOIPZaHdF", "gBaOIPZaH...
iclr_2022_VimqQq-i_Q
What Do We Mean by Generalization in Federated Learning?
Federated learning data is drawn from a distribution of distributions: clients are drawn from a meta-distribution, and their data are drawn from local data distributions. Generalization studies in federated learning should separate performance gaps from unseen client data (out-of-sample gap) from performance gaps from ...
Accept (Poster)
This paper presents some insightful suggestions for researchers studying generalization in federated learning by separating two types of performance gaps between training and test performance, the participation gap (due to partial client participation) and the performance gap (due to data heterogeneity). They suggest t...
test
[ "ufo6VFdaksX", "yvrw-ndZQH", "xyABX5fy5v", "fBy8Dd8dbVy", "V3BynnOfjOY", "FMjkZPS8d2I", "Q9cRJbRD89", "F6SIZ5pVUVm", "CEThrQGP0Ko", "kyzQjR4FksQ", "H5fq4qYcdy", "0T6McVnXr7", "T0pDAzH1MXf", "HeW9thoBIS", "f36m06L2Oxs", "5C7VRuxiWk", "nbXXfEpuDoy", "H-Bburze0s1" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a framework for disentangling the performance gap in federated learning into out-of-sample gap and participation gap, which should serve as a better tool for explaining the model generalization performance. They also give a semantic synthesis strategy that enables realistic simulation to create ...
[ 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_VimqQq-i_Q", "iclr_2022_VimqQq-i_Q", "iclr_2022_VimqQq-i_Q", "Q9cRJbRD89", "Q9cRJbRD89", "Q9cRJbRD89", "0T6McVnXr7", "nbXXfEpuDoy", "xyABX5fy5v", "xyABX5fy5v", "xyABX5fy5v", "xyABX5fy5v", "H-Bburze0s1", "H-Bburze0s1", "ufo6VFdaksX", "ufo6VFdaksX", "iclr_2022_VimqQq-i_Q", ...
iclr_2022_vgqS1vkkCbE
Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning
Reinforcement learning can train policies that effectively perform complex tasks. However for long-horizon tasks, the performance of these methods degrades with horizon, often necessitating reasoning over and chaining lower-level skills. Hierarchical reinforcement learning aims to enable this by providing a bank of low...
Accept (Poster)
This paper is good but borderline. One reviewer increased the score during the discussions. However, no reviewer was strongly in favor, so this paper remains a borderline one, and it is up to the SAC to decide.
train
[ "PdT3MCjBOhA", "kBVcGm0AEXY", "QQUJ03efzgN", "4cVsCuzTGbG", "ZCcnu73ci6P", "vVxqSX6aRyh", "1wPM-9f7m94", "7g7HrGi_Iyl", "LDRZxNlnFJ5", "KMfmQQRNV4u", "yxfWt2qrlYA", "OtTqn8KoYsj", "L_sKzkRS2e_", "pwk1EoP2Wx3", "j93-4RDRkfQ", "QNIqBl6D8u" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper presents a novel state representation technique that is based on Value Functions (VFs). The core idea of the paper is to use VFs to construct a high-level space representation in an hierarchical reinforcement learning (RL) scenario. The contributions of the paper are:\n\n- Value Function Spaces (VFS): le...
[ 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_vgqS1vkkCbE", "iclr_2022_vgqS1vkkCbE", "iclr_2022_vgqS1vkkCbE", "LDRZxNlnFJ5", "QQUJ03efzgN", "QNIqBl6D8u", "kBVcGm0AEXY", "L_sKzkRS2e_", "yxfWt2qrlYA", "kBVcGm0AEXY", "OtTqn8KoYsj", "QQUJ03efzgN", "QNIqBl6D8u", "j93-4RDRkfQ", "PdT3MCjBOhA", "iclr_2022_vgqS1vkkCbE" ]
iclr_2022_P3Bh01hBYTH
X-model: Improving Data Efficiency in Deep Learning with A Minimax Model
To mitigate the burden of data labeling, we aim at improving data efficiency for both classification and regression setups in deep learning. However, the current focus is on classification problems, while little attention has been paid to deep regression, which usually requires more human effort for labeling. Further, due ...
Accept (Poster)
The SAC wrote a very good meta review and I just copy and paste it here. I completely agree with the SAC that the contribution of the paper is limited due to the similarity to MME and MCD. Hopefully adding data augmentation to MCD and providing empirical results on new tasks can shed some light for the community. ---------------...
train
[ "-n8fQLJeHhZ", "j4jW-I1hVMw", "AbwRSsBppUH", "MaFC25i-Cfx", "dVrTkiepbDX", "Z4g2qjvrMtY", "DJsDntenfCs", "aZjCUFksDVJ", "kXId6Dfkbm", "OmrLIMJRUrE" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes the so called Chi-model, which combines data augmentation with a two-headed network architecture to include model stochasticity. It specifically addresses data efficient learning in regression settings. ## Strong points\nGood performance on a wide range of benchmark datasets.\nEasy to read, exp...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_P3Bh01hBYTH", "DJsDntenfCs", "dVrTkiepbDX", "OmrLIMJRUrE", "Z4g2qjvrMtY", "-n8fQLJeHhZ", "kXId6Dfkbm", "iclr_2022_P3Bh01hBYTH", "iclr_2022_P3Bh01hBYTH", "iclr_2022_P3Bh01hBYTH" ]
iclr_2022_eBS-3YiaIL-
Analyzing and Improving the Optimization Landscape of Noise-Contrastive Estimation
Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models. It has been empirically observed that the choice of the noise distribution is crucial for NCE’s performance. However, such observation has never been made formal or quantitative. In fact, it is not ev...
Accept (Spotlight)
This contribution investigates and takes a step back on an important problem in recent ML, namely the impact of the noise distribution in density estimation using Noise Contrastive Estimation. The work offers both theoretical insights and convincing experiments. For these reasons, this work should be endorsed for publ...
train
[ "VdP6l-FtTV", "RYuMhdK0NB", "b8YWf6tONj", "rro5WF2GGg", "KhQj3ooLAKP", "5nw0n3Z-oWk", "sJjQ18zjJz6", "cMDa4LqXMQS", "diG2Au_ap4U", "lgwHW_khXt4", "WtcWmfeitTi", "QM_GEK9BzBb", "IwFMT43qgDY", "2tcvTY-nVuk", "QIohDD8XVjI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "TLDR: The authors theoretically study convergence properties of the Noise Contrastive Estimation (NCE) loss, identify issues that make its optimisation difficult in some practical scenarios, propose a method to resolve this and provide theoretical guarantees that their proposal works. \n\nLonger version: \n\nThe a...
[ 8, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 3, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2022_eBS-3YiaIL-", "diG2Au_ap4U", "KhQj3ooLAKP", "5nw0n3Z-oWk", "lgwHW_khXt4", "cMDa4LqXMQS", "iclr_2022_eBS-3YiaIL-", "WtcWmfeitTi", "IwFMT43qgDY", "2tcvTY-nVuk", "sJjQ18zjJz6", "QIohDD8XVjI", "VdP6l-FtTV", "iclr_2022_eBS-3YiaIL-", "iclr_2022_eBS-3YiaIL-" ]
iclr_2022_EXHG-A3jlM
Efficient Token Mixing for Transformers via Adaptive Fourier Neural Operators
Vision transformers have delivered tremendous success in representation learning. This is primarily due to effective token mixing through self attention. However, this scales quadratically with the number of pixels, which becomes infeasible for high-resolution inputs. To cope with this challenge, we propose Adaptive Fo...
Accept (Poster)
Overall, this paper receives positive reviews. The reviewers find the technical novelty and contributions significant enough for acceptance at this conference. The authors' rebuttal helps address some issues. The area chair agrees with the reviewers and recommends it be accepted at this conference.
train
[ "IOWW44el1X", "WkIKfrvKTix", "38LKkJHINMj", "e1x8q2shRvr", "Klq_POkGFJ", "ogTrM22nngO", "i5lz9WUlBuh" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer" ]
[ " After reading the reviews, I don't think it is necessary to compare with contemporaneous networks (some of the mentioned ones were published after the submission deadline such as SWIN).\n\nI maintain my rating. I am not entirely convinced about novelty, but I think this paper, nevertheless, can be interesting to ...
[ -1, -1, 8, -1, -1, 6, 6 ]
[ -1, -1, 3, -1, -1, 4, 4 ]
[ "Klq_POkGFJ", "i5lz9WUlBuh", "iclr_2022_EXHG-A3jlM", "38LKkJHINMj", "ogTrM22nngO", "iclr_2022_EXHG-A3jlM", "iclr_2022_EXHG-A3jlM" ]
iclr_2022_rwE8SshAlxw
Unsupervised Discovery of Object Radiance Fields
We study the problem of inferring an object-centric scene representation from a single image, aiming to derive a representation that explains the image formation process, captures the scene's 3D nature, and is learned without supervision. Most existing methods on scene decomposition lack one or more of these characteri...
Accept (Poster)
This paper develops a method for decomposing scenes into object-specific neural radiance fields. After the discussion phase, two reviewers support acceptance. Empirical results on multiple synthetic datasets and benchmarks appear convincing; the rebuttal also added an initial demonstration of generalization to real i...
train
[ "t7AucFn7qO-", "dr2lrcvHOAv", "hRRR-ywWmNC", "w2pdY6gm60E", "NhanIvTpGQD", "vsB7bLBiSFT", "89RM2brYGPR", "SP8X4WYkRIK", "44vKwB3HdH", "m1QjorC98o", "yXaytgFizll", "qZY0RKnsnLp", "cD0wWdGvyMO" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer tkSe,\n\nWe would like to thank you again for your constructive review. We are happy to see that our revision has addressed your concerns and the real-world results are helpful. We sincerely appreciate your suggestions.\n\nThank you!\n\nPaper396 Authors", " Thanks for the rebuttal and in particula...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "dr2lrcvHOAv", "m1QjorC98o", "iclr_2022_rwE8SshAlxw", "vsB7bLBiSFT", "qZY0RKnsnLp", "44vKwB3HdH", "qZY0RKnsnLp", "iclr_2022_rwE8SshAlxw", "cD0wWdGvyMO", "hRRR-ywWmNC", "qZY0RKnsnLp", "iclr_2022_rwE8SshAlxw", "iclr_2022_rwE8SshAlxw" ]
iclr_2022_St6eyiTEHnG
Consistent Counterfactuals for Deep Models
Counterfactual examples are one of the most commonly-cited methods for explaining the predictions of machine learning models in key areas such as finance and medical diagnosis. Counterfactuals are often discussed under the assumption that the model on which they will be used is static, but in deployment models may be p...
Accept (Poster)
Most of the discussion centered around whether the underlying question in the literature is set up correctly in terms of its relationship to causality, as the question being asked is one of an intervention. The underlying literature makes an attempt at not including things that can't be intervened on, like age, but the se...
train
[ "-yYpBEBfkTh", "0jL78_P3PKQ", "yuqn0QsVHy8", "jqAzUZRLn7e", "Bw5OFfneXL4", "ZkYA7ruCcc", "eYEJMIc-0AB", "hv1k5Tx1oY5", "bHeqdzuiuo1", "C0wrgK6TKcw", "dQx2XIqOlyB", "cZMgYwnpqDL", "9_1mSFPviG", "Hk2WKpooyo1", "sdxzGa-lFGY", "Bl6RRWgw9Ku", "Bj1z64TClhh", "SJ1OxQl6XS", "Lky1hdkvnaZ"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ " Thanks for the authors' reply. Now I learned in counterfactual explanation the counterfactual is something completely different from what I was assuming. This is actually quite different from what I read in [1,2], but I can accept this difference.\n\nIt is also great to learn from your explanation on the relation...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "sdxzGa-lFGY", "iclr_2022_St6eyiTEHnG", "Bw5OFfneXL4", "hv1k5Tx1oY5", "eYEJMIc-0AB", "dQx2XIqOlyB", "9_1mSFPviG", "bHeqdzuiuo1", "C0wrgK6TKcw", "cZMgYwnpqDL", "Hk2WKpooyo1", "Lky1hdkvnaZ", "0jL78_P3PKQ", "SJ1OxQl6XS", "Bl6RRWgw9Ku", "Bj1z64TClhh", "iclr_2022_St6eyiTEHnG", "iclr_202...
iclr_2022_HfUyCRBeQc
Selective Ensembles for Consistent Predictions
Recent work has shown that models trained to the same objective, and which achieve similar measures of accuracy on consistent test data, may nonetheless behave very differently on individual predictions. This inconsistency is undesirable in high-stakes contexts, such as medical diagnosis and finance. We show that this ...
Accept (Poster)
This is a well-done piece of work that combines a few ideas to provide means to identify problematic cases and indicate this when classifying. It has raised doubts about the applicability, though I can see that an abstention rule can have multiple uses. While the work seems to be well done, it has not largely excited the committee...
test
[ "VBAgNwqwilE", "0HXty3akaJ9", "yIsqsn8GhB8", "zrDbRzf_C7g", "2C2KMOmh7lG", "BbxJiSvVAU2", "zT9wocobrES", "MlDL0d7AZYP", "rF9uEX3zgz", "gFs5JUFuWG6", "MC6ELo3N-ES", "JczJmMh_osN", "OSs_ZggL5uR" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In the paper the authors proposed a selective ensemble whose constituents are classifiers with reject (abstention) option. They theoretically exposed the problem that differentiable models yielding identical predictions might remotely share identical gradients, biasing the feature attribution based on (integrated)...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 4 ]
[ "iclr_2022_HfUyCRBeQc", "iclr_2022_HfUyCRBeQc", "rF9uEX3zgz", "2C2KMOmh7lG", "BbxJiSvVAU2", "JczJmMh_osN", "MC6ELo3N-ES", "OSs_ZggL5uR", "gFs5JUFuWG6", "VBAgNwqwilE", "iclr_2022_HfUyCRBeQc", "iclr_2022_HfUyCRBeQc", "iclr_2022_HfUyCRBeQc" ]
iclr_2022_w60btE_8T2m
Spanning Tree-based Graph Generation for Molecules
In this paper, we explore the problem of generating molecules using deep neural networks, which has recently gained much interest in chemistry. To this end, we propose a spanning tree-based graph generation (STGG) framework based on formulating molecular graph generation as a construction of a spanning tree and the res...
Accept (Spotlight)
This paper proposes a spanning tree-based graph generation framework for molecular graph generation, which is an interesting problem. The tree-based approach is efficient and relatively effective in molecular graph generation tasks, and the empirical results are convincing. There were some concerns during the initial r...
val
[ "xC9L1_4ZkL", "fHM9gFo9Jxr", "dI5tzuq3h5W", "PlfU_X5dqb-", "BwWOAIzQIiU", "aira9yammqk", "i2g4ND0s6QI", "LL78ZR9ZCjG", "VlLOO2YFZ8I", "bZ8j_naw42l", "weOuv3k0qOa", "rTv8SrwtNTl", "XX-fjHEdtQr", "eNoYF117reE", "pMgVDSJbPF", "q4pQWx36VtK", "-YehbZ1IdrZ", "QqOJUGltsB3" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_rev...
[ " I have read your response and I will keep my original score.", " Thank you very much for the positive response. We think your response were very helpful to improve your paper, and we are very happy to hear that your concerns are addressed! ", "This paper presents a method to construct a molecular graph, which...
[ -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "pMgVDSJbPF", "PlfU_X5dqb-", "iclr_2022_w60btE_8T2m", "BwWOAIzQIiU", "aira9yammqk", "i2g4ND0s6QI", "LL78ZR9ZCjG", "VlLOO2YFZ8I", "weOuv3k0qOa", "dI5tzuq3h5W", "bZ8j_naw42l", "dI5tzuq3h5W", "QqOJUGltsB3", "-YehbZ1IdrZ", "q4pQWx36VtK", "iclr_2022_w60btE_8T2m", "iclr_2022_w60btE_8T2m", ...
iclr_2022_lL3lnMbR4WU
Open-vocabulary Object Detection via Vision and Language Knowledge Distillation
We aim at advancing open-vocabulary object detection, which detects objects described by arbitrary text inputs. The fundamental challenge is the availability of training data. It is costly to further scale up the number of classes contained in existing object detection datasets. To overcome this challenge, we propose ...
Accept (Poster)
The aim of this work is to produce an open-vocabulary detector. The approach is via knowledge distillation from existing large-scale V+L models, and the evaluation is based on novel classes with LVIS. The reviewers were generally happy with the work (approach and results), but there were substantial points of clarifi...
train
[ "vvdZeRVVgRE", "3ZfMBBJV_w5", "ESnWUUiGr_4", "sDfJ8teZFwt", "n_8bRv-yYXj", "hPGunG0YQHz", "_qIa2k3nINF", "E1LgbQeLcIF", "mKl-YiwoL5", "OgoacXcYMkd", "OzzX3N-3vQ", "V7p2Xajq5th" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed suggestion and we will certainly consider them in the final version of our paper!", " Thank you for your suggestion!\nWe will include a figure and more explanation for ViLD-ensemble in the appendix, for the final version.\n\nBelow we plot the model architecture and training objectives f...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 5, 4 ]
[ "sDfJ8teZFwt", "hPGunG0YQHz", "_qIa2k3nINF", "E1LgbQeLcIF", "iclr_2022_lL3lnMbR4WU", "OgoacXcYMkd", "OzzX3N-3vQ", "V7p2Xajq5th", "n_8bRv-yYXj", "n_8bRv-yYXj", "iclr_2022_lL3lnMbR4WU", "iclr_2022_lL3lnMbR4WU" ]
iclr_2022_S874XAIpkR-
The Essential Elements of Offline RL via Supervised Learning
Recent work has shown that supervised learning alone, without temporal difference (TD) learning, can be remarkably effective for offline RL. When does this hold true, and which algorithmic components are necessary? Through extensive experiments, we boil supervised learning for offline RL down to its essential elements....
Accept (Poster)
The paper studies the behavior cloning based strategies of offline RL algorithms in different type of environments and reports that performance primarily depends on model size and regularization. The results contradict some of the earlier claims, and the authors conjecture that model size and regularization characteris...
test
[ "HWolrJWWw8s", "jLfFlbkCZTZ", "LPs-5pxTAe", "YUfT0u-kKF0", "DjuZXGBYhIJ", "zYfynOsMbj", "zUiWknjViNP", "LUq9G8u_eIT", "BWFnOxrDJJB", "jrnFYE0n5Lj", "KAupfA5yBGw", "jK0I-WeZwoA", "qT2p-Rld9Mi", "hnkL4S9w94h", "gnguRYPKQIz", "2c3n58WHR19", "9hgN1672Pd", "VPtzTfzo6P" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " **“% BC somehow is not emphasized in much of literature”**\n\nNote that %BC is also competitive with Decision Transformer, and indeed the evaluation in the Decision Transformer paper (see Table 3) also finds that %BC is similar to DT in performance. So while the point about %BC being competitive is well-taken, an...
[ -1, 5, -1, -1, -1, 8, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 2 ]
[ "LPs-5pxTAe", "iclr_2022_S874XAIpkR-", "zUiWknjViNP", "hnkL4S9w94h", "qT2p-Rld9Mi", "iclr_2022_S874XAIpkR-", "jLfFlbkCZTZ", "zYfynOsMbj", "VPtzTfzo6P", "jK0I-WeZwoA", "iclr_2022_S874XAIpkR-", "2c3n58WHR19", "zYfynOsMbj", "VPtzTfzo6P", "jLfFlbkCZTZ", "KAupfA5yBGw", "iclr_2022_S874XAIp...
iclr_2022_gFDFKC4gHL4
How Did the Model Change? Efficiently Assessing Machine Learning API Shifts
ML prediction APIs from providers like Amazon and Google have made it simple to use ML in applications. A challenge for users is that such APIs continuously change over time as the providers update models, and changes can happen silently without users knowing. It is thus important to monitor when and how much the MLAPI...
Accept (Poster)
The paper studies real-world ML APIs' performance shifts due to API updates/retraining and proposes a framework to efficiently estimate those shifts. The problem is very important and the presented approach is definitely novel. My concern is about limited novelty of the theoretical analysis and weak experimental evaluati...
test
[ "VDdYd34TFW8", "Evl_0rm-4qm", "XUaUFTrBSBF", "M2h8PxAf3LX", "FJishLLrd14", "U10qVTEe-YU", "qR2NxCpxBdj", "GrjckE359Ya", "qNsUWQ3Z4Ui", "pdOi6BPRkgR", "12Wbc1A39M-", "jfeKk7tekNk" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to authors for their answers to my questions and clarifications.", " The authors' response answered most of my questions and concerns. The added experiment comparing the method to other baselines is a nice improvement. I recommend adding the discussion around *[Necessity of difficulty estimator]* to the ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "FJishLLrd14", "qR2NxCpxBdj", "M2h8PxAf3LX", "jfeKk7tekNk", "12Wbc1A39M-", "pdOi6BPRkgR", "qNsUWQ3Z4Ui", "iclr_2022_gFDFKC4gHL4", "iclr_2022_gFDFKC4gHL4", "iclr_2022_gFDFKC4gHL4", "iclr_2022_gFDFKC4gHL4", "iclr_2022_gFDFKC4gHL4" ]
iclr_2022_sX3XaHwotOg
Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators
We present a new framework AMOS that pretrains text encoders with an Adversarial learning curriculum via a Mixture Of Signals from multiple auxiliary generators. Following ELECTRA-style pretraining, the main encoder is trained as a discriminator to detect replaced tokens generated by auxiliary masked language models (M...
Accept (Poster)
This paper received six reviews, consisting of three 8s, two 6s, and one 3. The reviewers generally felt that the proposed ELECTRA-like pretraining provided fairly significant downstream improvements. Additional ablations were provided during the author response period and other author responses were sufficient to cau...
train
[ "fHTcXwT5HQj", "Lk_uFfuURCW", "5O2_jHYvhOO", "icQeetpBFKu", "HTl_tNuJON", "L4BePp_nVS7", "IHqUQB1wkBt", "wH8rsu6-ta4", "BLEHWSUKJOm", "IbxMg1dDjlB", "gMMn3dSahTG", "CESpTkQmP8", "IO4_1xHNndc", "AQ8OJ9KWfZH", "5tySqbWewIT", "DSdJGf3Smoa", "_oNS3BWOY7Q", "2IGEUN4iqYp", "sDg_nlns7FH...
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " Thanks for the detailed explanation. All my questions have been addressed.", " Dear Reviewer sJ99,\n\nThanks again for your review. We have answered your questions by updating our manuscript to incorporate new results (_e.g._, the performance study with more MLM heads). As the discussion period is ending soon, ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3, 5 ]
[ "DSdJGf3Smoa", "TPhSWWrz48Y", "_JqL9yzIjJu", "w4K1PCPnuW", "CESpTkQmP8", "CESpTkQmP8", "gMMn3dSahTG", "BLEHWSUKJOm", "sDg_nlns7FH", "iclr_2022_sX3XaHwotOg", "AQ8OJ9KWfZH", "IO4_1xHNndc", "m7qMZyP2Tq", "IbxMg1dDjlB", "TPhSWWrz48Y", "_JqL9yzIjJu", "w4K1PCPnuW", "iclr_2022_sX3XaHwotOg...
iclr_2022_J_2xNmVcY4
Optimizing Neural Networks with Gradient Lexicase Selection
One potential drawback of using aggregated performance measurement in machine learning is that models may learn to accept higher errors on some training cases as compromises for lower errors on others, with the lower errors actually being instances of overfitting. This can lead both to stagnation at local optima and to...
Accept (Poster)
The authors propose bringing lexicase selection from evolutionary computation and applying it to the optimisation of gradient descent. This is done by training a set of p networks and using their performance to select this set of p networks as training progresses on random subsets of the training data. The reviewers fe...
train
[ "hmuHeznfY5l", "j-ys71hfJW5", "JMmW6r66j-L", "1AGbyN5tFS", "3I_TyXI6my9", "9l1Y_mVXk2-", "qy7P5jBZEe0", "c5A8vGo6kaI", "6nII5i8M_1K", "bB2VmgWXfcn", "Cvp5K-u7Ayg", "YQ-7t2As9z1", "JgRxAAB_rN_" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents a neural network training strategy that leverages lexicase selection and evolutionary algorithms. Specifically, the training strategy involves two layers of loops to iteratively update model&select candidates, which searches the best optimization direction through greedy search. Experiments are...
[ 6, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_J_2xNmVcY4", "6nII5i8M_1K", "iclr_2022_J_2xNmVcY4", "qy7P5jBZEe0", "c5A8vGo6kaI", "iclr_2022_J_2xNmVcY4", "Cvp5K-u7Ayg", "bB2VmgWXfcn", "hmuHeznfY5l", "JMmW6r66j-L", "9l1Y_mVXk2-", "JgRxAAB_rN_", "iclr_2022_J_2xNmVcY4" ]
iclr_2022_VNqaB1g9393
Decoupled Adaptation for Cross-Domain Object Detection
Cross-domain object detection is more challenging than object classification since multiple objects exist in an image and the location of each object is unknown in the unlabeled target domain. As a result, when we adapt features of different objects to enhance the transferability of the detector, the features of the fo...
Accept (Poster)
This paper decouples the adversarial training of a domain adaptation model with the detector learning process, and is able to disentangle the features of foreground and background when performing adaptation. State of the art results on four different domains/tasks are presented with significant improvement. Reviewers ...
train
[ "98metX_5A9a", "rD1-CiygAo3", "NTfgrDVznXj", "_Ybi6C0XA_F", "NNC0Btu2PDY", "GnNGw4P37Qz", "XrOL1urHA54", "o-N2RaHj6iD", "AKpPVhA1QU9", "iByF7Xvd1Eg", "qKJB8DgWHvK", "h9uKndtNcY", "PEHNSlmfDb", "b3aFO6RdLoe", "GvnJtCv7g66", "rYDMAoEQuy8", "Rfd62gdOCy6" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Paper addresses the problem of cross-domain object detection. The formed source domain detectors (in the form of Faster R-CNN), learned in a supervised manner, are adopted to perform well on the target domain where no annotations are available. The key to the approach is separate adaptation of the classification h...
[ 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_VNqaB1g9393", "NTfgrDVznXj", "b3aFO6RdLoe", "GnNGw4P37Qz", "iclr_2022_VNqaB1g9393", "XrOL1urHA54", "o-N2RaHj6iD", "qKJB8DgWHvK", "GvnJtCv7g66", "iclr_2022_VNqaB1g9393", "NNC0Btu2PDY", "Rfd62gdOCy6", "rYDMAoEQuy8", "AKpPVhA1QU9", "98metX_5A9a", "iclr_2022_VNqaB1g9393", "icl...
iclr_2022_gSdSJoenupI
PolyLoss: A Polynomial Expansion Perspective of Classification Loss Functions
Cross-entropy loss and focal loss are the most common choices when training deep neural networks for classification problems. Generally speaking, however, a good loss function can take on much more flexible forms, and should be tailored for different tasks and datasets. Motivated by how functions can be approximated vi...
Accept (Poster)
Realizing the fact that cross-entropy loss and focal loss are widely used for training deep learning models but mathematical understanding and exploration for such losses are lacking, the authors propose a simple framework named PolyLoss to express the loss function as a linear combination of polynomial functions. In...
train
[ "a4ygvuMlcHg", "loWRJqKF4Cl", "gunZi8KqrT", "POHqOq03mS_", "nwrcLUm4xcR", "133fsHzSjds", "hTXcBIyl4mm", "NEecqLeJIB1", "xstjhKjZFF", "BjSCSdS0l70", "FrkTwUOHU05", "rDpGGkRMmz_", "3NTyhRDg4k" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their thoughtful comments and feedbacks. We are excited that all reviewers see the paper favorably. In the rebuttal revision, the major text changes are marked in blue, including:\n\n1. Emphasizing Poly-1 is the simplest loss formulation in the PolyLoss framework in **Table 1** (Reviewe...
[ -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 2, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2022_gSdSJoenupI", "iclr_2022_gSdSJoenupI", "loWRJqKF4Cl", "hTXcBIyl4mm", "FrkTwUOHU05", "iclr_2022_gSdSJoenupI", "nwrcLUm4xcR", "3NTyhRDg4k", "gunZi8KqrT", "rDpGGkRMmz_", "133fsHzSjds", "iclr_2022_gSdSJoenupI", "iclr_2022_gSdSJoenupI" ]
iclr_2022_cuvga_CiVND
No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models
Recent research has shown the existence of significant redundancy in large Transformer models. One can prune the redundant parameters without significantly sacrificing the generalization performance. However, we question whether the redundant parameters could have contributed more if they were properly trained. To answ...
Accept (Poster)
The paper observes that the number of redundant parameters is a function of the training procedure and proposes a training strategy that encourages all parameters in the model to be trained sufficiently and become useful. The method adaptively adjusts the learning rate for each individual parameter according to its sen...
train
[ "y5LwnGrLG3Q", "JtBk4lsTE8l", "s_TgAt0yKLQ", "f3folLGCpCs", "p2OTEPVLFsu", "tyaR-2beU64", "--qTFCQKsQr", "qV-wX3wFx1H", "wSrrmW5kM6d", "wCQ3Ftc1r2W" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the reply, which has cleared my questions about Table 1 and Figure 5. But my concern regarding the model pruning and compression remains. According to Figure 3 (the bottom row), the SAGE-trained model is more susceptible to parameter pruning than the baseline model. Besides, the comparison on MNLI-m/mm...
[ -1, -1, -1, -1, -1, -1, 6, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "tyaR-2beU64", "s_TgAt0yKLQ", "wCQ3Ftc1r2W", "wSrrmW5kM6d", "qV-wX3wFx1H", "--qTFCQKsQr", "iclr_2022_cuvga_CiVND", "iclr_2022_cuvga_CiVND", "iclr_2022_cuvga_CiVND", "iclr_2022_cuvga_CiVND" ]
iclr_2022_28ib9tf6zhr
Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?
Vision transformers (ViTs) have recently set off a new wave in neural architecture design thanks to their record-breaking performance in various vision tasks. In parallel, to fulfill the goal of deploying ViTs into real-world vision applications, their robustness against potential malicious attacks has gained increasin...
Accept (Poster)
This paper provides an interesting study on the adversarial robustness comparisons between ViTs and CNNs, and successfully challenges the previous belief that ViTs are always more robust than CNNs on defending against adversarial attacks. Specifically, as revealed in this paper, when the attacker considers the attentio...
train
[ "ePmKFWA5IVn", "4RsBSmcCN5", "D0RuKt1HTnp", "zqRqqbxrV94", "bXickuJuuoE", "cUIm8ZukNRz", "V4WQK5PXPdC", "G8iFgNsr1hB", "-1nnIDQWRkj", "sxBtejLVEcp", "sisKbxTE49h", "9AHBGrY-iE", "EqwWJ9DXDNI", "PZlsbGG9r8", "Ifkzf5oy2Tc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for the new experiments on adversarial training and the response, which are impressive to me. I would support acceptance. In addition, I’m hoping to see experimental details (e.g., hyperparameters) for adversarial training to be included in the next revision.\n\n", " Thank you for the resp...
[ -1, -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "sxBtejLVEcp", "-1nnIDQWRkj", "iclr_2022_28ib9tf6zhr", "G8iFgNsr1hB", "cUIm8ZukNRz", "EqwWJ9DXDNI", "iclr_2022_28ib9tf6zhr", "D0RuKt1HTnp", "Ifkzf5oy2Tc", "PZlsbGG9r8", "V4WQK5PXPdC", "G8iFgNsr1hB", "sisKbxTE49h", "iclr_2022_28ib9tf6zhr", "iclr_2022_28ib9tf6zhr" ]
iclr_2022_gPvB4pdu_Z
Compositional Training for End-to-End Deep AUC Maximization
Recently, deep AUC maximization (DAM) has achieved great success in different domains (e.g., medical image classification). However, the end-to-end training for deep AUC maximization still remains a challenging problem. Previous studies employ an ad-hoc two-stage approach that first trains the network by optimizing a ...
Accept (Spotlight)
I recommend this paper to be accepted. All reviewers are in agreement that this paper is above the bar.
train
[ "TVwh_v459tk", "agoaTBa7VNA", "-WXxOpbj1nI", "dj8M2pPnl2t", "Al9WyV8CG4O", "XSMsw3GPPX2", "s0na1MTkoYu", "_lcOuB7Uu_R" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear all reviewers,\n\nThank you for reading our responses! We have also updated our manuscript to address the concerns raised by all reviewers. The changes made in main section and appendix in the revised draft are marked by red color. Please let us know if you have any further questions. Thank you!\n\nAuthors\n...
[ -1, -1, -1, -1, -1, 6, 8, 8 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "iclr_2022_gPvB4pdu_Z", "_lcOuB7Uu_R", "s0na1MTkoYu", "XSMsw3GPPX2", "XSMsw3GPPX2", "iclr_2022_gPvB4pdu_Z", "iclr_2022_gPvB4pdu_Z", "iclr_2022_gPvB4pdu_Z" ]
iclr_2022_Zq2G_VTV53T
FastSHAP: Real-Time Shapley Value Estimation
Although Shapley values are theoretically appealing for explaining black-box models, they are costly to calculate and thus impractical in settings that involve large, high-dimensional models. To remedy this issue, we introduce FastSHAP, a new method for estimating Shapley values in a single forward pass using a learned...
Accept (Poster)
The reviewers agree that the paper introduces an interesting approach for estimating Shaley values in real run-time. The effectiveness of the method is well demonstrated across different tasks/datasets.
train
[ "LFMF94Ar71", "KJCOFmCSZ6Y", "y38ifba213r", "1srtBnUyD0Q", "Nz2DZl43CO", "gWf-x97WRam", "5QBm_4XBTug", "5SkFNj12sxs", "tpYjr0NFkOr", "Y66TvC6DfMs", "uRbQANQFYdA", "rJSBQNrpPQ", "jEknTT0NaZe", "t3DDvzWHr3r", "Rz8YTXUqe4X" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers,\n\nThank you very much for your valuable feedback that has helped us improve this work. We would also like to thank reviewers GNuc and zqFs for reading our rebuttal and updating their reviews. For the other two reviewers, wDJz and Asf6, we hope that we have addressed any concerns you may have had....
[ -1, 5, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 2, 4 ]
[ "iclr_2022_Zq2G_VTV53T", "iclr_2022_Zq2G_VTV53T", "t3DDvzWHr3r", "5QBm_4XBTug", "KJCOFmCSZ6Y", "Rz8YTXUqe4X", "tpYjr0NFkOr", "iclr_2022_Zq2G_VTV53T", "5SkFNj12sxs", "Rz8YTXUqe4X", "KJCOFmCSZ6Y", "t3DDvzWHr3r", "iclr_2022_Zq2G_VTV53T", "iclr_2022_Zq2G_VTV53T", "iclr_2022_Zq2G_VTV53T" ]
iclr_2022_IcUWShptD7d
Monotonic Differentiable Sorting Networks
Differentiable sorting algorithms allow training with sorting and ranking supervision, where only the ordering or ranking of samples is known. Various methods have been proposed to address this challenge, ranging from optimal transport-based differentiable Sinkhorn sorting algorithms to making classic sorting networks ...
Accept (Poster)
This submission presents an interesting contribution on differentiable sorting, providing an analysis of monotonicity for these operations. The reviewers overall argue for acceptance.
train
[ "rontKBs3Uca", "6pgViKlcGY", "t8nRb2AgghM", "Oq1tSiUkSry", "EVMQdURk2cO", "YILX_9hmF0H", "vlzXARQj5en", "ji3nedgI_40", "16slpYaOnyI", "QW-_nZP0VBN", "LET1hP5rBR", "y1M-Ykfvjqz", "uuKDCCaZRJ2", "x5EEbcxgSJ" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper introduces monotic differentiable sinkhorn networks. Authors argue that mononicity is a desired property because it leads to better bounds in errors. Authors show their proposed approach beats the state of the art on sorting MNIST and SVHN digits This is a nice contribution, the paper is well-written, t...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "iclr_2022_IcUWShptD7d", "QW-_nZP0VBN", "YILX_9hmF0H", "EVMQdURk2cO", "16slpYaOnyI", "ji3nedgI_40", "iclr_2022_IcUWShptD7d", "x5EEbcxgSJ", "uuKDCCaZRJ2", "rontKBs3Uca", "y1M-Ykfvjqz", "iclr_2022_IcUWShptD7d", "iclr_2022_IcUWShptD7d", "iclr_2022_IcUWShptD7d" ]
iclr_2022_ZOcX-eybqoL
Generalisation in Lifelong Reinforcement Learning through Logical Composition
We leverage logical composition in reinforcement learning to create a framework that enables an agent to autonomously determine whether a new task can be immediately solved using its existing abilities, or whether a task-specific skill should be learned. In the latter case, the proposed algorithm also enables the agent...
Accept (Poster)
I thank the authors for their submission and active participation in the discussions. This papers is borderline. On the positive side, reviewers emphasized this is a well written [ovqB,1zPe] and sound paper [BUDa] with good theoretical [td5N,ovqB,1zPe] and empirical [BUDa,td5N,ovqB] results. On the negative side, revie...
train
[ "Wp6e_D1HB1_", "DBaK6-KALyn", "7PO4xbvZZ8S", "7djVIarNegN", "GevRiME7K4_", "foPnUQYmRDw", "IcwCuGAVTB1", "stECuCatQfK", "l4YySGShH4s", "yIqZ6Qt2Q9I", "CBicXAiC36Q", "hIoa4-nBqfG", "eJQWWqSwYPf", "g6UdcXyeie4", "VOaJyiHbpQS", "95qtU1ELQPW", "qnHK9TIHgW2", "HL8YdANPUg", "KNP7xnQ0CX...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_re...
[ " Thanks for the continued discussion.\n\n> It is possible (and easier for me) to understand the bounds in three cases: the task has been previously seen, the task is expressible in terms of previously seen tasks, and the task is neither. The relevant case for claims regarding \"few-shot transfer\" is the last one;...
[ -1, -1, 5, -1, -1, -1, 8, -1, -1, 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, -1, 3, -1, -1, -1, 3, -1, -1, 3, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "Ea4RR7PCqOB", "GevRiME7K4_", "iclr_2022_ZOcX-eybqoL", "stECuCatQfK", "KNP7xnQ0CX", "yFzyxdJmEr", "iclr_2022_ZOcX-eybqoL", "8ab_74qb-du", "4uQ9Vzqo5V5", "iclr_2022_ZOcX-eybqoL", "eJQWWqSwYPf", "iclr_2022_ZOcX-eybqoL", "g6UdcXyeie4", "95qtU1ELQPW", "iclr_2022_ZOcX-eybqoL", "qnHK9TIHgW2"...
iclr_2022_NRX9QZ6yqt
Memory Augmented Optimizers for Deep Learning
Popular approaches for minimizing loss in data-driven learning often involve an abstraction or an explicit retention of the history of gradients for efficient parameter updates. The aggregated history of gradients nudges the parameter updates in the right direction even when the gradients at any given step are not inf...
Accept (Poster)
The paper proposes a general method to enhance the performance of first-order optimizers. The main idea is to use a memory buffer to maintain a limited set of critical gradients from recent history. Namely, gradients with large l2 norm. The paper includes a convergence proof on strongly convex smooth objectives. Experi...
train
[ "6JHRe8zRknz", "Vd8Y7_JQef0", "uXp9bD1PXBD", "Y1auAUcfIsA", "Kemf0XTgT5u", "ns-xodFkZRM", "uJuixu-wR-C", "HGI-uTcyPOV" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their suggestions and comments. We address the questions raised in the review in detail below:\n\n**Review**: nowhere near SotA validation on CIFAR 10/100? Validation should be around 75% looking at wideresnets, results, while this is an optimisation paper, I feel like this makes it hard...
[ -1, -1, -1, -1, 5, 6, 6, 8 ]
[ -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "HGI-uTcyPOV", "ns-xodFkZRM", "uJuixu-wR-C", "Kemf0XTgT5u", "iclr_2022_NRX9QZ6yqt", "iclr_2022_NRX9QZ6yqt", "iclr_2022_NRX9QZ6yqt", "iclr_2022_NRX9QZ6yqt" ]
iclr_2022_msRBojTz-Nh
Learned Simulators for Turbulence
Turbulence simulation with classical numerical solvers requires high-resolution grids to accurately resolve dynamics. Here we train learned simulators at low spatial and temporal resolutions to capture turbulent dynamics generated at high resolution. We show that our proposed model can simulate turbulent dynamics more...
Accept (Poster)
The paper compared different architectures of deep neural nets for learning full 3D turbulence simulations. On coarse grids, the proposed method predicts more accurately than the classical solvers, especially on preserving the high-frequency information. The reviews think the paper is clearly written with strong expe...
train
[ "CfIErGa3gD", "OSXLJlc6phi", "nns8QG2bRUz", "sO9-q7mFxJ", "d-Y1fhco_L8", "cypnkhuopQY", "jv2a75naa5Z", "6sajNc5LOXe", "eO47A92ppt1", "mIjMKRJOI_X", "RzuKKZY3CR" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for your reply. I marked up my score and I hope you can continue to polish the paper before submitting the final version.", "This paper aims to learn a simulator that predicts large-scale turbulent dynamics of a known system. As shown in the results, on coarse grids, the proposed method predicts more ...
[ -1, 6, -1, 8, -1, 6, -1, -1, -1, -1, 8 ]
[ -1, 3, -1, 3, -1, 3, -1, -1, -1, -1, 4 ]
[ "jv2a75naa5Z", "iclr_2022_msRBojTz-Nh", "mIjMKRJOI_X", "iclr_2022_msRBojTz-Nh", "eO47A92ppt1", "iclr_2022_msRBojTz-Nh", "cypnkhuopQY", "OSXLJlc6phi", "RzuKKZY3CR", "sO9-q7mFxJ", "iclr_2022_msRBojTz-Nh" ]
iclr_2022_e95i1IHcWj
Equivariant and Stable Positional Encoding for More Powerful Graph Neural Networks
Graph neural networks (GNN) have shown great advantages in many graph-based learning tasks but often fail to predict accurately for a task based on sets of nodes such as link/motif prediction and so on. Many works have recently proposed to address this problem by using random node features or node distance features. H...
Accept (Poster)
This work studies the question of increasing the expressive power of GNNs by adding positional encodings while preserving equivariance and stability to graph perturbations. Reviewers were generally positive about this work, highlighting its judicious problem setup, identifying the right notion of stability and how it ...
train
[ "-QM1WOZaB4", "tDJdVlfJ4mU", "YnLoIL8nlX", "g13bC9Tmzzq", "Y_5611viBwo", "458oyddUJRl", "ij8F_l0NyJ", "7NeVZHKk-L", "fktOSW6a3m", "JCPMWwo5WqK", "r2FHjBNoL4k", "K-JK0JvPPm4", "H3fYkVFUa9Y", "WOmV9WYxB4N", "l-2HLPQoIoC", "UmcV0IUh00P", "XjJUBUYGxIL", "UKq0T74XZ6u", "--KhlF2wr0j", ...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for increasing your evaluation! We take your suggestion seriously. We are working on a follow-up work that focuses on those tasks with multiple graphs where equivariance is more crucial as you suggested. ", " I agree with you that my claim about equivariance and transductive task is not correct as eq...
[ -1, -1, 6, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, -1, 5, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "tDJdVlfJ4mU", "r2FHjBNoL4k", "iclr_2022_e95i1IHcWj", "UKq0T74XZ6u", "YnLoIL8nlX", "ij8F_l0NyJ", "XjJUBUYGxIL", "iclr_2022_e95i1IHcWj", "--KhlF2wr0j", "UKq0T74XZ6u", "K-JK0JvPPm4", "YnLoIL8nlX", "7NeVZHKk-L", "iclr_2022_e95i1IHcWj", "H3fYkVFUa9Y", "cXJO6Giv_Xz", "fktOSW6a3m", "iclr...
iclr_2022_HOjLHrlZhmx
CROP: Certifying Robust Policies for Reinforcement Learning through Functional Smoothing
As reinforcement learning (RL) has achieved great success and even been adopted in safety-critical domains such as autonomous vehicles, a range of empirical studies have been conducted to improve its robustness against adversarial attacks. However, how to certify its robustness with theoretical guarantees still remains...
Accept (Poster)
The authors propose a framework for for the certification of reinforcement learning agents against adversarial observation/state perturbations based on randomized smoothing. They develop the theory of the framework, demonstrating that the framework can be used to certify lower bounds on the worst-case cumulative reward...
train
[ "zgVgXzLXW6O", "e9_PTKNIs5R", "A25Dq8-Te0x", "eZjLJLGaQ8Z", "5-DvEpli4up", "Yn3U7YPXmMS", "6uzVehanZPE", "Fud1_BxeNG", "doBf40x3WEo", "IMtAIGS9hD5", "sOMLKpJPvt-", "wdTpTiEof_", "0Fy_bnxCNfn", "7z7H7WjH_M5", "Vofgf8-NET9", "-GN545PlRhm", "6RuQftJBsI", "4Uo8j7_9L9n", "S_UlC5LH-Q7"...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ "This paper proposes two robustness certification criteria for Q-learning based RL policies under adversarial state perturbations: per-state action robustness and lower bound of cumulative rewards. The certification is mainly based on the randomized smoothing technique. For certifying the cumulative reward, two smo...
[ 6, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "iclr_2022_HOjLHrlZhmx", "7z7H7WjH_M5", "5-DvEpli4up", "iclr_2022_HOjLHrlZhmx", "Yn3U7YPXmMS", "Fud1_BxeNG", "iclr_2022_HOjLHrlZhmx", "IMtAIGS9hD5", "wdTpTiEof_", "wdTpTiEof_", "0Fy_bnxCNfn", "SnMJhXndiWA", "OiMoWIScAke", "zgVgXzLXW6O", "zgVgXzLXW6O", "zgVgXzLXW6O", "zgVgXzLXW6O", ...
iclr_2022_psh0oeMSBiF
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
As reinforcement learning (RL) has achieved near human-level performance in a variety of tasks, its robustness has attracted great attention. While a vast body of research has explored test-time (evasion) attacks in RL and corresponding defenses, its robustness against training-time (poisoning) attacks remains largely una...
Accept (Poster)
The authors develop a novel framework for certifying the robustness of RL agents against data poisoning attacks. They obtain lower bounds on the cumulative reward for several benchmark tasks. Reviewers had concerns about certain organizational and technical aspects of the paper, but these were addressed well in the di...
test
[ "bKTmxfBloy", "i7pbfn844mZ", "9k3F7PQi5hZ", "ePCxqRe9hGQ", "NQsUkXyAw3q", "YRGDpvMqmeA", "_Eh28C38F6", "zUHjNzR6Vu_", "NSfE1s6U4fg", "Ig1rrEajLv", "nq-2e2oOnqB", "KcCikdpGSod", "SjhpLqLb-_t", "y5W6O0JZVp4", "MZwTbpyTgaQ", "6ccZpdqwG1Y", "-muVCDQJ-nV", "nJJqpImSxq", "9ccrffh8xNC",...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " We thank the reviewer for the prompt feedback and we are glad that our clarifications were effective. We thank the reviewer’s suggestion for rewriting the Algorithm 5 and will definitely do that in our revision to avoid future confusion. However, we would like to further clarify that **it is a complete misunderst...
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 2 ]
[ "i7pbfn844mZ", "9k3F7PQi5hZ", "NQsUkXyAw3q", "wBAvqOOFGNK", "YRGDpvMqmeA", "NSfE1s6U4fg", "NSfE1s6U4fg", "iclr_2022_psh0oeMSBiF", "KcCikdpGSod", "SjhpLqLb-_t", "zUHjNzR6Vu_", "SjhpLqLb-_t", "6ccZpdqwG1Y", "iclr_2022_psh0oeMSBiF", "wBAvqOOFGNK", "zUHjNzR6Vu_", "zUHjNzR6Vu_", "zUHjNz...
iclr_2022_BwPaPxwgyQb
Provable Learning-based Algorithm For Sparse Recovery
Recovering sparse parameters from observational data is a fundamental problem in machine learning with wide applications. Many classic algorithms can solve this problem with theoretical guarantees, but their performances rely on choosing the correct hyperparameters. Besides, hand-designed algorithms do not fully exploi...
Accept (Poster)
Dear Authors, The paper was received nicely and discussed during the rebuttal period. There is consensus among the reviewers that the paper should be accepted: - This paper does contribute solidly to a timely topic of theoretical understanding of sparisty recovery with deep unroling. - The original version had ver...
train
[ "BTTHSZ3Rh5N", "4ksb5LJ2une", "Rrur3tQSsgl", "O_pdi8CxT2P", "GkDn3EaagsL", "jv779OYdWfV", "fv0hF6AU7Tm", "KFNsbIoTr1s", "bjWk2-sBvlC", "66CM-IXAjq", "Ua3t7ZJMfFf", "0qu5CVtrxtK", "5q01T_QjbCW", "jK1eQIkv1c", "yyMnOeKDUx", "BAmgjAnWapv", "Pih9OpsCwu", "qw_yXWq46JoR", "nj2_IwOrGvO"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "...
[ "The paper proposes a learning-to-learn algorithm for the classic problem of compressed sensing. The algorithm is based on a deep unrolling of a non-convex optimization procedure from a prior work (Wang et al, 2014) in the literature. The paper contains some theory (for capacity and generalization) and experiments,...
[ 6, 8, 5, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_BwPaPxwgyQb", "iclr_2022_BwPaPxwgyQb", "iclr_2022_BwPaPxwgyQb", "iclr_2022_BwPaPxwgyQb", "Rrur3tQSsgl", "BTTHSZ3Rh5N", "O_pdi8CxT2P", "4ksb5LJ2une", "iclr_2022_BwPaPxwgyQb", "jK1eQIkv1c", "5q01T_QjbCW", "5cRnih86L2N", "nj2_IwOrGvO", "0qu5CVtrxtK", "Pih9OpsCwu", "qw_yXWq46JoR...
iclr_2022_lY0-7bj0Vfz
Prototype memory and attention mechanisms for few shot image generation
Recent discoveries indicate that the neural codes in the primary visual cortex (V1) of macaque monkeys are complex, diverse and sparse. This leads us to ponder the computational advantages and functional role of these “grandmother cells." Here, we propose that such cells can serve as prototype memory priors that bias a...
Accept (Poster)
This paper uses prototype memories for learning generative models. Inspired by the finding that there is sparse activity and complex selectivity in the supragranular layers of every cortical region, even primary visual cortex, the authors propose to use prototype memories at each level of the hierarchy, which marks the...
train
[ "J8-0tolLLn", "GMWV4s1xLrI", "Tns_KwKW6W0", "N-hJDYlAiJ", "vYKexpIE7f", "L7uxSD9SZlN", "nuGQFQqRwjF", "xU1api0iYx", "If1_EVdLewa", "62TRCtulSAp", "W_KCxtC7qHc" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 4n7p,\n\nThank you again for your feedback. As the deadline for discussion is approaching, we would be happy to provide any additional clarifications that you may need.\n\nIn our previous comments, we have carefully studied your comments and made updates to the revision as summarized below:\n\n- Rev...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "62TRCtulSAp", "If1_EVdLewa", "If1_EVdLewa", "iclr_2022_lY0-7bj0Vfz", "W_KCxtC7qHc", "62TRCtulSAp", "62TRCtulSAp", "If1_EVdLewa", "iclr_2022_lY0-7bj0Vfz", "iclr_2022_lY0-7bj0Vfz", "iclr_2022_lY0-7bj0Vfz" ]
iclr_2022_7UmjRGzp-A
Understanding over-squashing and bottlenecks on graphs via curvature
Most graph neural networks (GNNs) use the message passing paradigm, in which node features are propagated on the input graph. Recent works pointed to the distortion of information flowing from distant nodes as a factor limiting the efficiency of message passing for tasks relying on long-distance interactions. This phen...
Accept (Oral)
The paper proposes a new technique to handle oversquashing in GNNs by introducing a novel rewiring technique. The reviewers are quite positive about the paper and the rebuttal phase greatly helped clarify the method and it's impact.
train
[ "l19T-EuvxvG", "olFvxkayJYF", "583iS7M5keP", "d68aDZJ0ac", "1ognaWiUJk", "N7_talvJjhe", "3QL1PZDosZB", "aB6w7xgUEHJ", "Vrf66eENr7X", "wUAQTjYtLAV", "MP-EWtgRghc", "jgP1vE1TIsq", "ntxMNiXNCbL", "7i8x7fsAdy", "4B4lMcNTFQb", "ieIdosWSyVt", "7FeFaSgWRU-", "US8GtufIL6_", "8NA_kOsiVSN"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "...
[ "The authors propose a new graph rewiring approach that utilizes a discrete notion of Ricci curvature to mitigate over-squashing. This is motivated by a link between negatively curved edges and graph bottlenecks. The paper has a theoretical focus, but also provides a set of validation experiments to demonstrate the...
[ 8, -1, 8, -1, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_7UmjRGzp-A", "Vrf66eENr7X", "iclr_2022_7UmjRGzp-A", "Vrf66eENr7X", "iclr_2022_7UmjRGzp-A", "MP-EWtgRghc", "jgP1vE1TIsq", "wUAQTjYtLAV", "iclr_2022_7UmjRGzp-A", "83V-xdyhBrI", "4B4lMcNTFQb", "ntxMNiXNCbL", "7i8x7fsAdy", "ieIdosWSyVt", "K9waDPEsE7B", "7FeFaSgWRU-", "US8GtufI...
iclr_2022_ecH2FKaARUp
An Information Fusion Approach to Learning with Instance-Dependent Label Noise
Instance-dependent label noise (IDN) widely exists in real-world datasets and usually misleads the training of deep neural networks. Noise transition matrix (NTM) (i.e., the probability that clean labels flip into noisy labels) is used to characterize the label noise and can be adopted to bridge the gap between clean a...
Accept (Poster)
To tackle the problem of classification under input-dependent noise, the authors proposed the posterior transition matrix (PTM) to achieve statistically consistent classification. Specifically the information fusion approach was developed to fine-tune the noise transition matrix. Experiments demonstrated the effectiven...
test
[ "2tc1bAh8eLA", "Mgi3ehWq5uL", "TENvRr-RG1F", "X1O419zarZi", "r9kiOzGPGKr", "sgRqS8Svy3Q", "6aYEA42-muF", "yzWroTUM7f1", "lHkIEoFi5lN", "81RUsan_O06", "3Qz7sUoZxHv", "e1SryFXM-Il", "nAQ1nb74GhE", "ng3DO-6crME", "XTpjSQmQEDX", "xhLPOWazTuc", "3S1Us8pvlyh", "ptzUGRPWbeM", "-O9u2qzk0...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "...
[ " Q3: Peer loss [2] also provides optimal classifier guarantee given prior label probability and transition matrix. Based on my understanding, peer loss does not require the exact knowledge of the transition matrix.\n\nA3: For optimal classifier guarantee, peer loss still needs exact knowledge of the transition mat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, 5, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "X1O419zarZi", "r9kiOzGPGKr", "sgRqS8Svy3Q", "lHkIEoFi5lN", "81RUsan_O06", "yzWroTUM7f1", "ng3DO-6crME", "ng3DO-6crME", "ng3DO-6crME", "ng3DO-6crME", "iclr_2022_ecH2FKaARUp", "fHLtN7UrHjr", "iclr_2022_ecH2FKaARUp", "4Dr1BGw5M-U", "3S1Us8pvlyh", "iclr_2022_ecH2FKaARUp", "B40Np8am3bK",...
iclr_2022_htWIlvDcY8
FALCON: Fast Visual Concept Learning by Integrating Images, Linguistic descriptions, and Conceptual Relations
We present a meta-learning framework for learning new visual concepts quickly, from just one or a few examples, guided by multiple naturally occurring data streams: simultaneously looking at images, reading sentences that describe the objects in the scene, and interpreting supplemental sentences that relate the novel c...
Accept (Poster)
This paper presents a meta learning framework to learn novel visual concepts with few examples. The proposed FALCON model uses an embedding prediction module to infer novel concept embeddings. This is done via paired image and text data as well as supplementary sentences. The resulting systems shows improvements on a s...
train
[ "eppaOJcXcQB", "zjiCSgiscig", "XrgGV02CjR4", "-NM7pWPCLBN", "OSo7qDeaWxq", "RAeLbCs9ZSp", "YU2EkVIk6u", "Mr_19zLqztl", "QjICwGqODu9", "1I6EWRpUvs2", "THAytYaDdGw", "PIip4T4RzIm", "s1aQdaRhpyh", "K6gka_QPdTE", "J3chA0szjVE" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for the thorough and insightful response. The author resolved my questions. Generalizing to real-world image and language is challenging. I think this paper provide an interesting approach towards that. I would recommend 8.", " Thank you for the added examples and future plans! I am generally satisfied w...
[ -1, -1, 6, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "Mr_19zLqztl", "RAeLbCs9ZSp", "iclr_2022_htWIlvDcY8", "YU2EkVIk6u", "iclr_2022_htWIlvDcY8", "XrgGV02CjR4", "J3chA0szjVE", "QjICwGqODu9", "K6gka_QPdTE", "THAytYaDdGw", "OSo7qDeaWxq", "s1aQdaRhpyh", "iclr_2022_htWIlvDcY8", "iclr_2022_htWIlvDcY8", "iclr_2022_htWIlvDcY8" ]
iclr_2022_n0OeTdNRG0Q
Efficient Sharpness-aware Minimization for Improved Training of Neural Networks
Overparametrized Deep Neural Networks (DNNs) often achieve astounding performances, but may potentially result in severe generalization error. Recently, the relation between the sharpness of the loss landscape and the generalization error has been established by Foret et al. (2020), in which the Sharpness Aware Minimiz...
Accept (Poster)
This paper focuses on improving the efficiency of sharpness-aware minimization method for training neural networks. The proposals are stochastic weight perturbation, namely selecting subset of the parameters at any step, and sharpness-sensitive data selection. The philosophy behind sounds quite interesting to me, namel...
test
[ "Eu0kOCRpg5-", "X9UYPp3GZyS", "MId2sRP3TEN", "SXI_ri9UjRi", "VI9438XPR1r", "2o9tHVYTuw2", "T-95MFRvYEH", "BlqXzHR0Mue", "vTXMfwqg8QO", "9wb5fWsk73a", "SlFvSFECh3", "aDbCbC7zT3H", "1dasJvCRdW0" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear authors, thank you for the detailed rebuttal, especially for the new experimental results and intuition for why ESAM is helpful! I maintain my assessment, this is a good paper and I recommend accepting it.", "Paper proposes techniques to improve the efficiency of Sharpness aware minimization method. They a...
[ -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "9wb5fWsk73a", "iclr_2022_n0OeTdNRG0Q", "vTXMfwqg8QO", "iclr_2022_n0OeTdNRG0Q", "T-95MFRvYEH", "iclr_2022_n0OeTdNRG0Q", "SXI_ri9UjRi", "1dasJvCRdW0", "X9UYPp3GZyS", "SlFvSFECh3", "aDbCbC7zT3H", "iclr_2022_n0OeTdNRG0Q", "iclr_2022_n0OeTdNRG0Q" ]
iclr_2022_tUMr0Iox8XW
Efficient Computation of Deep Nonlinear Infinite-Width Neural Networks that Learn Features
While a popular limit of infinite-width neural networks, the Neural Tangent Kernel (NTK) often exhibits performance gaps from finite-width neural networks on standard datasets, due to lack of feature learning. Although the feature learning *maximal update limit*, or *μ-limit* (Yang and Hu, 2020) of wide networks has cl...
Accept (Poster)
This paper studies deep non-linear infinite-width neural networks that go beyond the NTK and learn features. This paper extends the prior result on shallow neural networks to deep neural networks and empirically evaluates the deep inf-wide nn. The reviewers find the contributions in the paper valuable. The meta reviewe...
train
[ "zlx98YhJchZ", "ScNkcymsfGp", "WMloWi4Sg9", "AcJ2hUXK824", "19e8c7GS7L7", "hr8L_Rz9toh", "QBYL5ORZWSZ", "OYRx4_wM_MM", "rfm4ka-1WhP", "P_VuJ9opIXc", "8Ap3gUIdCKx", "tgRIA3ecXvO", "m1VIbVi7y1t", "FH_ikg-qXMH", "KlL29tucoz", "4hzmCrEcn95", "dQY6Y1nlcR0", "0kyGXnkj149", "_GMxrykS6L-...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", ...
[ " We will do another sweep in standard parametrization as you suggest, as well as address what \"the pi-net and pi-limit really tells us much about what happens in standard, finite width networks.\"", " Thanks to the authors for their response, as well as to the other reviewers for their detailed reviews - I am a...
[ -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "ScNkcymsfGp", "m1VIbVi7y1t", "iclr_2022_tUMr0Iox8XW", "iclr_2022_tUMr0Iox8XW", "OYRx4_wM_MM", "QBYL5ORZWSZ", "FH_ikg-qXMH", "dQY6Y1nlcR0", "P_VuJ9opIXc", "8Ap3gUIdCKx", "FH_ikg-qXMH", "0kyGXnkj149", "bWHg-Z5HAl2", "WMloWi4Sg9", "WMloWi4Sg9", "WMloWi4Sg9", "LwvPdbhLJqb", "7A3BBh5Hy...
iclr_2022_5QhUE1qiVC6
The Convex Geometry of Backpropagation: Neural Network Gradient Flows Converge to Extreme Points of the Dual Convex Program
We study non-convex subgradient flows for training two-layer ReLU neural networks from a convex geometry and duality perspective. We characterize the implicit bias of unregularized non-convex gradient flow as convex regularization of an equivalent convex model. We then show that the limit points of non-convex subgradie...
Accept (Poster)
The paper makes progress on the important question of implicit bias in gradient-based neural learning. Remarkably, they derive reasonable conditions for global optimality.
test
[ "zkLmFj5GUw", "X0M9ERscTH", "fchynKyh-us", "jGiuLxf46GX", "XTeGTqxI8SF", "aByoprKyOOf", "SgpEwwk8wK3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper studies the subgradient flows when training a two-layer ReLU neural network. To this end, the non-convex max-margin problem is reformulated as a convex optimization problem. The authors then analyze the dual extreme points of the convex formulation and show the implicit regularization of unregularized gr...
[ 6, -1, -1, -1, -1, 6, 8 ]
[ 2, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_5QhUE1qiVC6", "fchynKyh-us", "zkLmFj5GUw", "SgpEwwk8wK3", "aByoprKyOOf", "iclr_2022_5QhUE1qiVC6", "iclr_2022_5QhUE1qiVC6" ]
iclr_2022_cw-EmNq5zfD
Group-based Interleaved Pipeline Parallelism for Large-scale DNN Training
The recent trend of using large-scale deep neural networks (DNN) to boost performance has propelled the development of the parallel pipelining technique for efficient DNN training, which has resulted in the development of several prominent pipelines such as GPipe, PipeDream, and PipeDream-2BW. However, the current lead...
Accept (Poster)
The paper proposes a new pipeline-parallel training method called WPipe. WPipe works (on a very high level) by replacing the two-buffer structure of PipeDream-2BW with a two-partition-group structure, allowing resources to be shared in a similar way to PipeDream-2BW but with less memory use and less delays in weight up...
val
[ "roXML1Liy4Z", "RdAeTukd73P", "TFVEUXb9e3V", "THOnpU5TO3H", "4qnnBofmolk", "TldKKt6QJ6F", "K_WB80CKrDf", "9T3P44cJMSv", "gT8bkYYJBb", "AmMfoBOmLBr", "pa7drHi_Opl", "2LRFHHArijz" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for expressing your concerns in more detail, so that we can understand your thoughts better. You may have some misunderstandings about WPipe.\n\nRegarding **A1**, WPipe, like GPipe, PipeDream-2BW, PipeDream-flush, etc., is a general pipeline parallel training system. Where the others can be used, WPipe ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "RdAeTukd73P", "9T3P44cJMSv", "THOnpU5TO3H", "K_WB80CKrDf", "pa7drHi_Opl", "2LRFHHArijz", "AmMfoBOmLBr", "gT8bkYYJBb", "iclr_2022_cw-EmNq5zfD", "iclr_2022_cw-EmNq5zfD", "iclr_2022_cw-EmNq5zfD", "iclr_2022_cw-EmNq5zfD" ]
iclr_2022_iMH1e5k7n3L
Spike-inspired rank coding for fast and accurate recurrent neural networks
Biological spiking neural networks (SNNs) can temporally encode information in their outputs, e.g. in the rank order in which neurons fire, whereas artificial neural networks (ANNs) conventionally do not. As a result, models of SNNs for neuromorphic computing are regarded as potentially more rapid and efficient than AN...
Accept (Spotlight)
The authors propose a rank coding scheme for recurrent neural networks (RNNs) - inspired by spiking neural networks - in order to improve inference times at the classification of sequential data. The basic idea is to train the RNN to classify the sequence early - even before the full sequence has been observed. They al...
train
[ "JK4g1Z4qlZ", "Jkt8JoNeEoA", "9AkBgkuTPV1", "c6LDnWkMzUm", "RaWggLnkY_K", "qOXlMCxC-c5", "wJ0hCCV22G", "-eW-FUPEQ3", "X-qzb0OwOSM", "s_rWgRnhGRd", "n7IyIRqBDSN", "tXtaRAm6DTL", "aBJ-z9ALJF", "z_-jzhbHgg", "CeQCazQVQyp", "A9vsXEJCRsv", "74FHALwfvGr", "pw0B0zSQ3K", "ra1RgfawTNa", ...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_r...
[ " We would like to thank the reviewers for the constructive discussion. Based on it we have updated the manuscript with the requested new experiments, added clarifications and corrections of some typos. We are thankful for the process that allowed the paper to converge to the same recommendation by all reviewers.",...
[ -1, -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_iMH1e5k7n3L", "9AkBgkuTPV1", "qOXlMCxC-c5", "iclr_2022_iMH1e5k7n3L", "n7IyIRqBDSN", "X-qzb0OwOSM", "iclr_2022_iMH1e5k7n3L", "s_rWgRnhGRd", "z_-jzhbHgg", "A9vsXEJCRsv", "tXtaRAm6DTL", "OQeDo31BcG", "tXtaRAm6DTL", "CeQCazQVQyp", "74FHALwfvGr", "wJ0hCCV22G", "pw0B0zSQ3K", "...
iclr_2022_VTNjxbFRKly
Why Propagate Alone? Parallel Use of Labels and Features on Graphs
One of the challenges of graph-based semi-supervised learning over ordinary supervised learning for classification tasks lies in label utilization. The direct use of ground-truth labels in graphs for training purposes can result in a parametric model learning trivial degenerate solutions (e.g., an identity mapping fro...
Accept (Poster)
The paper provides the theoretical justification for the "label trick" (using labels in graph-based semisupervised learning tasks). The authors performed a thorough evaluation of their analysis, which constitutes an experimental contribution. The authors provided a rebuttal that the AC finds to have reasonably addresse...
train
[ "EAbSQQ25DB1", "vosS8lhY7jx", "KXOk-q4eBwx", "SrkRvsSPb6i", "8bhEtKtQS1r", "4XnNui6h8Zc", "oGj3CTO74b", "FSxm6ysqir", "ag6UsI324G5", "hRDojxrS8Ww", "ZpDq3j9wcE", "3NLY9ZoWzbl", "-aiBwNrsWaX", "dzsH3fJDtDg", "bG-1SzYozOl", "O1vGMSWkC8h", "hU8h_fWLKTV", "7iTn_FmMqC", "e8bewRIOTn2",...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " **Question:** I think it is worth highlighting that after looking at the $R^2$ score, that the House dataset is the only one with meaningful improvement (and I am not sure why the right-column is bolded for the County dataset). This is not to fault the experiment/methodology, though.\n\n**Response:** Actually, i...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "vosS8lhY7jx", "KXOk-q4eBwx", "oGj3CTO74b", "oGj3CTO74b", "hU8h_fWLKTV", "iclr_2022_VTNjxbFRKly", "O1vGMSWkC8h", "kEr81JGclM3", "ZpDq3j9wcE", "ZpDq3j9wcE", "3NLY9ZoWzbl", "-aiBwNrsWaX", "bG-1SzYozOl", "7iTn_FmMqC", "7iTn_FmMqC", "kEr81JGclM3", "e8bewRIOTn2", "QIGVz1mOW3G", "ypQ3F...
iclr_2022_6Pe99Juo9gd
Learning Value Functions from Undirected State-only Experience
This paper tackles the problem of learning value functions from undirected state-only experience (state transitions without action labels i.e. (s,s',r) tuples). We first theoretically characterize the applicability of Q-learning in this setting. We show that tabular Q-learning in discrete Markov decision processes (MDP...
Accept (Poster)
The paper proposes a method for learning state value functions from (s,s',r) tuples, founded on the theoretical analysis in the MDP setting. The extensive evaluation in several environments shows the benefit of the algorithm. The consensus among the reviewers, with which I concur, is that the paper proposes an interesting and novel...
train
[ "QMOtBWH-neT", "h9_I2Vz4R6P", "6dFHDYWrNyA", "vxgtC8DlAwT", "xLlb0Jdf7Nu", "WMRqRc9Y28", "D7tjqtx1GDw", "srsBI6hiVT", "_xIakIt_TF5", "8oVpGbFa_G", "JBGEUe_6WmI", "2EUJ8at80GI", "yLzFkIjIkTA", "3ezMn9C1PFm" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper analyzes the problem of learning from experience tuples that\nomit the action. The authors analyze tabular Q-learning and show that if\nthe action-space is a refinement, then Q-learning can learn the optimal\nvalue function. Motivated by this, an algorithm is proposed that labels\n$(s,s')$ tuples with a...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "iclr_2022_6Pe99Juo9gd", "6dFHDYWrNyA", "vxgtC8DlAwT", "8oVpGbFa_G", "3ezMn9C1PFm", "yLzFkIjIkTA", "2EUJ8at80GI", "2EUJ8at80GI", "QMOtBWH-neT", "QMOtBWH-neT", "iclr_2022_6Pe99Juo9gd", "iclr_2022_6Pe99Juo9gd", "iclr_2022_6Pe99Juo9gd", "iclr_2022_6Pe99Juo9gd" ]
iclr_2022_YWNAX0caEjI
Neural Structured Prediction for Inductive Node Classification
This paper studies node classification in the inductive setting, i.e., aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs. This problem has been extensively studied with graph neural networks (GNNs) by learning effective node representations, as well as tr...
Accept (Oral)
Most of the existing GNN based methods model the node labels independently and ignore the joint dependency of node labels. The CRF-based methods work in this setting, but they are hard to learn. Hence, this paper proposes to ease the learning difficulty by solving the proxy problem and simplifying the max-min problem. ...
train
[ "dTxbG3dA1E", "vIDJnpmtMn", "GZAlk8S7Dhv", "eUJdecnU-NE", "JSZqwhe8uv", "Ctat-SJZz9", "gUUxmwQw25H", "ujUhff1I-iv", "fp7fvfGfsV", "CDhAeJVTqxs", "_j61ikz8B7H", "yKqo_zaP7LE", "VW2bbjN9sE0", "TsbJKpTpbAI" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for the helpful suggestions on the caption of fig.3 and the name of the model!\n\nWe agree that the caption and the model name should be further revised. For now, we are not able to update the draft as the function is currently closed, but we will keep editing the paper to further improve its quality.",...
[ -1, -1, 10, 8, -1, -1, -1, 8, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 4, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, 3 ]
[ "vIDJnpmtMn", "yKqo_zaP7LE", "iclr_2022_YWNAX0caEjI", "iclr_2022_YWNAX0caEjI", "Ctat-SJZz9", "fp7fvfGfsV", "CDhAeJVTqxs", "iclr_2022_YWNAX0caEjI", "ujUhff1I-iv", "ujUhff1I-iv", "eUJdecnU-NE", "GZAlk8S7Dhv", "TsbJKpTpbAI", "iclr_2022_YWNAX0caEjI" ]
iclr_2022_fvLLcIYmXb
AS-MLP: An Axial Shifted MLP Architecture for Vision
An Axial Shifted MLP architecture (AS-MLP) is proposed in this paper. Different from MLP-Mixer, where the global spatial feature is encoded for information flow through matrix transposition and one token-mixing MLP, we pay more attention to the local features interaction. By axially shifting channels of the feature map...
Accept (Poster)
The paper proposes a MLP-based architecture that makes extensive use of the shift operation on the feature maps. The model performs well on several vision tasks and datasets. The reviews are mixed even after the authors' response. Main pros are that the proposed architecture is elegant and reasonable, and the experime...
test
[ "qKRpP7-XCk9", "oBUwuPN9xM", "Sood-XnNhFK", "opY-APHv9_", "K-wYut2f_qt", "fIZsuqIap0", "rR0moUMhJQZ", "iR7E4zlZ9FH", "fS5k9JBON_j", "w1swBMAoUkd", "oANBULYIhAH", "31jVlyWknwT", "cjk-ovj1AmHF", "RVIhOyGGLwz", "paK3_sofwrl", "9_4IPx0dN_A", "slY2bbl8Dgb", "q41HAZNW_nx" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "public", "official_reviewer" ]
[ " Thanks for your response. Our new response is as follows. \n\nQ1: non-axial 3x3 will use the same number of groups (8) as the axial 5x5 design, but the performance looks similar for axial 3x3 (Table 3), axial 5x5 (Table 3), and presumably non-axial 3x3.\n\nA1: It is worth noting that although non-axial 3x3 will u...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "oBUwuPN9xM", "q41HAZNW_nx", "q41HAZNW_nx", "fS5k9JBON_j", "slY2bbl8Dgb", "iR7E4zlZ9FH", "iclr_2022_fvLLcIYmXb", "9_4IPx0dN_A", "paK3_sofwrl", "iclr_2022_fvLLcIYmXb", "iclr_2022_fvLLcIYmXb", "q41HAZNW_nx", "q41HAZNW_nx", "w1swBMAoUkd", "w1swBMAoUkd", "rR0moUMhJQZ", "rR0moUMhJQZ", "...
iclr_2022_OJm3HZuj4r7
Convergent and Efficient Deep Q Learning Algorithm
Despite the empirical success of the deep Q network (DQN) reinforcement learning algorithm and its variants, DQN is still not well understood and it does not guarantee convergence. In this work, we show that DQN can indeed diverge and cease to operate in realistic settings. Although there exist gradient-based convergen...
Accept (Poster)
It is important to have good, stable, and trustworthy algorithms. Though I am unconvinced that the C-DQN algorithm proposed here is the final word (and I suppose this is not controversial, and the authors might agree), the ideas presented here are sufficiently interesting to be disseminated and discussed more widely. A...
train
[ "hNkz6vsCyM_", "-d1JePfv-9H", "HY_Yq4wbof", "u1V7R6yUc6", "N0ldPjZx47", "M8Nj1iLMC6h", "7Cs4b-mSXRK", "Rlv1jlBeH6J", "6Ya0SBfo_bg", "a-MvGvdKic", "DamG3_iLPBe", "JgZi8HEZtIZ", "N6D0xs1-dDe" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ " Thank you for the detailed answer. I have raised the review score to \"marginally above\", considering that the paper might be interesting to some readers. Nevertheless, I still maintain that the performance had only improved on a very narrow set of examples selected to match the properties of the algorithm, so I...
[ -1, -1, -1, 6, -1, 6, -1, -1, 10, -1, -1, -1, -1 ]
[ -1, -1, -1, 4, -1, 3, -1, -1, 4, -1, -1, -1, -1 ]
[ "7Cs4b-mSXRK", "a-MvGvdKic", "N0ldPjZx47", "iclr_2022_OJm3HZuj4r7", "DamG3_iLPBe", "iclr_2022_OJm3HZuj4r7", "u1V7R6yUc6", "a-MvGvdKic", "iclr_2022_OJm3HZuj4r7", "N6D0xs1-dDe", "JgZi8HEZtIZ", "M8Nj1iLMC6h", "6Ya0SBfo_bg" ]
iclr_2022_gRCCdgpVZf
Provable Adaptation across Multiway Domains via Representation Learning
This paper studies zero-shot domain adaptation where each domain is indexed on a multi-dimensional array, and we only have data from a small subset of domains. Our goal is to produce predictors that perform well on \emph{unseen} domains. We propose a model which consists of a domain-invariant latent representation laye...
Accept (Poster)
Thanks for your submission to ICLR. This paper explores zero-shot adaptation from a theoretical perspective. Three of the four reviewers are quite positive about the paper, particularly after the discussion phase. One reviewer was more negative, citing a lack of compelling experiments and some possibly restrictive a...
train
[ "_6xnmsCQHH", "ahgxMUntBTE", "ONDg-onbaCu", "a5ZG0tJtDDr", "qfH4KdXoXMf", "eP8A-zF-6xQ", "FJMQAaI7Y5l", "J-Rn0G1tP52", "BNl9gpwqrt4", "ABzk57btCju", "qb5mkalrDH3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for responding to my comments. One of my initial concerns was the lack of evaluation on datasets more complex than the MNIST-derived one. The authors have now provided such results, and comments regarding my question on hyperparameter tuning. As a result, I have raised my score t...
[ -1, 8, -1, -1, -1, -1, -1, -1, 6, 3, 6 ]
[ -1, 2, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "FJMQAaI7Y5l", "iclr_2022_gRCCdgpVZf", "a5ZG0tJtDDr", "qb5mkalrDH3", "iclr_2022_gRCCdgpVZf", "ABzk57btCju", "ahgxMUntBTE", "BNl9gpwqrt4", "iclr_2022_gRCCdgpVZf", "iclr_2022_gRCCdgpVZf", "iclr_2022_gRCCdgpVZf" ]
iclr_2022_g8NJR6fCCl8
NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning
Deployment of machine learning models in real high-risk settings (e.g. healthcare) often depends not only on the model's accuracy but also on its fairness, robustness, and interpretability. Generalized Additive Models (GAMs) are a class of interpretable models with a long history of use in these high-risk domains, but ...
Accept (Spotlight)
The paper proposes two new generalized additive models (GAM) based on neural networks and referred to as NODE-GAM and NODE-GA2M. An empirical analysis shows that the proposed and carefully designed architectures perform comparably to several baselines on medium-sized datasets while outperforming them on larger datasets...
train
[ "Zk8uwpM1RwL", "EYFXRRdcjuF", "wWtFnQwf3w", "A3yGP7mLYdO", "zrEoct8poj", "toZIPucxOcy", "GH2iXEne2h-", "-hEPQ-ulsir", "WaqeJQ8RMav", "q_YdXig2b2Q", "9omQjf0IDPB", "wzsTA1fL6EF", "iRhwOqDznyb" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thank you and it makes my day :)", " I have changed my score to reflect it as such!", "Generalized Additive Models (GAMs) are a class of interpretable models with a long history of use in these high-risk domains, but they lack desirable features of deep learning such as differentiability and scalability. \n\n...
[ -1, -1, 8, -1, -1, -1, 8, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, 2 ]
[ "EYFXRRdcjuF", "q_YdXig2b2Q", "iclr_2022_g8NJR6fCCl8", "toZIPucxOcy", "iclr_2022_g8NJR6fCCl8", "-hEPQ-ulsir", "iclr_2022_g8NJR6fCCl8", "WaqeJQ8RMav", "wzsTA1fL6EF", "wWtFnQwf3w", "iRhwOqDznyb", "GH2iXEne2h-", "iclr_2022_g8NJR6fCCl8" ]
iclr_2022_TqNsv1TuCX9
Axiomatic Explanations for Visual Search, Retrieval, and Similarity Learning
Visual search, recommendation, and contrastive similarity learning power technologies that impact billions of users worldwide. Modern model architectures can be complex and difficult to interpret, and there are several competing techniques one can use to explain a search engine's behavior. We show that the theory of fa...
Accept (Poster)
The initial reviews for this paper were 6,6,6, the authors have provided a rebuttal and after the rebuttal the recommendation stayed the same. The reviewers have reached the consensus that the paper is borderline but they have all recommended keeping it above the acceptance threshold. Following the recommendation of th...
train
[ "-3SaKSq2Yjw", "64nWXInLSCP", "VsqytWmACMn", "JmsURW6_ZXh", "0RX4AC7ByHP", "DYz38f9BXWt", "v3WppZxIIUL", "p9ZRX3EKUhC", "6jvKkBl8SWv", "0eqCUqBsOOO", "G-kAVINJLXi", "6iaDRUSQjc" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your helpful comments, as today is the last day of the review please let us know if there is anything else we can address at this time. We appreciate your consideration", " Thank you for your helpful comments, as today is the last day of the review please let us know if there is anything else we c...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 2, 4 ]
[ "G-kAVINJLXi", "JmsURW6_ZXh", "DYz38f9BXWt", "iclr_2022_TqNsv1TuCX9", "iclr_2022_TqNsv1TuCX9", "v3WppZxIIUL", "p9ZRX3EKUhC", "6iaDRUSQjc", "JmsURW6_ZXh", "G-kAVINJLXi", "iclr_2022_TqNsv1TuCX9", "iclr_2022_TqNsv1TuCX9" ]
iclr_2022_HuaYQfggn5u
FedBABU: Toward Enhanced Representation for Federated Image Classification
Federated learning has evolved to improve a single global model under data heterogeneity (as a curse) or to develop multiple personalized models using data heterogeneity (as a blessing). However, little research has considered both directions simultaneously. In this paper, we first investigate the relationship between ...
Accept (Poster)
The paper makes some novel and interesting observation pertaining the relationship between data heterogeneity and personalization. Reviewers like the paper and ideas in general but raised several concerns. The rebuttal rectified several confusions and provided more clarification which convinced the reviewers that the p...
train
[ "emMu6-qDO_z", "6p2HcqRYIT0", "hbna7XSycHx", "QvsJb40BEU0", "Z2yxKru0AZv", "0UyZhaLIFE5", "bPvPjtO79jv", "KdUSj-_PjT", "0ylO2EcfrK3", "RiRU3Toql69", "wxoihVqzmcK", "XofkUwcnRR_" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " For more empirical evidence related to Table 2 (a motivation for our study), we provide the results on CIFAR10 with the same settings as Table 2. The results show almost the same trend that training the head on the server can hurt personalization, although the performance degradation decreases compared to the cas...
[ -1, 5, 6, -1, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 3, 4, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "6p2HcqRYIT0", "iclr_2022_HuaYQfggn5u", "iclr_2022_HuaYQfggn5u", "KdUSj-_PjT", "iclr_2022_HuaYQfggn5u", "6p2HcqRYIT0", "6p2HcqRYIT0", "hbna7XSycHx", "Z2yxKru0AZv", "XofkUwcnRR_", "iclr_2022_HuaYQfggn5u", "iclr_2022_HuaYQfggn5u" ]
iclr_2022_P7OVkHEoHOZ
Hindsight Foresight Relabeling for Meta-Reinforcement Learning
Meta-reinforcement learning (meta-RL) algorithms allow for agents to learn new behaviors from small amounts of experience, mitigating the sample inefficiency problem in RL. However, while meta-RL agents can adapt quickly to new tasks at test time after experiencing only a few trajectories, the meta-training process is ...
Accept (Poster)
This paper proposes Hindsight Foresight Relabeling (HFR), an approach for reward relabeling for meta RL. The main contribution is a measure of how useful a given trajectory is for the purpose of meta-task identification as well as the derivation of a task relabeling distribution based on this measure. Reviewers agreed...
train
[ "cJbbpKa-MsA", "r6-ZwsHymxJ", "RD_J3bP483I", "WCzbAoZ7fx7", "no4Vv0wviWP", "PB1XjctYOmI", "WGsdFqAL89U", "fIsoknYvlz5", "E9qqIUbh-G", "4zufVPqbF9g", "vPzUBd1JyIo", "cCGIO2NT5y-", "ZrNBZZwne_M", "oR0Cukb1cHk", "QZDBsEJkgDZ", "rKGyAxevTk-" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " We really appreciate the reviewer for their positive evaluation of our work and for taking our rebuttal into consideration in raising their rating of the paper. Thank you!", "This paper studies task relabelling in hindsight to increase the efficiency of meta-reinforcement-learning.\nThe authors propose a strate...
[ -1, 8, -1, -1, -1, 5, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, 3, -1, -1, -1, 3, -1, -1, 5, -1, -1, -1, -1, -1, -1, 4 ]
[ "r6-ZwsHymxJ", "iclr_2022_P7OVkHEoHOZ", "iclr_2022_P7OVkHEoHOZ", "WGsdFqAL89U", "WGsdFqAL89U", "iclr_2022_P7OVkHEoHOZ", "QZDBsEJkgDZ", "4zufVPqbF9g", "iclr_2022_P7OVkHEoHOZ", "ZrNBZZwne_M", "iclr_2022_P7OVkHEoHOZ", "rKGyAxevTk-", "E9qqIUbh-G", "r6-ZwsHymxJ", "PB1XjctYOmI", "iclr_2022_P...
iclr_2022_Ucx3DQbC9GH
What Makes Better Augmentation Strategies? Augment Difficult but Not too Different
The practice of data augmentation has been extensively used to boost the performance of deep neural networks for various NLP tasks. It is more effective when only a limited number of labeled samples is available, e.g., low-data or class-imbalanced regimes. Most current augmentation techniques rely on parameter tuning o...
Accept (Poster)
We appreciate the authors for addressing the comments raised by the reviewers during the discussion period, which includes providing more experimental results to address the concerns. We believe the publication of this paper can contribute to the important topic of data augmentation. The authors are highly recommended...
train
[ "tKDxREtwX4-", "9cBk2yWyZI8", "nTeBZDWIlt", "jczCHr3t2HL", "LkJlDVsjHNv", "0S0j2YhZWY", "V5_pbFZxcLq", "s-_vdHm6ln", "1uq-AAljMvU", "FMVgg9Op-IO", "Lp6r_ALP8Nd", "2htDx02CnTJ", "Z0HnkfpCrlk", "cVPHcyaBtTb", "Wx9J3331ME", "IiVMBbkz2A", "ErFi8cI8UJH" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the positive response before the discussion phase ends, and we are happy to hear that our response could help to address your concerns !\n\nWe also agree that we used well-known techniques optimizing the proposed reward function, but we still think that designing such a good reward function itself, ...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "nTeBZDWIlt", "iclr_2022_Ucx3DQbC9GH", "9cBk2yWyZI8", "iclr_2022_Ucx3DQbC9GH", "9cBk2yWyZI8", "Wx9J3331ME", "Wx9J3331ME", "Wx9J3331ME", "Wx9J3331ME", "IiVMBbkz2A", "9cBk2yWyZI8", "9cBk2yWyZI8", "ErFi8cI8UJH", "ErFi8cI8UJH", "iclr_2022_Ucx3DQbC9GH", "iclr_2022_Ucx3DQbC9GH", "iclr_2022...
iclr_2022_U8pbd00cCWB
Differentiable Gradient Sampling for Learning Implicit 3D Scene Reconstructions from a Single Image
Implicit shape models are promising 3D representations for modeling arbitrary locations, with Signed Distance Functions (SDFs) particularly suitable for clear mesh surface reconstruction. Existing approaches for single object reconstruction impose supervision signals based on the loss of the signed distance value from ...
Accept (Poster)
The paper presents a new way to train the prediction of implicit 3D scene representations from a single view. The main innovations are a novel numerically stable and memory efficient formulation of the derivatives of a loss function based on the spatial gradients of the implicit field, and focusing the training on regi...
train
[ "zwUlItbwL_x", "8hsRqmy9hvP", "JyMcTzQ_O5O", "MnFuOD91e9t", "XzYTYjMRRrJ", "zP5Bqj8skrj", "ZUt1_HUmPqI", "wLMFXvlOKP", "AJJNaNCje1I", "cWYltwnhmaG", "GCuseA2aeZZ", "DZMfgXksOl", "yFgklUaZeaJ", "e8SbAoGxnCk", "AbMMujEzOi", "5-APjoRJSUq", "0EktWet713X" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for the detailed check on our rebuttal and the revision. We will address each in the manuscript, and will conduct the overall grammar check on the latest version to make sure our manuscript is carefully checked.", " Thanks very much for your response to the reviews and for addressing my comment abou...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "8hsRqmy9hvP", "e8SbAoGxnCk", "XzYTYjMRRrJ", "iclr_2022_U8pbd00cCWB", "DZMfgXksOl", "0EktWet713X", "iclr_2022_U8pbd00cCWB", "MnFuOD91e9t", "MnFuOD91e9t", "MnFuOD91e9t", "MnFuOD91e9t", "MnFuOD91e9t", "5-APjoRJSUq", "AbMMujEzOi", "iclr_2022_U8pbd00cCWB", "iclr_2022_U8pbd00cCWB", "iclr_...
iclr_2022_upnDJ7itech
Knowledge Infused Decoding
Pre-trained language models (LMs) have been shown to memorize a substantial amount of knowledge from the pre-training corpora; however, they are still limited in recalling factually correct knowledge given a certain context. Hence, they tend to suffer from counterfactual or hallucinatory generation when used in knowled...
Accept (Poster)
The paper introduces a novel decoding algorithm that dynamically integrates external knowledge with generative LMs. The proposed technique is plug-and-play: it does not require re-training or fine-tuning LMs with knowledge-based objectives. The authors report a series of experiments on several datasets and ta...
train
[ "GHGFBYPGuP2", "4dSd6VPNkC4", "SVHV29PAB3xp", "EQ8Ipj9jseyr", "A4ln27adUUI", "8GxAUyJeRYb-", "XUldnno6ZS", "rBJgbzDTL4W", "nWQw0MUQ5ZU", "_7bGnpLGYlE" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer’s constructive suggestions about clarity, and we feel grateful you recommended acceptance for our work!\n\nUnfortunately we cannot upload a new version right now, but we will definitely incorporate those details into our final revision following your suggestions. Below we briefly answer your...
[ -1, 6, -1, -1, -1, -1, -1, 6, 8, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "4dSd6VPNkC4", "iclr_2022_upnDJ7itech", "_7bGnpLGYlE", "iclr_2022_upnDJ7itech", "4dSd6VPNkC4", "rBJgbzDTL4W", "nWQw0MUQ5ZU", "iclr_2022_upnDJ7itech", "iclr_2022_upnDJ7itech", "iclr_2022_upnDJ7itech" ]
iclr_2022_f9MHpAGUyMn
Dynamic Token Normalization improves Vision Transformers
Vision Transformer (ViT) and its variants (e.g., Swin, PVT) have achieved great success in various computer vision tasks, owing to their capability to learn long-range contextual information. Layer Normalization (LN) is an essential ingredient in these models. However, we found that the ordinary LN makes tokens at dif...
Accept (Poster)
A new method for dynamic token normalization in ViTs (both within and across tokens) is introduced in the paper. As noted by the reviewers, the proposed method is technically sound, with a clear and solid motivation. The main raised concerns included the lack of experiments using larger models, unclear reason for the a...
val
[ "tiWcMLSCfAzI", "ZU-D9vYNCxJt", "3vpnqdPa47B", "xvooYzMu9n2", "Kg8P7zIeLq", "6JAtgxGRaq6", "rWwjDvaBqXiy", "PTpwB8GPAzm", "IfNm-yab6y45", "al3x7yaB7DcU", "LSbH-W2gTvX", "mFNQoMTLI4vf", "UYe1wq6yyDz", "SLhXeSRyVSp", "_jGAmvmHYfc", "W2Z7XlUdfu" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the detailed comments and valuable suggestions. We have provided a detailed general response to the concerns of all the reviewers. Please see details at [_General Response_](https://openreview.net/forum?id=f9MHpAGUyMn&noteId=al3x7yaB7DcU). We address the reviewer's concern as follows,\n...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "_jGAmvmHYfc", "al3x7yaB7DcU", "iclr_2022_f9MHpAGUyMn", "3vpnqdPa47B", "iclr_2022_f9MHpAGUyMn", "PTpwB8GPAzm", "W2Z7XlUdfu", "rWwjDvaBqXiy", "SLhXeSRyVSp", "iclr_2022_f9MHpAGUyMn", "mFNQoMTLI4vf", "UYe1wq6yyDz", "3vpnqdPa47B", "iclr_2022_f9MHpAGUyMn", "iclr_2022_f9MHpAGUyMn", "iclr_202...
iclr_2022_lQI_mZjvBxj
Towards Model Agnostic Federated Learning Using Knowledge Distillation
Is it possible to design a universal API for federated learning, using which an ad-hoc group of data-holders (agents) collaborate with each other and perform federated learning? Such an API would necessarily need to be model-agnostic, i.e., make no assumption about the model architecture being used by the agents, and als...
Accept (Poster)
This manuscript proposes and analyzes a distillation approach to address heterogeneity in distributed learning. The main paper focuses on a relatively simple two-agent kernel regression setting, and the insights developed are extended (and partially analyzed) for a multiagent setting. There are four reviewers, all of...
train
[ "F-6Q6fI50nI", "TYvO_n9WrNP", "I0xam9VbDT3", "QWXs5tnqkDd", "Z6QF41l73xP", "I9ftXN7pyUf", "4Puw9xM9UZu", "iB-H6nCwoAL", "AkOIDq_3oAg", "xk-Zhxbm_FY", "PTaBd_6kFBi", "VaoeLEjSa1N" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate authors' efforts on addressing my concerns. My main concern is still that the setting is bit toy. I keep my score, weakly accept.", "The paper analyzes the dynamics of optimizing two kernel regression models via co-distillation in a distributed setup where local models may differ in the kernel used...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "AkOIDq_3oAg", "iclr_2022_lQI_mZjvBxj", "I9ftXN7pyUf", "iclr_2022_lQI_mZjvBxj", "xk-Zhxbm_FY", "4Puw9xM9UZu", "VaoeLEjSa1N", "TYvO_n9WrNP", "PTaBd_6kFBi", "iclr_2022_lQI_mZjvBxj", "iclr_2022_lQI_mZjvBxj", "iclr_2022_lQI_mZjvBxj" ]
iclr_2022_DTXZqTNV5nW
Actor-Critic Policy Optimization in a Large-Scale Imperfect-Information Game
The deep policy gradient method has demonstrated promising results in many large-scale games, where the agent learns purely from its own experience. Yet, policy gradient methods with self-play suffer from convergence problems to a Nash Equilibrium (NE) in multi-agent situations. Counterfactual regret minimization (CFR) has ...
Accept (Poster)
This paper presents an Actor-Critic Hedge (ACH) method for 1-on-1 Mahjong. It is an actor-critic method for approximating Nash equilibrium strategies in large extensive-form games. ACH extends the CFR family of algorithms that uses deep learning and model-free training (not using full game traversal). The proposed ACH...
train
[ "cG6Xro4eXxB", "eu70Y4I-h-N", "R88dBUMU52k", "qcn9CVMnb2b", "M_COtJuq-b", "UhpBgd2VWqq", "ssALwvZlWH", "vG0V_r2oLS2", "mmUfeyMQPj8", "ORzraC24md6", "NsJFfb-DiJF", "oHDhw2NHzg2", "pX_MBnB0A_x", "iaKDPetoNIb", "vniCWqm8LY", "Z9jdMR1zu-S", "Ka4MytXZ3oH", "UdFT2b4Y-f", "F-YM4snYhy", ...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", ...
[ " Many thanks for the reply. We really appreciate it !\n\nWe would like to express our sincere gratitude to all the reviewers for their thorough and insightful comments, based on which the paper has been greatly improved since its first submission. \n\n* reviewer comment: My concern is that the original publication...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "eu70Y4I-h-N", "LE3mIH0Q-Sr", "iclr_2022_DTXZqTNV5nW", "ssALwvZlWH", "ssALwvZlWH", "ssALwvZlWH", "mmUfeyMQPj8", "vniCWqm8LY", "ORzraC24md6", "oHDhw2NHzg2", "iclr_2022_DTXZqTNV5nW", "NsyU7ZwNmr", "Z9jdMR1zu-S", "iclr_2022_DTXZqTNV5nW", "n9kJ1iMzQJS", "Ka4MytXZ3oH", "F-YM4snYhy", "Zv...
iclr_2022_af1eUDdUVz
Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent
Evading adversarial example detection defenses requires finding adversarial examples that must simultaneously (a) be misclassified by the model and (b) be detected as non-adversarial. We find that existing attacks that attempt to satisfy multiple simultaneous constraints often over-optimize against one constraint at th...
Accept (Poster)
The authors propose two new variants of (projected) gradient descent for attacking a classifier and a detector simultaneously. Using these two new variants they are able to break four recent detection methods for adversarial samples. Strengths: - All the reviewers acknowledge that breaking these four defenses is a valua...
train
[ "oMY_MUEz1Wu", "jaQXAe3swaA", "M6KcE8tNu4D", "Pj9ucNPGscV", "_nGWGgeQwer", "pf_tBWcdKSp", "9jrpwNejcjV", "5UZqq3uXJqi", "OLHCvCdEXZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors propose an attack that could break 4 adversarial detection methods published recently. Traditionally, attacks against detection methods have attempted to maximize the loss for both classification and detection simultaneously. However, using a toy example the authors show that this is suboptimal, as it ...
[ 8, 6, -1, -1, -1, -1, -1, 8, 8 ]
[ 3, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_af1eUDdUVz", "iclr_2022_af1eUDdUVz", "Pj9ucNPGscV", "jaQXAe3swaA", "OLHCvCdEXZ", "5UZqq3uXJqi", "oMY_MUEz1Wu", "iclr_2022_af1eUDdUVz", "iclr_2022_af1eUDdUVz" ]
iclr_2022_7_JR7WpwKV1
The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders
Training and using modern neural-network based latent-variable generative models (like Variational Autoencoders) often require simultaneously training a generative direction along with an inferential (encoding) direction, which approximates the posterior distribution over the latent variables. Thus, the question arises...
Accept (Poster)
All three reviewers viewed this paper as marginally above the acceptance threshold (6). Most of the initial concerns of reviewers were around (a) the applicability of the theory to actual practical use cases and networks, and (b) the presentation and framing of the work, and scope of its results. There were fairly det...
train
[ "u6ygnJ-OGF", "dslp_qwU9kz", "HxY8J6IkwB3", "gSkEPcW80iF", "QAKfoH6BNAx", "KhmYsjSSwEb", "zgvRdXTB5uq", "GbQrOVYuXm", "Po5xzm4TDvC", "7Uw3Ixfnj7Q", "cJirvrGg_QR", "p-4TvpHNeGT", "NvQgTgbTUO", "FojssCgFA9V", "WRzRG2QPD9" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for your response, encouraging words, and reconsidering your score! We will tone down the wording regarding Lipton-Tripathi a bit more. ", " I thank the authors for their clarifications. I largely consider my concerns addressed. I remain sceptical about the ultimate significance of the results, but as...
[ -1, -1, 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3 ]
[ "dslp_qwU9kz", "zgvRdXTB5uq", "iclr_2022_7_JR7WpwKV1", "QAKfoH6BNAx", "KhmYsjSSwEb", "FojssCgFA9V", "HxY8J6IkwB3", "7Uw3Ixfnj7Q", "iclr_2022_7_JR7WpwKV1", "NvQgTgbTUO", "HxY8J6IkwB3", "HxY8J6IkwB3", "Po5xzm4TDvC", "WRzRG2QPD9", "iclr_2022_7_JR7WpwKV1" ]
iclr_2022_iC4UHbQ01Mp
Poisoning and Backdooring Contrastive Learning
Multimodal contrastive learning methods like CLIP train on noisy and uncurated training datasets. This is cheaper than labeling datasets manually, and even improves out-of-distribution robustness. We show that this practice makes backdoor and poisoning attacks a significant threat. By poisoning just 0.01% of a dataset ...
Accept (Oral)
The paper studies attacks on the self-supervised training pipeline of multi-modal models, e.g., CLIP and related models. The reviewers agree that the poisoning results are impressive in that they achieve good poisoning success with a fairly small number of samples. The threat model is fairly specific to one (high pro...
train
[ "tkJYwELgiwi", "SvKYWC0nSq", "2J7weGsHNJt", "cs-yvh8pY_w", "jJrG931I7dd", "7V5UBmDOC3Q", "3nyuNlRSI7", "KzHwiHkeAba", "hLqm8_inwO-", "PrfEIWv6q6I" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The authors have done a nice job to address my concerns. I am raising my score to an 8.", "This paper illustrates how easily contrastive learning, particularly on multi-modal data, can be mislead by a small amount of poisoned or backdoor data instances. Contrastive learning is a widely used technique for self-...
[ -1, 8, 8, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, 4, 3, -1, -1, -1, -1, -1, 4, 3 ]
[ "3nyuNlRSI7", "iclr_2022_iC4UHbQ01Mp", "iclr_2022_iC4UHbQ01Mp", "jJrG931I7dd", "2J7weGsHNJt", "hLqm8_inwO-", "SvKYWC0nSq", "PrfEIWv6q6I", "iclr_2022_iC4UHbQ01Mp", "iclr_2022_iC4UHbQ01Mp" ]
iclr_2022_4-D6CZkRXxI
Value Gradient weighted Model-Based Reinforcement Learning
Model-based reinforcement learning (MBRL) is a sample-efficient technique to obtain control policies, yet unavoidable modeling errors often lead to performance deterioration. The model in MBRL is often solely fitted to reconstruct dynamics, state observations in particular, while the impact of model error on the policy is...
Accept (Spotlight)
This paper studies model-based RL in the setting where the model can be misspecified. In this case, MLE of model parameters is not necessarily a good idea because the error in the model estimate compounds when the model is used for planning. The authors solve this problem by optimizing a novel objective, which takes ...
val
[ "X_MXM02Nc_Q", "P9tcdD6k72m", "ab9oWnTp2Wh", "4DvoaSdmQ5F", "4MT6JNu_T64", "H6eF0FHsUt", "hzxlYJekBGa", "Y_llxERdfc8", "ic3HTSOrA-O", "32-vsk5WsNr", "Y7S7a-8-4Lx", "YihOS7ZbZxr", "bw3BYQrmmrt", "FkqGh9_D_a", "7UxNJsFbnbS", "PdOP7o2kmCy", "6SHOjNZmUsQ", "4ATghwHhAZB", "bBy9Pb3KKXV...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " We have updated the draft. Only spelling errors and formatting was fixed.\n\nWe thank the reviewers for their continued discussion of the draft. If there are any open questions after the draft update period, we are happy to clarify.", " Thanks for your suggestions and updating the scores.\n\nRegarding the propo...
[ -1, -1, -1, 8, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ "iclr_2022_4-D6CZkRXxI", "ab9oWnTp2Wh", "4MT6JNu_T64", "iclr_2022_4-D6CZkRXxI", "hzxlYJekBGa", "iclr_2022_4-D6CZkRXxI", "bBy9Pb3KKXV", "FkqGh9_D_a", "iclr_2022_4-D6CZkRXxI", "ic3HTSOrA-O", "7UxNJsFbnbS", "PdOP7o2kmCy", "iclr_2022_4-D6CZkRXxI", "7UxNJsFbnbS", "PdOP7o2kmCy", "ic3HTSOrA-O...
iclr_2022_Kwm8I7dU-l5
Graph-Guided Network for Irregularly Sampled Multivariate Time Series
In many domains, including healthcare, biology, and climate science, time series are irregularly sampled with varying time intervals between successive readouts and different subsets of variables (sensors) observed at different time points. Here, we introduce RAINDROP, a graph neural network that embeds irregularly sam...
Accept (Poster)
The authors introduce a GNN-based method for classifying irregular multivariate time series. They represent the dependencies among sensors using a graph structure and deploy message passing to model the effect of one sensor on another. The approach jointly learns embeddings and the dependency graph. The manuscript ...
train
[ "hDp7yFhuzc3", "zvGQTVzFB2g", "Kkea26yGL6v", "v5y2TRb1CEY", "pB5GgkKjnv5", "NcLFRClr1HI", "Y5VoUDu8KR5", "uPbwVoQG2yd", "ip8T2uUwjwY", "7FrP-7pPMyW", "0IwXPmHWnt", "N6mpQsEl2Zj", "ZPTDwWbBc5", "4RLTslOFvSn", "aLGZsx2QXzt", "PNYUPKNEgay", "BvwEse4LA8O", "fmnrJamM6pB", "U8myMVwI45E...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, \n\nWe sincerely appreciate your valuable comments on our work. In our previous response and the updated manuscript, we have tried our best to address the points raised in your review. Is there any unclear point that we can further clarify?\n\nThank you again!\n", " We sincerely want to thank you...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5 ]
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "wtKl_zJDAI", "NcLFRClr1HI", "NcLFRClr1HI", "ZPTDwWbBc5", "iclr_2022_Kwm8I7dU-l5", "ZPTDwWbBc5", "wtKl_zJDAI", "pB5GgkKjnv5", "pB5GgkKjnv5", "pB5GgkKjnv5", "pB5GgkKjnv5", "pB5GgkKjnv5", "pB5GgkKjnv5", "wtKl_zJDAI", "pB5GgkKjnv5", "pB5GgkKjnv5", "U8myMVwI45E", "iclr_2022_Kwm8I7dU-l5...
iclr_2022_fR-EnKWL_Zb
Quadtree Attention for Vision Transformers
Transformers have been successful in many vision tasks, thanks to their capability of capturing long-range dependency. However, their quadratic computational complexity poses a major obstacle for applying them to vision tasks requiring dense predictions, such as object detection, feature matching, stereo, etc. We intro...
Accept (Poster)
The paper proposes an efficient attention variant inspired by quadtrees, for use in vision transformers. When applied to several vision tasks, the approach leads to better results and/or less compute. The reviews of the paper are all positive, after taking into account the authors' feedback (one reviewer forgot to...
val
[ "sbspk6reuHf", "8IcYvM041KB", "K_V26_3eAQD", "zEmToU9OEUp", "O86nG7MmB7", "3goxbojsI2_", "lF78FQEbSvr", "IZKfzG0Yw0N", "SeOfo00RFEH", "ZzkkuQc8Tuu", "tRGJ7AimnUL4", "nU7wbywuxZD", "AokV0omfc_w", "oZxs_sopbw8", "-OjWvmY92tg", "jb9gCSfwtU", "bxMsXkjhwlk", "kqrttp6oSr7", "MQ-6poyaIT...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_re...
[ " The response addressed my major concerns. I lean to keep my initial rate.", " Thank the authors for their response. After reading other reviews and the corresponding responses, I believe my concerns and most primary concerns of other reviewers have been addressed. Thus I would like to keep my initial rating and...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "MQ-6poyaITd", "ZzkkuQc8Tuu", "O86nG7MmB7", "bxMsXkjhwlk", "3goxbojsI2_", "IZKfzG0Yw0N", "tRGJ7AimnUL4", "AokV0omfc_w", "iclr_2022_fR-EnKWL_Zb", "oZxs_sopbw8", "bxMsXkjhwlk", "MQ-6poyaITd", "jb9gCSfwtU", "kqrttp6oSr7", "iclr_2022_fR-EnKWL_Zb", "iclr_2022_fR-EnKWL_Zb", "iclr_2022_fR-E...
iclr_2022_oWZsQ8o5EA
On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications
This paper follows up on a recent work of Neu et al. (2021) and presents some new information-theoretic upper bounds for the generalization error of machine learning models, such as neural networks, trained with SGD. We apply these bounds to analyzing the generalization behaviour of linear and two-layer ReLU networks. ...
Accept (Poster)
This paper offers a refinement of the information-theoretic characterization of the generalization of models obtained via SGD. This is assessed on some basic neural architectures and inspires the use of new regularizers. Overall, even though the perspective of this paper is not novel, the presented results appear to be...
train
[ "hgHJckjlN61", "a-BfNI-187B", "VXXHJI_U9Lx", "VqSbTx0ys7y", "R8yW3Yl6hph", "pw9d4h-PRjn", "75OEQReHqHS", "2Ht8UzgYnw7", "nJYPiYZYVrw", "sVNPhcB89Vz", "OZxvitv61lI", "VYYop5lt6YQ" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank all reviewers for your insightful comments. We have revised the paper to address these comments and we will discuss these revisions separately in our response to each reviewer. \n\nAdditionally, we wish to report that we discover a bug in our original proof of the theorems. We have made an ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 10 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "iclr_2022_oWZsQ8o5EA", "VXXHJI_U9Lx", "VYYop5lt6YQ", "R8yW3Yl6hph", "OZxvitv61lI", "75OEQReHqHS", "sVNPhcB89Vz", "nJYPiYZYVrw", "iclr_2022_oWZsQ8o5EA", "iclr_2022_oWZsQ8o5EA", "iclr_2022_oWZsQ8o5EA", "iclr_2022_oWZsQ8o5EA" ]
iclr_2022_9Hrka5PA7LW
Representational Continuity for Unsupervised Continual Learning
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. However, recent CL advances are restricted to supervised continual learning (SCL) scenarios. Consequently, they are not scalable to real-world applications where the data distribution is often biased and unan...
Accept (Oral)
Exciting work at the intersection of continual learning and representation learning. The reviewers have all commented that the proposed work addresses a number of issues related to catastrophic forgetting, which is very encouraging. The work also shows that the representation learning with the proposed method is more g...
train
[ "Ej0F1KuT9PK", "QruBRSd8VWr", "v2b-HX38Jk", "zty30bQT92H", "2NdFgFjzbkX", "j9EewvLS7cE", "b4gevCf-KZY", "8n8-Xgegfgb", "zTeQb5EyUs", "IqsGdgUkHf0", "FD1VUIt5URV", "MrVKk844UHn", "I8Se-LsJYbC", "h4hc_3ByBV1" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer QCJy,\n\nWe are glad to hear that we addressed your concerns. Thank you for all your insightful comments and suggestions.\n\nThank you, \nAuthors", " Dear Reviewer synL\n\nWe are happy to hear that our response addressed your concerns. Thank you for your valuable feedback and suggestions.\n\nThan...
[ -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "zty30bQT92H", "2NdFgFjzbkX", "iclr_2022_9Hrka5PA7LW", "FD1VUIt5URV", "IqsGdgUkHf0", "iclr_2022_9Hrka5PA7LW", "8n8-Xgegfgb", "v2b-HX38Jk", "v2b-HX38Jk", "I8Se-LsJYbC", "h4hc_3ByBV1", "j9EewvLS7cE", "iclr_2022_9Hrka5PA7LW", "iclr_2022_9Hrka5PA7LW" ]
iclr_2022_-TSe5o7STVR
Non-Parallel Text Style Transfer with Self-Parallel Supervision
The performance of existing text style transfer models is severely limited by the non-parallel datasets on which the models are trained. In non-parallel datasets, no direct mapping exists between sentences of the source and target style; the style transfer models thus only receive weak supervision of the target sentenc...
Accept (Poster)
The paper proposes a new method for unsupervised text style transfer by assuming there exist some pseudo-parallel sentence pairs in the data. The method thus first mines and constructs a synthetic parallel corpus with certain similarity metrics, and then trains the model via imitation learning. Reviewers have found th...
train
[ "57iuoGJpBr", "x_Bqr0KTXN5", "45KJB7LsgwN", "bsCXyPscnvW", "xDNCFDWcwql", "ueadBslNdZk", "TjxswEYQYj8", "_-3wqwGVzb", "Nw0XKU_yGTx", "1p-S4WdDMI_", "TPCo6a-F695", "PVKYX5ZeQiO", "LV_kx1fRSFL", "VMt0IWuRHb", "tYZsgFT9iW-", "QD_DCu1WebR", "hgYGzcW0ABA", "ZndgxUL4fHx", "2y1uTAcX1CC"...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_re...
[ " Thank you for clarifying your questions and for updating your score! \n\n**About Question 1**\n\nIn our few-shot experiment, we actually try to get at what you are suggesting here. We sample the formal and informal sentences separately, not in pairs. This way we are very likely to be left with formal and informal...
[ -1, -1, 6, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ -1, -1, 5, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "x_Bqr0KTXN5", "tYZsgFT9iW-", "iclr_2022_-TSe5o7STVR", "iclr_2022_-TSe5o7STVR", "ueadBslNdZk", "PVKYX5ZeQiO", "iclr_2022_-TSe5o7STVR", "Nw0XKU_yGTx", "1p-S4WdDMI_", "TPCo6a-F695", "PVKYX5ZeQiO", "TjxswEYQYj8", "-_gBOl2XPd0", "-u3H36OX1TC", "45KJB7LsgwN", "hgYGzcW0ABA", "ZndgxUL4fHx",...
iclr_2022_ZU-zFnTum1N
Bregman Gradient Policy Optimization
In the paper, we design a novel Bregman gradient policy optimization framework for reinforcement learning based on Bregman divergences and momentum techniques. Specifically, we propose a Bregman gradient policy optimization (BGPO) algorithm based on the basic momentum technique and mirror descent iteration. Meanwhile, ...
Accept (Poster)
This paper proposes a policy gradient algorithm based on the Bregman divergence and momentum method. While one reviewer was initially concerned about the technical novelty of the paper given some existing works, after the author's response and paper revision, the reviewers are all convinced and have reached a consensus...
train
[ "mURsuEYl9hf", "BMMcRzttpMQ", "OKelPe0itlB", "UHJUiJ6se8I", "10YSpoI6wJ9", "vOhNSmaxSUA", "8WE9LtvZ5Tf", "RVcAwxV71YR" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper studies the convergence of policy gradient algorithms with constraints. They modify the vanilla policy gradient with Bregman divergence as a regularizer. The authors also propose a new variance reduced policy gradient methods based on the STORM estimator in nonconvex optimization. \n\n\n My concern abou...
[ 6, -1, -1, -1, -1, -1, 8, 8 ]
[ 4, -1, -1, -1, -1, -1, 3, 5 ]
[ "iclr_2022_ZU-zFnTum1N", "8WE9LtvZ5Tf", "RVcAwxV71YR", "10YSpoI6wJ9", "iclr_2022_ZU-zFnTum1N", "mURsuEYl9hf", "iclr_2022_ZU-zFnTum1N", "iclr_2022_ZU-zFnTum1N" ]
iclr_2022_I1hQbx10Kxn
On Bridging Generic and Personalized Federated Learning for Image Classification
Federated learning is promising for its capability to collaboratively train models with multiple clients without accessing their data, but is vulnerable when clients' data distributions diverge from each other. This divergence further leads to a dilemma: "Should we prioritize the learned model's generic performance (for f...
Accept (Spotlight)
This manuscript proposes and analyses an approach to address the centralized and personalized tasks in federated learning jointly. Existing work has tackled this issue by developing separate tasks. Instead, this manuscript proposes a shared architecture that aims to optimize centralized and personalized models. One ob...
train
[ "G2HCSJUGPR1", "va8X0sY88Tq", "AX0w-E-apk", "HAsfaWbF_vL", "YXMrlbfLs3b", "8kj0C2yVzP9", "FGEDKZ48l8I", "SHdhNyoxcMX", "85Y6P6eLKG", "bOXnUtfr_bf", "N0YYC2jA8-R" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "FL trains models both on the server and clients. Such model training becomes problematic when the data distributions on each of the clients diverge from each other. The paper answers the question of prioritizing the generic model at the server or the personalized model on each client. The paper proposes to decoupl...
[ 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, 2 ]
[ "iclr_2022_I1hQbx10Kxn", "iclr_2022_I1hQbx10Kxn", "SHdhNyoxcMX", "bOXnUtfr_bf", "G2HCSJUGPR1", "bOXnUtfr_bf", "G2HCSJUGPR1", "va8X0sY88Tq", "N0YYC2jA8-R", "iclr_2022_I1hQbx10Kxn", "iclr_2022_I1hQbx10Kxn" ]
iclr_2022_gpp7cf0xdfN
Reverse Engineering of Imperceptible Adversarial Image Perturbations
It has been well recognized that neural network based image classifiers are easily fooled by images with tiny perturbations crafted by an adversary. There has been a vast volume of research on generating and defending against such adversarial attacks. However, the following problem is left unexplored: How to reverse-engineer advers...
Accept (Poster)
The manuscript studies an unexplored problem: How to reverse-engineer adversarial perturbations from an adversarial image? This leads to a new adversarial learning paradigm—Reverse Engineering of Deceptions (RED). The authors formalize the RED problem and identify a set of principles crucial to the RED approach design....
train
[ "WHbNJ6VVWKX", "XoUd18i3jCA", "1hnNnNJVId2", "XphAYbt67w-", "P3thXJYMhDy", "8MZP9ovA9Ps", "MWq6mSYwLrp", "FY5YRiP3eZY", "iuxxVB1x6nv", "R-OLY-_WdYW", "NulB92wwRoc", "IJGxoOHqY2j", "ZQ1ABdZkuPv", "X4Ly-ab1yIO", "XS_FHB96yJr", "UlsU6qmd9qK", "x7N29z2kMGW", "5QiWrtN3dC2", "mZQJqwnXl...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author",...
[ " Dear Reviewer T72q,\n\nIt is our great pleasure to learn that our response has addressed your previous concerns. \n\nThank you very much for raising the score. \n\nAuthors,", " Dear Reviewer Ah1k,\n\nThanks again for your careful review and valuable comments. It is our great pleasure to see that your original r...
[ -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, 2, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "XphAYbt67w-", "P3thXJYMhDy", "iclr_2022_gpp7cf0xdfN", "ZQ1ABdZkuPv", "NulB92wwRoc", "1hnNnNJVId2", "iuxxVB1x6nv", "iclr_2022_gpp7cf0xdfN", "tQrw8KnuCka", "yrd5P4oQB9", "yrd5P4oQB9", "FY5YRiP3eZY", "1hnNnNJVId2", "iclr_2022_gpp7cf0xdfN", "iclr_2022_gpp7cf0xdfN", "FY5YRiP3eZY", "FY5YR...
iclr_2022_srtIXtySfT4
Neural Parameter Allocation Search
Training neural networks requires increasing amounts of memory. Parameter sharing can reduce memory and communication costs, but existing methods assume networks have many identical layers and utilize hand-crafted sharing strategies that fail to generalize. We introduce Neural Parameter Allocation Search (NPAS), a nove...
Accept (Poster)
The reviewers were mostly concerned about the practical impact/implications of the proposed methods. There was a long discussion across multiple threads of the benefits of the approach proposed in CNNs vs larger language models, dissecting the benefits in terms of training time (as opposed to memory or FLOPs, which may...
test
[ "1oMc6ne-mRW", "SwTT6MKKL9a", "wZ74JVNhp4", "Lv7hhomqbAV", "N-wof21PWcx", "4cpiyywOSjg", "iAiaVv_v20J", "heafKruoTmt", "0VN4RdT106", "XQzC4BgTwt", "1VLsNa0Lwph", "4QLucwaHih", "__fWXvT_WFV", "HKEUXTom3u2", "ib5ghN4EbX3", "jaePe7hPcJ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your comments. We will incorporate these updates into the final paper, including error@1 for ImageNet.", " I would appreciate the authors' detailed response, which has cleared most of my concerns. Here is still a minor issue: Error@1 is more commonly used for ImageNet evaluation, which is sugge...
[ -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "SwTT6MKKL9a", "wZ74JVNhp4", "Lv7hhomqbAV", "1VLsNa0Lwph", "__fWXvT_WFV", "iclr_2022_srtIXtySfT4", "4QLucwaHih", "ib5ghN4EbX3", "heafKruoTmt", "HKEUXTom3u2", "jaePe7hPcJ", "4cpiyywOSjg", "XQzC4BgTwt", "iclr_2022_srtIXtySfT4", "iclr_2022_srtIXtySfT4", "iclr_2022_srtIXtySfT4" ]
iclr_2022_pjqqxepwoMy
Variational oracle guiding for reinforcement learning
How to make intelligent decisions is a central problem in machine learning and artificial intelligence. Despite recent successes of deep reinforcement learning (RL) in various decision making problems, an important but under-explored aspect is how to leverage oracle observation (the information that is invisible during...
Accept (Poster)
This paper studies the problem of using oracle information that's only available during training in RL. The key contributions are 1) a variational Bayesian approach that models the oracle observation as latent variables; and 2) a Mahjong environment for benchmarking RL with oracle guiding. The novelty of the proposed a...
val
[ "GiryzreL3aF", "sH68-l8YU1X", "9thKi4V687m", "PPJZzrCSk-6", "dITmEKbIpKS", "GV8H9YbhbXN", "jPmoUfkgB-Q", "zpgpReEk_1n", "l2w-lfspd5M", "gS080s76irR", "gbcpcpXMCRf", "JXrd4teCMHa", "Qx3gFIOD5SI", "fszHx0bymg", "txNTF5hgP4Y", "mbYxe63J8Fd", "Rrvt9ueu7Cc", "VWsZNsOScP_", "SXmE_9oXG1...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " Dear Reviewer 8vyx and AC/SAC/PC,\n\n\\\nIn case you haven't see our reply (posted 22:02 Nov. 29 AOE Time) to this comment (posted 19:23 Nov. 29 AOE Time) due to nested replies, we attach [the link to our reply](https://openreview.net/forum?id=pjqqxepwoMy&noteId=sH68-l8YU1X) . Thanks! \n\n\\\nBest regards,\n\nPap...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "9thKi4V687m", "jPmoUfkgB-Q", "GV8H9YbhbXN", "dITmEKbIpKS", "GV8H9YbhbXN", "jPmoUfkgB-Q", "txNTF5hgP4Y", "gS080s76irR", "iclr_2022_pjqqxepwoMy", "Rrvt9ueu7Cc", "fszHx0bymg", "Qx3gFIOD5SI", "VWsZNsOScP_", "r4WeRD6oKfV", "oL3cLvKrsl", "iclr_2022_pjqqxepwoMy", "l2w-lfspd5M", "SXmE_9oX...
iclr_2022_49h_IkpJtaE
How to Train Your MAML to Excel in Few-Shot Classification
Model-agnostic meta-learning (MAML) is arguably one of the most popular meta-learning algorithms nowadays. Nevertheless, its performance on few-shot classification is far behind many recent algorithms dedicated to the problem. In this paper, we point out several key facets of how to train MAML to excel in few-shot clas...
Accept (Poster)
Three of four reviewers rated this paper as an 8. These positive reviewers felt that this paper provided a lot of value through extensive experimentation with MAML in the few-shot setting. It was felt that the detailed analysis of the inner and outer loop of MAML provided a lot of understanding to the reader regarding...
train
[ "zjLKZqlB5VF", "x2JbFcYSyu", "n9LvlrPDpCL", "XhESmg3MoAA", "nLfhfsJFw3U", "WqG39GPMrLm", "ZOMDFWLa2y", "3oP19F0tTu_", "tp0AJTwpdk", "Fidv_lKj89", "NHDRxO3ISyY", "osMuChBAMJj", "LHZETngsEqO", "_K21-0XW_hP" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the quick response. We appreciate your opinions. More importantly, we respectfully think that your opinions have no conflicts with our contributions/claims/rebuttals and the strengths of our paper listed by other reviewers (e.g., by Reviewer mgm6). \n\nFirst, about “classifiers with diff...
[ -1, -1, -1, 3, -1, 8, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, 5, -1, 4, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "x2JbFcYSyu", "n9LvlrPDpCL", "nLfhfsJFw3U", "iclr_2022_49h_IkpJtaE", "XhESmg3MoAA", "iclr_2022_49h_IkpJtaE", "iclr_2022_49h_IkpJtaE", "XhESmg3MoAA", "XhESmg3MoAA", "WqG39GPMrLm", "LHZETngsEqO", "_K21-0XW_hP", "iclr_2022_49h_IkpJtaE", "iclr_2022_49h_IkpJtaE" ]
iclr_2022_B6EIcyp-Rb7
Learning Object-Oriented Dynamics for Planning from Text
The advancement of dynamics models enables model-based planning in complex environments. Existing dynamics models commonly study image-based games with fully observable states. Generalizing these models to Text-Based Games (TBGs), which commonly describe the partially observable states with noisy text observations, is ...
Accept (Poster)
This manuscript makes an interesting observation: there is no reason why planning-based methods like MDPs must be limited to physical or grounded environments. One can plan about more abstract textual domains. It adapts the standard methods from planning to such text domains in a fairly straightforward way. The fact th...
train
[ "IGgzIkBNQ9", "uhegRF41Tno", "CdjvAfzCn9W", "KdZfK_dImz_", "qe3j73hE_v", "uTlUiW53vkx", "AGtb63kRzmK", "oOAOQPa0nk-", "KPrER_pJwCg", "4vBsuQO2TUX", "wuSOzWZbDZ", "0oXDiEi4pZW", "ERO-1L_-xqG", "n5QxrLDFhQp", "d58ILDkrqO6" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your update. I believe this is a good paper with a minor issue with the writing. Your comment has clarified many of my concerns but I am still worried about the overall structures and the missing details which may be put in an appendix. I will keep my score but I will not argue for a rejection.", " D...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 4 ]
[ "KPrER_pJwCg", "iclr_2022_B6EIcyp-Rb7", "AGtb63kRzmK", "CdjvAfzCn9W", "0oXDiEi4pZW", "ERO-1L_-xqG", "oOAOQPa0nk-", "n5QxrLDFhQp", "uTlUiW53vkx", "0oXDiEi4pZW", "d58ILDkrqO6", "iclr_2022_B6EIcyp-Rb7", "iclr_2022_B6EIcyp-Rb7", "iclr_2022_B6EIcyp-Rb7", "iclr_2022_B6EIcyp-Rb7" ]
iclr_2022_t98k9ePQQpn
Optimal Transport for Long-Tailed Recognition with Learnable Cost Matrix
It is attracting attention to the long-tailed recognition problem, a burning issue that has become very popular recently. Distinctive from conventional recognition is that it posits that the allocation of the training set is supremely distorted. Predictably, it will pose challenges to the generalisation behaviour of th...
Accept (Poster)
All reviewers agreed that the idea proposed by the paper is interesting and is well-motivated for handling long-tailed recognition problems. As suggested by the reviewers, it seems important that the limitations the paper be addressed in the final version of the paper.
train
[ "lyTBwuGTIHX", "UheZ49xbNJO", "nuVETXrhc6x", "TYfQu1VoHD3", "1UdSwL83eGf", "SZxsJIhdVBZ", "Q_DC4FpM8XH", "p_koDSIq5tb", "qRI5P465QLQ", "zgNiTDPubJa", "vkBW9BlwOH0", "ssj5P9yYVo9", "WMGhGFc3Es", "1Mpd27ANfW", "P3Je1WbwDnD", "vOGkHTrMVjP" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author" ]
[ " So far, we really appreciate your interest in our paper. Thank you for the great efforts and time you have devoted to our work. You are really one of the best reviewers we have ever met. You raise excellent questions that keep us engaged in an ongoing process of thinking about and coming up with solutions to prob...
[ -1, 6, -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "UheZ49xbNJO", "iclr_2022_t98k9ePQQpn", "UheZ49xbNJO", "iclr_2022_t98k9ePQQpn", "iclr_2022_t98k9ePQQpn", "iclr_2022_t98k9ePQQpn", "zgNiTDPubJa", "qRI5P465QLQ", "vkBW9BlwOH0", "1Mpd27ANfW", "ssj5P9yYVo9", "WMGhGFc3Es", "SZxsJIhdVBZ", "UheZ49xbNJO", "1UdSwL83eGf", "iclr_2022_t98k9ePQQpn"...
iclr_2022_vaRCHVj0uGI
Solving Inverse Problems in Medical Imaging with Score-Based Generative Models
Reconstructing medical images from partial measurements is an important inverse problem in Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Existing solutions based on machine learning typically train a model to directly map measurements to medical images, leveraging a training dataset of paired images an...
Accept (Poster)
This is an interesting paper on improving score-based conditional sampling and its use in solving inverse problems. The current method of sampling from NCSNv2 is somewhat inefficient and the authors propose a different SDE that seems to work better for conditional generation. The paper is applied to Computational ima...
train
[ "0gEpq-KHgsk", "lB0_4UCwEFi", "HfmMHWyqyTr", "XX94ST0dsl7", "PEv8MMMAV8y", "vG_aTBACPz", "nBCCR-URzvN", "QY2Fh6I-ZRX", "eBzjkVGKXQW", "URXyQYa_QAf", "junbz2aWNI", "4xNt5Wn-7So", "Z1UiuwQVfUU", "LxZhXiqAg2I", "kY1P-fSLW45", "r3_nYOSJ03e", "guFK9bm1kD2", "lt5RBaGaA56" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The manuscript applies denoising score matching to linear inverse problems to solve compressed sensing problems in medical imaging, such as angular-undersampled CT and accelerated MRI reconstruction. Throughout the paper, the observed measurements $y$ are considered noise-free, which is reflected by a Dirac measur...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_vaRCHVj0uGI", "HfmMHWyqyTr", "vG_aTBACPz", "QY2Fh6I-ZRX", "nBCCR-URzvN", "0gEpq-KHgsk", "lt5RBaGaA56", "guFK9bm1kD2", "URXyQYa_QAf", "4xNt5Wn-7So", "4xNt5Wn-7So", "Z1UiuwQVfUU", "0gEpq-KHgsk", "lt5RBaGaA56", "guFK9bm1kD2", "iclr_2022_vaRCHVj0uGI", "iclr_2022_vaRCHVj0uGI", ...
iclr_2022_xFOyMwWPkz
Quantitative Performance Assessment of CNN Units via Topological Entropy Calculation
Identifying the status of individual network units is critical for understanding the mechanism of convolutional neural networks (CNNs). However, it is still challenging to reliably give a general indication of unit status, especially for units in different network models. To this end, we propose a novel method for qua...
Accept (Poster)
This paper proposes a new method for understanding the role and importance of individual units in convolutional neural networks. The reviewers were in agreement that the technique is novel and provides potentially valuable insights into neural network behavior. The reviewers were less certain about the utility or signi...
train
[ "hcdnkUm6Spc", "PrAMCzxPKU", "lhfBXLcCJV7", "CoPjnomGNGx", "kgRGWgj9EkN", "smerEBu3zSF", "foGVnjBBl5B", "DRBYAWSdrV", "jjDnsGnwg_E", "zxdthlqY-yL", "nontzR3nP", "h_xth7zKE9n", "L7tN7W_yBhw", "KefKda1HJ1", "kKjvvvJ0YQx", "VbbluhYDSho", "wuJ_g3R_gNO", "ulVoEpr_3vV" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " We thank the reviewer for all the valuable comments. We have carefully studied all the comments and addressed them one by one. This is just a gentle kind discussion about whether the concerns have been addressed. Meanwhile, we are glad to address any further concerns. We thank the reviewer for the valuable time."...
[ -1, 6, 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "ulVoEpr_3vV", "iclr_2022_xFOyMwWPkz", "iclr_2022_xFOyMwWPkz", "smerEBu3zSF", "iclr_2022_xFOyMwWPkz", "L7tN7W_yBhw", "DRBYAWSdrV", "jjDnsGnwg_E", "kKjvvvJ0YQx", "nontzR3nP", "ulVoEpr_3vV", "PrAMCzxPKU", "wuJ_g3R_gNO", "iclr_2022_xFOyMwWPkz", "lhfBXLcCJV7", "h_xth7zKE9n", "kgRGWgj9EkN...
iclr_2022_6MmiS0HUJHR
When Can We Learn General-Sum Markov Games with a Large Number of Players Sample-Efficiently?
Multi-agent reinforcement learning has made substantial empirical progresses in solving games with a large number of players. However, theoretically, the best known sample complexity for finding a Nash equilibrium in general-sum games scales exponentially in the number of players due to the size of the joint action spa...
Accept (Poster)
This paper proposes algorithms for learning (coarse) correlated equilibrium in multi-agent general-sum Markov games, with improved sample complexities that are polynomial in the maximum size of the action sets of different players. This is a very solid work along the line of multi-agent reinforcement learning and there...
train
[ "Tq-VuShg-y_", "aKAbpj6hr3", "aoVJCZJcjP", "wU7eb8pPmgT", "agqB1NTPECC", "a9JTrF_nOjs", "DapLh9gt74g", "AocLBAmOT6k", "asGcuqcxPQM" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers for their valuable feedback. We have uploaded a new revision of our submission to incorporate reviewers’ suggestion and include more explanations of our algorithms and proof techniques. For clarity, all our changes are marked in red.", "The paper proposes algorithm for learning coarse cor...
[ -1, 8, -1, -1, -1, -1, 6, 8, 8 ]
[ -1, 3, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2022_6MmiS0HUJHR", "iclr_2022_6MmiS0HUJHR", "DapLh9gt74g", "AocLBAmOT6k", "asGcuqcxPQM", "aKAbpj6hr3", "iclr_2022_6MmiS0HUJHR", "iclr_2022_6MmiS0HUJHR", "iclr_2022_6MmiS0HUJHR" ]
iclr_2022_rpxJc9j04U
Proof Artifact Co-Training for Theorem Proving with Language Models
Labeled data for imitation learning of theorem proving in large libraries of formalized mathematics is scarce as such libraries require years of concentrated effort by human specialists to be built. This is particularly challenging when applying large Transformer language models to tactic prediction, because the scalin...
Accept (Poster)
This paper has potential impact in the theorem proving community, and demonstrated the possibility of using LMs for theorem proving in Lean, and is good enough to use "in the real world" through an interactive theorem proving tool. The reviewers wish their data/models were public to address some concerns raised by the...
test
[ "jR35uP3NY_i", "VAAgFR_46tr", "XPP3NpxhdwY", "2WGEPpfVfs", "Vb9hv-_JQiZ", "wC6YIok143p", "UU707mY2xv4", "40qJYzDMoA-", "ssJ8jrDMNRq" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to train large Transformers for proving theorems on Lean by using the auxiliary training objective built with proof terms. The main contribution of this paper is (1) building the theorem proving benchmark for Lean (2) building a sequence of auxiliary training tasks using proof terms and verifyi...
[ 8, -1, -1, -1, -1, -1, 8, 5, 5 ]
[ 5, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "iclr_2022_rpxJc9j04U", "UU707mY2xv4", "40qJYzDMoA-", "iclr_2022_rpxJc9j04U", "ssJ8jrDMNRq", "jR35uP3NY_i", "iclr_2022_rpxJc9j04U", "iclr_2022_rpxJc9j04U", "iclr_2022_rpxJc9j04U" ]
iclr_2022_gNp54NxHUPJ
Fast Regression for Structured Inputs
We study the $\ell_p$ regression problem, which requires finding $\mathbf{x}\in\mathbb R^{d}$ that minimizes $\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_p$ for a matrix $\mathbf{A}\in\mathbb R^{n \times d}$ and response vector $\mathbf{b}\in\mathbb R^{n}$. There has been recent interest in developing subsampling methods for t...
Accept (Poster)
Dear Authors, The paper was received nicely and discussed during the rebuttal period. There is consensus among the reviewers that the paper should be accepted: - The new result about query complexity of regression problem that the authors have added. Along with the result on for (noisy) Vandemonde matrix, these mak...
train
[ "-jwYgaciYzZ", "OBLej5CqwHV", "InYm0kPHk96" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies ways of subsampling tall-and-dense p-norm regression involving structured Vandermonde matrices. It shows that in this setting with additional structure, sampling by Lewis weights produces poly(p) * d row sized samples for all values of p. Theoretically, this has two major advantages: it avoids m...
[ 10, 6, 8 ]
[ 4, 3, 3 ]
[ "iclr_2022_gNp54NxHUPJ", "iclr_2022_gNp54NxHUPJ", "iclr_2022_gNp54NxHUPJ" ]
iclr_2022_jXKKDEi5vJt
Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing
In Byzantine robust distributed or federated learning, a central server wants to train a machine learning model over data distributed across multiple workers. However, a fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages. While this problem has received significant attention...
Accept (Spotlight)
This manuscript proposes and analyses a bucketing method for Byzantine-robustness in non-iid federated learning. The manuscript shows how existing Byzantine-robust methods suffer vulnerabilities when the devices are non-iid, and describe a simple coordinated attack that defeats many existing defenses. In response, the ...
train
[ "awG-duTxoM", "W9gbGDg7CL0", "MEQj40nHyuH", "UsC30ig9fL1", "GzPlmuhB103", "7yswZzH2Ob3", "tJwWuYg1fU", "rU3J6HXL2hm", "8t9McEA18bb", "_G4Hb7P3SQY", "wYiJLFM7X1m", "vwmtI44Z0mN", "xjQUUREFFUq", "bwnvpOPtMbe", "kOIh0zI9Yje", "lea4CBosdsm", "xSRHzQU3TYi", "2wBO4qDmNfB", "jR_J3kkUVrO...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "o...
[ " > the case of small $\\delta$ is important in practice \n\n> we never have the confidence that \\delta=0\n\nWhile the latter statement is true, a good way to make sure our algorithm is performing well under small $\\delta$ is to looks at what happens when $\\delta \\rightarrow 0$. Another way of saying this is ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 10, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, 4, 3 ]
[ "W9gbGDg7CL0", "MEQj40nHyuH", "UsC30ig9fL1", "GzPlmuhB103", "7yswZzH2Ob3", "wYiJLFM7X1m", "_Tb3bQHxki6", "_G4Hb7P3SQY", "iclr_2022_jXKKDEi5vJt", "vwmtI44Z0mN", "xjQUUREFFUq", "Zo4l2rIYyQq", "kOIh0zI9Yje", "B64bUODtv4s", "bwnvpOPtMbe", "2wBO4qDmNfB", "iclr_2022_jXKKDEi5vJt", "jR_J3k...
iclr_2022_tyrJsbKAe6
Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage
We study model-based offline Reinforcement Learning with general function approximation without a full coverage assumption on the offline data distribution. We present an algorithm named Constrained Pessimistic Policy Optimization (CPPO) which leverages a general function class and uses a constraint over the models to ...
Accept (Poster)
In this paper, the authors consider the offline RL with only realizability and partial coverage assumption, under which a model-based pessimistic policy optimization algorithm has been proposed and rigorously justified. Moreover, variety of special MDP models, including kernelized nonlinear regulator and linear mixture...
train
[ "9tFEEZD37r", "mlezlXSqct6", "0Q-dbRPdL_g", "aycXaT7A1cF", "cQT_E6NAM1v", "Z0J7M0kQmF", "rbqbSd0gp6-", "aVWE8aA3tn", "e54M2gZCA_g", "cIF_akQAAaS", "ZCnDg-TkMuo", "iUB2DGaMzG4" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your feedback and the pointer to the appendix of the work [Rashidinejad 2021]. \n\n### Main motivation and contribution of this work:\n\nFirst of all, we would like to emphasize our major motivation and contribution: we aim to provably **move beyond tabular and linear models**, and our main contribu...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3 ]
[ "0Q-dbRPdL_g", "9tFEEZD37r", "aVWE8aA3tn", "iclr_2022_tyrJsbKAe6", "iUB2DGaMzG4", "ZCnDg-TkMuo", "cIF_akQAAaS", "e54M2gZCA_g", "iclr_2022_tyrJsbKAe6", "iclr_2022_tyrJsbKAe6", "iclr_2022_tyrJsbKAe6", "iclr_2022_tyrJsbKAe6" ]
iclr_2022_J4iSIR9fhY0
Representation Learning for Online and Offline RL in Low-rank MDPs
This work studies the question of Representation Learning in RL: how can we learn a compact low-dimensional representation such that on top of the representation we can perform RL procedures such as exploration and exploitation, in a sample efficient manner. We focus on the low-rank Markov Decision Processes (MDPs) whe...
Accept (Spotlight)
In this paper, the authors extend the FLAMBE to the infinite-horizon MDP and largely improved the sample complexity of the representation learning in FLAMBE. Meanwhile, the authors also consider the offline representation learning with the same framework. Although there is still some computational issue in MLE for the ...
train
[ "4eyirdPhUbK", "yFL_9sAxaab", "zo35pzZ4Y2u", "G0rtN1tFTvX", "5bzZVEOc8XW", "JF5Fr2Ep4IH", "xYDfaM1Z6b2", "PIBX2X1rDE", "kk6eEynX_fd", "ninrxqaAx3B", "P0PHm80LeGM", "2OqlpoVHfhu", "h2f6bMmgTJN", "U3crHrwp7_a", "256vqzEOQe4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper studies low-rank episodic MDPs when the reward is deterministic and the reward function is known. The paper proposes a method that collects data, uses the data to estimate the low-rank MDP, uses this estimate, confidence bound around it, to come up with the policy to be used for the next episode. \n\nTh...
[ 5, 6, -1, 8, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1 ]
[ 4, 4, -1, 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2022_J4iSIR9fhY0", "iclr_2022_J4iSIR9fhY0", "U3crHrwp7_a", "iclr_2022_J4iSIR9fhY0", "256vqzEOQe4", "xYDfaM1Z6b2", "PIBX2X1rDE", "kk6eEynX_fd", "2OqlpoVHfhu", "iclr_2022_J4iSIR9fhY0", "iclr_2022_J4iSIR9fhY0", "ninrxqaAx3B", "4eyirdPhUbK", "yFL_9sAxaab", "G0rtN1tFTvX" ]
iclr_2022_gJcEM8sxHK
Mapping Language Models to Grounded Conceptual Spaces
A fundamental criticism of text-only language models (LMs) is their lack of grounding---that is, the ability to tie a word for which they have learned a representation, to its actual use in the world. However, despite this limitation, large pre-trained LMs have been shown to have a remarkable grasp of the conceptual st...
Accept (Poster)
The authors explore the hypothesis of whether grounded representations can be leaved from text only. They show that a language model trained with relatively little data can make conceptual domains such as color to a grounded world representation such as RGB coordinates. The paper was positively received by the reviewer...
train
[ "gK8ow0e9OPy", "BIYnbST0X1Q", "lZ-Kg48JY3r", "9VcJmFB-OzH", "tSn_rcjABdA", "sJbxOEM3Hj5", "g2-nCgH5tE1", "KnKL8Y9zHL", "Ua85QJpL2aO", "lTkm19IP_bC", "9xedH0OTh7g" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Just following up on the second point, regarding “we argue that this similarity, given the nature of the input prompts, does imply grounding”: I agree this “similarity” could imply “grounding” in general, but my point was more about whether this shows language grounding over the specific investigated concepts (e....
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "g2-nCgH5tE1", "tSn_rcjABdA", "lTkm19IP_bC", "9xedH0OTh7g", "sJbxOEM3Hj5", "Ua85QJpL2aO", "KnKL8Y9zHL", "iclr_2022_gJcEM8sxHK", "iclr_2022_gJcEM8sxHK", "iclr_2022_gJcEM8sxHK", "iclr_2022_gJcEM8sxHK" ]
iclr_2022_DhP9L8vIyLc
PAC Prediction Sets Under Covariate Shift
An important challenge facing modern machine learning is how to rigorously quantify the uncertainty of model predictions. Conveying uncertainty is especially important when there are changes to the underlying data distribution that might invalidate the predictive model. Yet, most existing uncertainty quantification alg...
Accept (Poster)
All reviewers were clear in their opinion that the paper deserves to be accepted. One reviewer also indicated a wish to increase the score from 6 to 7 but was not able to do that, so it isn't reflected in the final score. The reviewers appreciated the methodological contribution made by the paper.
train
[ "O-UqBycPHLS", "lzr7npg_RVz", "SslpGBu6QN1", "DOLmnNyfpTu", "JP-1p-qJSYd", "p2pcrjIxFS", "a59X5VkfaU-", "9WG8kYSO5dm", "Xdx2ty9ahFI" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Let $X$ be an instance space, $Y$ a set of labels, $D$ some underlying (hidden) distribution over $X \\times Y$. This work studies a new method for converting the output of a probabilistic predictor (e.g. a deep net) to a good prediction set: that is a mapping from $C: X \\to 2^Y$, such that for most samples $(x,y...
[ 6, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ 2, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_DhP9L8vIyLc", "Xdx2ty9ahFI", "Xdx2ty9ahFI", "O-UqBycPHLS", "9WG8kYSO5dm", "a59X5VkfaU-", "iclr_2022_DhP9L8vIyLc", "iclr_2022_DhP9L8vIyLc", "iclr_2022_DhP9L8vIyLc" ]
iclr_2022_NudBMY-tzDr
Natural Language Descriptions of Deep Features
Some neurons in deep networks specialize in recognizing highly specific perceptual, structural, or semantic features of inputs. In computer vision, techniques exist for identifying neurons that respond to individual concept categories like colors, textures, and object classes. But these techniques are limited in scope,...
Accept (Oral)
This paper presents a method to interpret neurons in the vision neural models by generating natural language description that specifies the activation selectivity of a given neuron. The proposed method first identifies an exemplar set of input image regions that corresponds to a neuron, then searches a natural language...
train
[ "z4LbREHZDkt", "W6m3jOWdmeZ", "Ioq3tpgF1ew", "3QAwivloTOJ", "Ujf3ruxzJAc5", "TmZcdupLvPP", "VU3Ii6L18nq", "bWuRN2ZE7t0", "4yVBG5TxJLU", "W9B1yB4vEQ", "hZxI3t92Fp3", "4xkbEpbPDJr" ]
[ "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " thank you. Nothing further from me.", " Thanks again to all the reviewers for the helpful suggestions! We’ve uploaded a new version of the paper that incorporates some of the reviewer comments. These include:\n- **[R3]** changing the title to “Natural Language Descriptions of Deep **Visual** Features” to better...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "VU3Ii6L18nq", "iclr_2022_NudBMY-tzDr", "3QAwivloTOJ", "iclr_2022_NudBMY-tzDr", "TmZcdupLvPP", "4xkbEpbPDJr", "hZxI3t92Fp3", "W9B1yB4vEQ", "iclr_2022_NudBMY-tzDr", "iclr_2022_NudBMY-tzDr", "iclr_2022_NudBMY-tzDr", "iclr_2022_NudBMY-tzDr" ]