paper_id: stringlengths 19-21
paper_title: stringlengths 8-170
paper_abstract: stringlengths 8-5.01k
paper_acceptance: stringclasses, 18 values
meta_review: stringlengths 29-10k
label: stringclasses, 3 values
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
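The flattened schema above describes one record per paper. Below is a minimal sketch of that record shape in Python; the concrete values are illustrative placeholders (not real rows), and the parallel-list invariant it checks is inferred from the rows that follow, not stated by the dataset itself:

```python
# Sketch of one record in this dataset, following the schema above.
# All values here are illustrative placeholders, not real rows.
record = {
    "paper_id": "iclr_2022_XXXXXXXXXX",       # stringlengths 19-21
    "paper_title": "Example Title",            # stringlengths 8-170
    "paper_abstract": "Example abstract ...",  # stringlengths 8-5.01k
    "paper_acceptance": "Accept (Poster)",     # stringclasses, 18 values
    "meta_review": "Example meta-review ...",  # stringlengths 29-10k
    "label": "train",                          # stringclasses: train/val/test
    # The six list fields appear to be parallel: entry i of each list
    # describes the same forum comment, and a rating/confidence of -1
    # marks a non-review comment such as an author reply.
    "review_ids": ["idA", "idB"],
    "review_writers": ["official_reviewer", "author"],
    "review_contents": ["Review text ...", "Author response ..."],
    "review_ratings": [6, -1],
    "review_confidences": [4, -1],
    "review_reply_tos": ["iclr_2022_XXXXXXXXXX", "idA"],
}

def check_record(rec):
    """Verify the parallel-list invariant; return the comment count."""
    list_fields = ["review_ids", "review_writers", "review_contents",
                   "review_ratings", "review_confidences", "review_reply_tos"]
    lengths = {len(rec[f]) for f in list_fields}
    assert len(lengths) == 1, "all six list fields must share one length"
    return lengths.pop()

print(check_record(record))  # number of forum comments in this record
```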
iclr_2022_0RqDp8FCW5Z
W-CTC: a Connectionist Temporal Classification Loss with Wild Cards
Connectionist Temporal Classification (CTC) loss is commonly used in sequence learning applications. For example, in the Automatic Speech Recognition (ASR) task, the training data consists of pairs of audio (input sequence) and text (output label), without temporal alignment information. Standard CTC computes a loss by aggr...
Accept (Poster)
This paper proposes an extension of CTC by considering the wild-card to adjust the label missing issues during training. The authors propose to minimize the loss over all possible sub-segments of the input to automatically align the one that matches the available transcript. It is empirically proved to significantly im...
train
[ "gZQ1GZXyMRy", "BJwKRd8PFoi", "iVlBlZVQZ-", "pwnChOiAw6e", "LWdSYBmfHt9", "kdJ_bhInjX", "cMKf0liGgLe", "k7u5d3PTVIT", "oXJkLggTXqN", "sVW9fLN6NVq", "xQwyGhtiYPwh", "IzVrZn47j_HU", "IAmgFxox-yl", "YnvA1_da7p6", "Cze_CVFR_ZE", "XTsI31bDIgJ", "RCEAHlXVvn_", "gt_InYpcRPT", "rbKgMFBu1...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "...
[ " Thanks again authors for additional experiments (including resolving my concern on wav2vec features) and detailed explanations! \n\nAs I said before proposed idea is mainly based on existing DTW algorithm (SPRING), but important adjustments are done for CTC case. I recommend the paper for acceptance, but still pr...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "pwnChOiAw6e", "iclr_2022_0RqDp8FCW5Z", "iclr_2022_0RqDp8FCW5Z", "LWdSYBmfHt9", "k7u5d3PTVIT", "cMKf0liGgLe", "XTsI31bDIgJ", "oXJkLggTXqN", "sVW9fLN6NVq", "gt_InYpcRPT", "iclr_2022_0RqDp8FCW5Z", "RCEAHlXVvn_", "iclr_2022_0RqDp8FCW5Z", "Cze_CVFR_ZE", "BJwKRd8PFoi", "rbKgMFBu1y9", "icl...
iclr_2022_sPIFuucA3F
Offline Neural Contextual Bandits: Pessimism, Optimization and Generalization
Offline policy learning (OPL) leverages existing data collected a priori for policy optimization without any active exploration. Despite the prevalence and recent interest in this problem, its theoretical and algorithmic foundations in function approximation settings remain under-developed. In this paper, we consider t...
Accept (Poster)
This paper studies off-policy learning of contextual bandits with neural network generalization. The proposed algorithm NeuraLCB acts based on pessimistic estimates of the rewards obtained through lower confidence bounds. NeuraLCB is both analyzed and empirically evaluated. This paper received four borderline reviews,...
train
[ "O3-cvivl9da", "ZlqidjSTfvx", "NdpenzFZdBV", "EOcZqDVIAHT", "ok2FHn0PzJ", "r6aJgN68-Kc", "7EgzdVra1d", "sXaFEopUNrn", "a-VDLBa6EY", "i2oKQl43X9Y", "33ggZFlDr6_", "zKyoUpqdJD", "c4yQfNdZTK", "6T5zsXaX1M", "AuqCGnerAA", "E8kNTP_siXl" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for appreciating our work and increasing your score. ", "This paper proposes a neural network based contextual bandit algorithm in the offline setting where a dataset of contexts and rewards are given by a logging policy. The goal of the proposed algorithm is to learn an optimal policy from the offlin...
[ -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "NdpenzFZdBV", "iclr_2022_sPIFuucA3F", "EOcZqDVIAHT", "6T5zsXaX1M", "33ggZFlDr6_", "sXaFEopUNrn", "iclr_2022_sPIFuucA3F", "c4yQfNdZTK", "i2oKQl43X9Y", "iclr_2022_sPIFuucA3F", "7EgzdVra1d", "E8kNTP_siXl", "AuqCGnerAA", "ZlqidjSTfvx", "iclr_2022_sPIFuucA3F", "iclr_2022_sPIFuucA3F" ]
iclr_2022_KeI9E-gsoB
Learning Curves for Gaussian Process Regression with Power-Law Priors and Targets
We characterize the power-law asymptotics of learning curves for Gaussian process regression (GPR) under the assumption that the eigenspectrum of the prior and the eigenexpansion coefficients of the target function follow a power law. Under similar assumptions, we leverage the equivalence between GPR and kernel ridge r...
Accept (Poster)
The title of the paper nicely summarizes the main goal of the paper and the abstract does the same for the achieved results. For this reason I abstain from providing another summary. The initial reviews were somewhat mixed but during the discussion phase, a lot of questions have been resolved so that actually three r...
train
[ "A_9Cdzmhvg3", "OXEFM9ifQsy", "2h4gNldtlIF", "S-Ycx26Q2v5", "IyWgKZaM_Hk", "J4coNc_3EUc", "YyFyLQV2Q-F", "DQ50GcKAIxt", "wpZ0vBhC5GS", "SBrr0iUko76", "xOOHN1HHpB_", "y03eRUdEVNh", "P430kGNTnmw", "n_hp1ctJ6L5", "9PMrC5NyXVx", "aRkEzpBpb-T", "qxzKeEyPSJK", "PoEFImTUI8u", "Q0NJHlGuW...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " Thank you for comparing our bounds with those in Refs [1] and [2]:\n\n[1] Steinwart, Hush, and Scovel (2009), Optimal Rates for Regularized Least Squares Regression.\n\n[2] Fischer and Steinwart (2020), Sobolev Norm Learning Rates for Regularized Least-Squares Algorithms.\n\nIn the following we offer a comparison...
[ -1, 6, -1, -1, 6, 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 2, -1, -1, 4, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "YyFyLQV2Q-F", "iclr_2022_KeI9E-gsoB", "9PMrC5NyXVx", "DQ50GcKAIxt", "iclr_2022_KeI9E-gsoB", "iclr_2022_KeI9E-gsoB", "iclr_2022_KeI9E-gsoB", "n_hp1ctJ6L5", "iclr_2022_KeI9E-gsoB", "xOOHN1HHpB_", "IJ7DwYL555Y", "9PMrC5NyXVx", "9PMrC5NyXVx", "qxzKeEyPSJK", "aRkEzpBpb-T", "GD7kT1pbR2r", ...
iclr_2022_HMJdXzbWKH
Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs
Q-learning is a popular Reinforcement Learning (RL) algorithm which is widely used in practice with function approximation (Mnih et al., 2015). In contrast, existing theoretical results are pessimistic about Q-learning. For example, (Baird, 1995) shows that Q-learning does not converge even with linear function approxi...
Accept (Poster)
The paper analyzes a variant of the Q-Learning algorithm with two modifications: Online Target Learning (OTL), and Reverse Experience Replay (REP). OTL is essentially the same as using the target network. REP is a new modification of ER, which instead of randomly selecting samples from the buffer, replays them in the r...
val
[ "nMCp6JzRq9c", "vqCNPdZNspa", "-dbj7AOrk5", "yNnpK-BlSC1", "JSibYUAHVfA", "V6BwM9A7hAW", "sOX1JH9pBO2", "PDaBr6t082_", "KX-0m3fcDox", "C6NnRyrRwie", "my0hEvJdhr0", "B6K_PllyOnd", "BHXjb7jkHsx", "eGJvgXhoS2D", "hEBwE6CX0As", "OKcZNE6Q7DG", "iXnjBCjuP5y", "SWwZFAfM1cL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_rev...
[ " The authors have addressed my concerns and I retain my score.", " Thanks for the clarification. The authors's responses have clearly addressed all my concerns. Therefore, I raise my score from 6 to 8.", "This paper studies the sample efficiency of the Q-learning algorithm in both tabular and linear setting. ...
[ -1, -1, 8, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5 ]
[ -1, -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "OKcZNE6Q7DG", "PDaBr6t082_", "iclr_2022_HMJdXzbWKH", "sOX1JH9pBO2", "C6NnRyrRwie", "iclr_2022_HMJdXzbWKH", "eGJvgXhoS2D", "KX-0m3fcDox", "BHXjb7jkHsx", "hEBwE6CX0As", "iclr_2022_HMJdXzbWKH", "iclr_2022_HMJdXzbWKH", "-dbj7AOrk5", "V6BwM9A7hAW", "SWwZFAfM1cL", "iXnjBCjuP5y", "iclr_202...
iclr_2022_tFgdrQbbaa
Learning curves for continual learning in neural networks: Self-knowledge transfer and forgetting
Sequential training from task to task is becoming one of the major objects in deep learning applications such as continual learning and transfer learning. Nevertheless, it remains unclear under what conditions the trained model's performance improves or deteriorates. To deepen our understanding of sequential training, ...
Accept (Poster)
After carefully reading the reviews and rebuttal, I believe this work is of sufficient quality for acceptance. Understanding continual learning from a theoretical stand point is a very important topic. I find that one of the main issue raised by reviewers was about the exact meaning of Continual learning, and whether w...
val
[ "zYZ3UOJFMFO", "9SLPn2yLFbB", "ohJqiK5wjYx", "FMCijX_LLNI", "ObIUWmAg7oM", "D_W56PNGD3G", "4_JtJpLCIWu", "NGMiA-N9jW9", "UNWCGFUnMA0", "gBZJEZuAYd", "l0z4k051rLG", "c6pGJsE4jmm", "6G0KxSFlRwW" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the answer to my review. After the clarification made by the authors, I feel the paper deserves a bit more than a 6 recommendation but I am still skeptical about if the theoretical setting proposed fits an actual continual learning problem. Therefore, I do not upgrade my recommendation to 8 and keep it...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 2 ]
[ "4_JtJpLCIWu", "D_W56PNGD3G", "FMCijX_LLNI", "UNWCGFUnMA0", "iclr_2022_tFgdrQbbaa", "6G0KxSFlRwW", "c6pGJsE4jmm", "l0z4k051rLG", "gBZJEZuAYd", "iclr_2022_tFgdrQbbaa", "iclr_2022_tFgdrQbbaa", "iclr_2022_tFgdrQbbaa", "iclr_2022_tFgdrQbbaa" ]
iclr_2022_kezNJydWvE
Clean Images are Hard to Reblur: Exploiting the Ill-Posed Inverse Task for Dynamic Scene Deblurring
The goal of dynamic scene deblurring is to remove the motion blur in a given image. Typical learning-based approaches implement their solutions by minimizing the L1 or L2 distance between the output and the reference sharp image. Recent attempts adopt visual recognition features in training to improve the perceptual qu...
Accept (Poster)
The paper introduces an idea that was found interesting by all reviewers (including Gxxe who recommends a marginal reject). A majority of the reviewers also point out a few weaknesses of the paper, notably in terms of clarity of several statements that were found to be hand-wavy (see the reviews of Gxxe and oSPE for mo...
test
[ "oRrmV75cYus", "Rezub_FSUIe", "1cDjStl8DTy", "_Uj4hsq-OB", "rC6dU1ABiu", "Ivj-h3Is7CM", "w1Ly_o18yS-", "OEvlgAy_8Ie", "BSfFp-I-79X", "4HmAhN7fbtS", "cFyM46PyPL" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " All concerns in the review are well-address in the revised version, including adjusting figures and proofreading!", "The paper proposes a novel approach to using deep networks for image deblurring. Since training with simply reconstruction losses usually leads to oversmoothed results, recent works have looked a...
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "OEvlgAy_8Ie", "iclr_2022_kezNJydWvE", "_Uj4hsq-OB", "rC6dU1ABiu", "BSfFp-I-79X", "w1Ly_o18yS-", "Rezub_FSUIe", "4HmAhN7fbtS", "cFyM46PyPL", "iclr_2022_kezNJydWvE", "iclr_2022_kezNJydWvE" ]
iclr_2022_DhzIU48OcZh
P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts
Recent work (e.g. LAMA (Petroni et al., 2019)) has found that the quality of the factual information extracted from Large Language Models (LLMs) depends on the prompts used to query them. This inconsistency is problematic because different users will query LLMs for the same information using different wording, but shou...
Accept (Poster)
This paper introduces a prompting technique for eliciting factual knowledge from frozen pretained transformer LMs. The key idea is to modify the embeddings produced by the embedding layer before they are passed to the first attention layer and the paper investigates several different design choices. The Reviewers all a...
val
[ "xuP5_Iw4pyF", "yhaZzqxYq1T", "X7mqiYpAnQ", "8U0_yKXdmpJ", "RCI5akjRk5Y", "SLqgFY_4gHu", "GKPf17Q5tDP", "CyVFgN-MY49", "bs2tilQzUCn", "fRdZlM9Czf3", "HYEDmZhn7jx", "fFQOOPseYrE", "sjeckEVrfE6", "f4vcjxlAmK", "3pTVn6qzprI" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I updated the score in my original review.", "This paper explores methods for improving the consistency of prompt-based factual probing of pre-trained language models. The main contributions are (1) several methods for mapping natural language prompts to continuous prompts that empirically improve accuracy and ...
[ -1, 5, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8, 6, 8 ]
[ -1, 4, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "X7mqiYpAnQ", "iclr_2022_DhzIU48OcZh", "8U0_yKXdmpJ", "bs2tilQzUCn", "GKPf17Q5tDP", "iclr_2022_DhzIU48OcZh", "SLqgFY_4gHu", "3pTVn6qzprI", "yhaZzqxYq1T", "f4vcjxlAmK", "sjeckEVrfE6", "iclr_2022_DhzIU48OcZh", "iclr_2022_DhzIU48OcZh", "iclr_2022_DhzIU48OcZh", "iclr_2022_DhzIU48OcZh" ]
iclr_2022_Jjcv9MTqhcq
Rethinking Supervised Pre-Training for Better Downstream Transferring
The pretrain-finetune paradigm has shown outstanding performance on many applications of deep learning, where a model is pre-trained on an upstream large dataset (e.g. ImageNet), and is then fine-tuned to different downstream tasks. Though for most cases, the pre-training stage is conducted based on supervised methods,...
Accept (Poster)
The paper introduces a simple yet effective technique for supervised pre-training based on kNN lookup from a MoCo memory queue . Initially, the reviewers raised concerns about limited novelty with respect to neighborhood component analysis, baseline results lower than the original papers, and several other questions su...
test
[ "57lnY0PbJJ", "yF2f1vx5Y0U", "5E-QNxbr-xe", "lvk1fSlw44c", "gztdLurClf", "A5v_i-k-q1d", "4X5xMrdIr9E", "tJlsIUfUThP", "ezY6rQtXNrK", "ZQLB4F5amuY", "5UnJQ1xphs", "5NEcGK5KTKr", "9C83c2oQ64x", "in-jkAo2zh9", "oqXA4bBkPF", "qir59DXk_K2", "7nSLJEfgNWX", "M5TEHnMeEYG", "LfTjOTj55LH",...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Dear Chairs and Reviewers, \n\nHope this message finds your well. \n\nWith the closing of the discussion period, we present a brief summary of our discussion with the reviewers as an overview for reference.\n\nFirst of all, we thank all the reviewers for their careful reading and valuable comments. We are encou...
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_Jjcv9MTqhcq", "lvk1fSlw44c", "iclr_2022_Jjcv9MTqhcq", "5NEcGK5KTKr", "oqXA4bBkPF", "4X5xMrdIr9E", "tJlsIUfUThP", "ezY6rQtXNrK", "qir59DXk_K2", "eBjaJeCYUN2", "5E-QNxbr-xe", "5E-QNxbr-xe", "iclr_2022_Jjcv9MTqhcq", "eBjaJeCYUN2", "5E-QNxbr-xe", "khJikBJUv5h", "9C83c2oQ64x", ...
iclr_2022_7B3IJMM1k_M
Optimal ANN-SNN Conversion for High-accuracy and Ultra-low-latency Spiking Neural Networks
Spiking Neural Networks (SNNs) have gained great attraction due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware. As the most effective method to get deep SNNs, ANN-SNN conversion has achieved comparable performance as ANNs on large-scale datasets. Despite this, it re...
Accept (Poster)
The authors present an improved method to convert ANNs to spiking neural networks (SNNs). First, a network with quantized activations is constructed, then it is converted. They analyze the conversion errors theoretically. In addition to previously considered errors [Li et al. 2021] they also consider an error they call...
train
[ "aA7xZ6EN84v", "WoEwFWXb_Kq", "eXyifFJWHte", "S7hkSJWRtn3", "vl36MFdfiau", "L9gjFv2oMAP", "ilqDDa9lesm", "eG58v2Q2cQ1", "aWh0mrsmhN0", "A2NvcZJL6J", "MTRdduAtYIY", "qPkYZUW7lvQ", "ejce0BP-K90", "U5rDgIBf_Qz", "BCQQy0dD805", "sXrHp7B8uVy", "ZB5UxgUBGNy", "gsUd019Lfr_" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " The effort made by the authors in addressing the reviewers' comments is appreciated. The paper can be accepted.", " Thanks for pointing it out! We have revised the references in the responses and will revise them in the paper.", " ### 3. I am not 100% sure about the way unevenness error has been portrayed her...
[ -1, -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "ejce0BP-K90", "L9gjFv2oMAP", "U5rDgIBf_Qz", "eXyifFJWHte", "iclr_2022_7B3IJMM1k_M", "S7hkSJWRtn3", "qPkYZUW7lvQ", "iclr_2022_7B3IJMM1k_M", "A2NvcZJL6J", "MTRdduAtYIY", "BCQQy0dD805", "ZB5UxgUBGNy", "gsUd019Lfr_", "vl36MFdfiau", "eG58v2Q2cQ1", "iclr_2022_7B3IJMM1k_M", "iclr_2022_7B3I...
iclr_2022_O50443AsCP
TAPEX: Table Pre-training via Learning a Neural SQL Executor
Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we propose TAPEX to show that ta...
Accept (Poster)
Reviewers are positive overall -- there is a general consensus towards acceptance. Reviewers viewed the simplicity, novelty, and effectiveness of the proposed pre-training approach as strengths. Further, reviewers praised the draft as very clearly written, and viewed experimental ablations as relatively in-depth -- e.g. t...
val
[ "6zbJMCw-lzl", "KgdXTnU9THV", "icp2gZpQ5Zv", "F6g-3fTt9i1", "RpCVRC_E7Wy", "qF4xM9zoaZE", "hb9_kESQjlt", "JhRJ2g5Yodj", "PtBTAkSNZHy", "1pI87OhNxZQ", "ChDQ6_Xproq", "u0HoQ85dKg-", "23S7dVtN06Q", "wVhxCIbo3O", "OP4xhX180u6", "5OAeMfwFd8Q", "yPzwBHzR2D", "10beN9umx7P", "ElUKsX_jFGr...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_r...
[ " First of all, we would like to thank you for your quick response and active discussion, especially as we are getting closer to the end of the discussion stage.\n\nAs you stated, the line of table pre-training [1,2,3,4,5,6], including our paper, inherit the idea of [7] through further pre-training language models ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, -1, -1, -1, -1, -1, 5, 4 ]
[ "icp2gZpQ5Zv", "RpCVRC_E7Wy", "KgdXTnU9THV", "u0HoQ85dKg-", "1pI87OhNxZQ", "AFPqh50jGWv", "PtBTAkSNZHy", "wVhxCIbo3O", "10beN9umx7P", "u0HoQ85dKg-", "u0HoQ85dKg-", "iclr_2022_O50443AsCP", "iclr_2022_O50443AsCP", "5OAeMfwFd8Q", "iclr_2022_O50443AsCP", "23S7dVtN06Q", "AFPqh50jGWv", "...
iclr_2022_aYAA-XHKyk
Rethinking Class-Prior Estimation for Positive-Unlabeled Learning
Given only positive (P) and unlabeled (U) data, PU learning can train a binary classifier without any negative data. It has two building blocks: PU class-prior estimation (CPE) and PU classification; the latter has been well studied while the former has received less attention. Hitherto, the distributional-assumption-f...
Accept (Poster)
This paper received a majority vote for acceptance from reviewers and me. I have read all the materials of this paper including manuscript, appendix, comments and response. Based on collected information from all reviewers and my personal judgement, I can make the recommendation on this paper, *acceptance*. Here are th...
val
[ "eMTIq-3xKOf", "Si9pZwSYllD", "y7b6vOwF86v", "5xqz8xhvli", "0uWPOpxZEqJ", "EzSon3kzh2", "aSgVPrsgUeF", "rBI3qbPx4Nf", "bjtatTqF4LR", "cEFOwhFVcg", "rwn3M0pjTS", "w202zp0LeX5", "WeYTEzG93xE" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the authors' feedback. I decide to keep my score after reading other reviewers' comments and the authors' response. ", " Thanks for your responses.\n\nFigure 2 (b) does show some evidences about the selection of $p$ on synthetic data, while I think it would be more convincing if the hyper-\nparame...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 5, 8, 6 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "bjtatTqF4LR", "0uWPOpxZEqJ", "rwn3M0pjTS", "iclr_2022_aYAA-XHKyk", "WeYTEzG93xE", "5xqz8xhvli", "rBI3qbPx4Nf", "EzSon3kzh2", "w202zp0LeX5", "rwn3M0pjTS", "iclr_2022_aYAA-XHKyk", "iclr_2022_aYAA-XHKyk", "iclr_2022_aYAA-XHKyk" ]
iclr_2022_7DI6op61AY
Neural Markov Controlled SDE: Stochastic Optimization for Continuous-Time Data
We propose a novel probabilistic framework for modeling stochastic dynamics with the rigorous use of stochastic optimal control theory. The proposed model called the neural Markov controlled stochastic differential equation (CSDE) overcomes the fundamental and structural limitations of conventional dynamical models by ...
Accept (Poster)
The authors propose to combine ideas from SDEs and time series modeling with stochastic optimal control to present a framework for modeling continuous-time stochastic dynamics. The reviewers are in agreement that there are several good ideas presented here and that the interface of the perspectives the authors combine ...
test
[ "Ih3X6pGGOqs", "evd0J-rpvTp", "ZlqLrvB9onL", "z4XGYvxD3WQ", "sTYYYczf2Hz", "YlH9ucGND2", "-3XRogSlS1W", "P5cExTBpkF9", "TcKKxeyr4SC", "onzWaP0rqXV", "nx-pcrVcEuM", "afPppetHc3", "cic0gAjAy8" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " $\\textbf{Q3. }$ Page 4, Definition 1. For each observation y, the dependent forward loss is calculated till time t as shown in the integration. So observations sampled at the time far away from t usually will incur larger losses due to drift and diffusion effects accumulated over longer time span. So observation...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 4 ]
[ "nx-pcrVcEuM", "nx-pcrVcEuM", "afPppetHc3", "nx-pcrVcEuM", "YlH9ucGND2", "cic0gAjAy8", "cic0gAjAy8", "cic0gAjAy8", "onzWaP0rqXV", "iclr_2022_7DI6op61AY", "iclr_2022_7DI6op61AY", "iclr_2022_7DI6op61AY", "iclr_2022_7DI6op61AY" ]
iclr_2022_ef0nInZHKIC
Symbolic Learning to Optimize: Towards Interpretability and Scalability
Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks. Existing L2O models parameterize optimization rules by neural networks, and learn those numerical rules via meta-training. However, they face two common pitfalls: (1) sc...
Accept (Poster)
The paper proposes a method for learning to optimize (L2O) by distilling a numerical L2O optimization rule into a simple mathematical rule, mathematical equation, using special-purpose student learning algorithm. The motivation for using a symbolic distillation is to provide interpretability and scalability of the tra...
train
[ "-CHF_F9RPpS", "8tu22nH9RDC", "fdD1Uyo2Dp", "KAiD8qyWnJI", "OcKId3pwES3", "XHAWkQRZ1jK", "ElJDZUF9JYM", "ySk2IcecjQ_", "9Bp4WYFMwoa", "_QmtgGIYd0L", "TlDtrZsuY06", "snIcZVzRAX", "wReHm6fp2a8", "ggtEf0mfFkJ", "F4iKSkkufi", "9qY6lihbL-O", "p9C0rXFglqL", "fRbmDr7flkj", "PJEZ2ij724d"...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", ...
[ " Dear reviewer FzGU:\n\nWe are so glad that our response was able to turn your assessment into a more positive one. Thanks so much!\n\nRegards,\n\nAuthors of paper 944\n", " Dear reviewer ALsy:\n\nThank you for your response and comments on improving the paper! W.r.t. the new concerns you raised, we have address...
[ -1, -1, 5, -1, 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "XHAWkQRZ1jK", "KAiD8qyWnJI", "iclr_2022_ef0nInZHKIC", "ySk2IcecjQ_", "iclr_2022_ef0nInZHKIC", "ElJDZUF9JYM", "9Bp4WYFMwoa", "Ox3ENBvvSMC", "wReHm6fp2a8", "snIcZVzRAX", "iclr_2022_ef0nInZHKIC", "ggtEf0mfFkJ", "6qDnWA9-DQA", "F4iKSkkufi", "9qY6lihbL-O", "35pj3KoMu8g", "ksaG3BtJFHU", ...
iclr_2022_XGzk5OKWFFc
CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain. Most existing UDA methods focus on learning domain-invariant feature representation, either from the domain level or category level, using convolution neural networks (CNNs)-based...
Accept (Poster)
This paper makes use of cross-attention transformers to extract invariant features for unsupervised domain adaptation. Combined with pseudo-label approaches, the proposed method achieves state-of-the-art performance, possibly because the transformer features are more robust to the noise. In addition, a two-way centre-a...
train
[ "0tgcf6sYAb", "n3FD_81zGV", "6MW0t-zgMrC", "AOuwhz-RG9u", "1HrVyeY06lW", "sZVJpf_QpSu", "ZmP2xddnbW", "iSE8TVnMjGo", "Mgjg0nsCnVg" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ " Thanks a lot for your quick responses. For MRKLD+LRENT, I think you can tune the thresholds to control the selected pseudo-label portion which could increase the recall for each class. But I agree it's hard tune the hyper-parameters in such a short time. Overall I think this is a good paper and will have impacts ...
[ -1, -1, 6, -1, 8, -1, -1, -1, 8 ]
[ -1, -1, 3, -1, 4, -1, -1, -1, 4 ]
[ "n3FD_81zGV", "AOuwhz-RG9u", "iclr_2022_XGzk5OKWFFc", "iSE8TVnMjGo", "iclr_2022_XGzk5OKWFFc", "6MW0t-zgMrC", "Mgjg0nsCnVg", "1HrVyeY06lW", "iclr_2022_XGzk5OKWFFc" ]
iclr_2022_gVRhIEajG1k
Rethinking Adversarial Transferability from a Data Distribution Perspective
Adversarial transferability enables attackers to generate adversarial examples from the source model to attack the target model, which has raised security concerns about the deployment of DNNs in practice. In this paper, we rethink adversarial transferability from a data distribution perspective and further enhance tra...
Accept (Poster)
This work studied an important issue, i.e., adversarial transferability, in adversarial examples. It provides a novel perspective that samples in the low-density region of the ground truth distribution where models are not well trained have stronger transferability across different models. Based on that, it proposed a...
train
[ "wp2rS0zBJqO", "d6SxTfJdF8D", "D6iMPwZG_Z2", "QFPMNXlKDV", "_-HEEJn0c7E", "d_2WhuPPoxP", "FnLeOLzUafs", "eV0t6_wZG-", "JsqVIHmjThk", "RlSmQS49oQ5", "SHD8dTBk4xr" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents a new adversarial attack method which could generate adversarial examples with higher transferability. The proposed method is based on the observation that low-density region of the training data is not well trained. To utilize this, the authors try to align the adversarial direction with the d...
[ 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_gVRhIEajG1k", "iclr_2022_gVRhIEajG1k", "iclr_2022_gVRhIEajG1k", "_-HEEJn0c7E", "FnLeOLzUafs", "D6iMPwZG_Z2", "D6iMPwZG_Z2", "wp2rS0zBJqO", "SHD8dTBk4xr", "SHD8dTBk4xr", "iclr_2022_gVRhIEajG1k" ]
iclr_2022_Harn4_EZBw
Generative Pseudo-Inverse Memory
We propose Generative Pseudo-Inverse Memory (GPM), a class of deep generative memory models that are fast to write in and read out. Memory operations are recast as seeking robust solutions of linear systems, which naturally lead to the use of matrix pseudo-inverses. The pseudo-inverses are iteratively approximated, wit...
Accept (Poster)
The authors present a new memory-augmented neural network that is related to the Kanerva machine of Wu et al. The reviewers considered the ideas in the paper novel and interesting, but were concerned about presentation issues and literature review. The authors have improved both... however- authors: please even unde...
train
[ "xSa6CgT4uxW", "pIAKsuiZNjD", "OqnDOtzQ_Zw", "XJT0BxUWBQ9", "iGdtU5gsReq", "6xxbW5PK3I_", "psQXzrwC4jb" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for their careful and detailed reviews. In this comment, we would like to highlight what has been improved in the revision. \n\n• We have added the “Related work” section in which we give an overview of memory models, generative models, usages of the pseudo-inverses in neural networks, and ...
[ -1, -1, -1, -1, 5, 8, 5 ]
[ -1, -1, -1, -1, 4, 4, 2 ]
[ "iclr_2022_Harn4_EZBw", "psQXzrwC4jb", "6xxbW5PK3I_", "iGdtU5gsReq", "iclr_2022_Harn4_EZBw", "iclr_2022_Harn4_EZBw", "iclr_2022_Harn4_EZBw" ]
iclr_2022_yjMQuLLcGWK
FP-DETR: Detection Transformer Advanced by Fully Pre-training
Large-scale pre-training has proven to be effective for visual representation learning on downstream tasks, especially for improving robustness and generalization. However, the recently developed detection transformers only employ pre-training on its backbone while leaving the key component, i.e., a 12-layer transforme...
Accept (Poster)
Summary: Authors present an approach for transformer based object detection that “fully pretrains” the encoder structure of the transformer, and drops the pretrained convolutional backbone used in other works. Pros: - Eliminates need of extra visual backbone - Fewer parameters than other works - Achieves competitive ...
train
[ "rPQvQuwi2fX", "xmZHyS-rx8N", "ykBWvlz2X2d", "B73Gz2Oq0FY", "AVGiA9ZaYg3", "DdtqAf8nH2v", "6DuZXK1uefR", "JzxfFz6Uhy8", "vnmUFeatLT4", "VfawEJ4rwy", "_Mo_aTMzXgk", "kRK6ufQYyI" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper argues that recently developed detection transformers didn't employ pre-training on transformer layers and not benefit from pretraining. Hence, it proposes FP-DETR to fully pretrain transformer on image classification and then finetune it for object detection by changing the task adapter. Compared to pr...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ 5, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_yjMQuLLcGWK", "_Mo_aTMzXgk", "JzxfFz6Uhy8", "VfawEJ4rwy", "vnmUFeatLT4", "rPQvQuwi2fX", "DdtqAf8nH2v", "kRK6ufQYyI", "6DuZXK1uefR", "iclr_2022_yjMQuLLcGWK", "iclr_2022_yjMQuLLcGWK", "iclr_2022_yjMQuLLcGWK" ]
iclr_2022_f2OYVDyfIB
Scale Efficiently: Insights from Pretraining and Finetuning Transformers
There remain many open questions pertaining to the scaling behaviour of Transformer architectures. These scaling decisions and findings can be critical, as training runs often come with an associated computational cost which have both financial and/or environmental impact. The goal of this paper is to present scaling i...
Accept (Poster)
The results reported in this paper and the model checkpoints released are of interest and broad utility to the NLP community. While one reviewer was somewhat negative, most reviewers were in favor of acceptance of this paper, which expands the results from [1] to downstream tasks. The AC therefor...
train
[ "ZsBuezHuMaK", "hmejYHgf5pZ", "GP-iHjlJzD", "p6xSv_oyVPn", "q2Ft3UTzkjR", "Py8Ymffpc37", "TO5iE07aP3e", "tuDz2GdCk5", "cfUQKi-0Ech", "w_ZmllU_4N" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read all comments and the author's response. My main concerns have been addressed and I keep my original score.", " Thanks to all reviewers for the hard work and effort spent reviewing our manuscript. We are greatly appreciative of the insightful comments and feedback.\n\nWe have updated the manuscript w...
[ -1, -1, -1, -1, -1, -1, 8, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "q2Ft3UTzkjR", "iclr_2022_f2OYVDyfIB", "tuDz2GdCk5", "w_ZmllU_4N", "cfUQKi-0Ech", "TO5iE07aP3e", "iclr_2022_f2OYVDyfIB", "iclr_2022_f2OYVDyfIB", "iclr_2022_f2OYVDyfIB", "iclr_2022_f2OYVDyfIB" ]
iclr_2022_T__V3uLix7V
RegionViT: Regional-to-Local Attention for Vision Transformers
Vision transformer (ViT) has recently shown its strong capability in achieving comparable results to convolutional neural networks (CNNs) on image classification. However, vanilla ViT simply inherits the same architecture from natural language processing, which is often not optimized for vision application...
Accept (Poster)
The paper proposes a new architecture called Regional-to-Local Attention for vision transformers. The idea is easy to understand: the model adopts a pyramid structure and adds regional-to-local attention instead of using global attention. The architecture is well-motivated and the paper is generally well wr...
test
[ "eGk_GUWfJQh", "utaHl0Ewq4M", "RvkJADEn2XZ", "w7f2n8H7pTF", "fi9Sk9GvZHI", "0j3BWfKjGud", "aitVEXGyaTx", "5EigUf8MeZ9", "yH5oxAqg97a", "qi-Qnn1ufM4", "qRK3H4iR-ry", "med5hYQ6kb5" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing many raised concerns. \n\nHowever, the answers to the questions (e) and (f) are not convincing. I have gone through the other reviewers comments and the authors response to those. The updated version of the paper is bit better but \"not significantly\" impacted on my decision about this p...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "fi9Sk9GvZHI", "fi9Sk9GvZHI", "qRK3H4iR-ry", "yH5oxAqg97a", "iclr_2022_T__V3uLix7V", "w7f2n8H7pTF", "med5hYQ6kb5", "qi-Qnn1ufM4", "iclr_2022_T__V3uLix7V", "iclr_2022_T__V3uLix7V", "iclr_2022_T__V3uLix7V", "iclr_2022_T__V3uLix7V" ]
iclr_2022_8hWs60AZcWk
Discrete Representations Strengthen Vision Transformer Robustness
Vision Transformer (ViT) is emerging as the state-of-the-art architecture for image recognition. While recent studies suggest that ViTs are more robust than their convolutional counterparts, our experiments find that ViTs are overly reliant on local features (\eg, nuisances and texture) and fail to make adequate use of...
Accept (Poster)
Summary: Authors present an approach to improve the robustness of vision transformers by mapping standard tokens into discrete tokens that are invariant to small perturbations. Method is applied to a variety of backbone architectures and evaluated on a range of out of distribution forms of ImageNet test set. Significan...
val
[ "gSneeDR-K5F", "ZGdT2VFqR0V", "5DYtOodj9R", "7vldYHiSLhP", "DfgMq5K7Xij", "bhLI5IPWDyS", "3O98r08iVzf", "t3za3juiaFi", "RLzXpDlrHEM", "uBdjByJl17Z", "LqHEWTeq-Wp", "PkSlYmJ_N43", "5e96LD-A2xA", "0-YpZn0GxeM" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **We sincerely hope the review can engage in the discussion and respond to our comments.**\n\n**Principled study on texture vs. Shape bias.**\n\nWe performed the same shape-bias study as in [1] and [2] as suggested by the reviewer. The detailed results are in the “quantitative results” paragraphs of Section 4.3....
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 3, 8, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "LqHEWTeq-Wp", "gSneeDR-K5F", "t3za3juiaFi", "iclr_2022_8hWs60AZcWk", "bhLI5IPWDyS", "3O98r08iVzf", "7vldYHiSLhP", "uBdjByJl17Z", "5e96LD-A2xA", "0-YpZn0GxeM", "PkSlYmJ_N43", "iclr_2022_8hWs60AZcWk", "iclr_2022_8hWs60AZcWk", "iclr_2022_8hWs60AZcWk" ]
iclr_2022_JM2kFbJvvI
Who Is the Strongest Enemy? Towards Optimal and Efficient Evasion Attacks in Deep RL
Evaluating the worst-case performance of a reinforcement learning (RL) agent under the strongest/optimal adversarial perturbations on state observations (within some constraints) is crucial for understanding the robustness of RL agents. However, finding the optimal adversary is challenging, in terms of both whether we ...
Accept (Poster)
Overall the paper makes good contributions to the area of robust deep reinforcement learning. The presentation needs to be improved to avoid any confusions. Please take all the reviews into account and revise the paper accordingly.
train
[ "O5sGY56M3ww", "OhSlulJlHx", "BwadR9AJ_F0", "b4pj643Qc9_", "Al-eevWUIhW", "pvZC1PlFyC0", "pfffnfq_DKF", "yp7EPyoG2Uk", "nD4AIqQyrf", "VOOXQvAErFM", "KFcBI5rh-e8", "2V43Tg3Klns", "7_sn_f0WI1H", "uPwioAKIOM", "xNAQ-FaZqm", "Gkbv-5wHE_9", "zmfGNMpG31" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer RTbW,\n\nAs the discussion period is going to end, we would like to kindly ask you to consider our clarifications. We would like to again emphasize the importance of the studied problem, which seems to be the major concern of the reviewer. In particular, the reviewer said\n\n> The fact that there ar...
[ -1, -1, 3, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, 2, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "OhSlulJlHx", "BwadR9AJ_F0", "iclr_2022_JM2kFbJvvI", "iclr_2022_JM2kFbJvvI", "nD4AIqQyrf", "iclr_2022_JM2kFbJvvI", "yp7EPyoG2Uk", "zmfGNMpG31", "b4pj643Qc9_", "Gkbv-5wHE_9", "7_sn_f0WI1H", "KFcBI5rh-e8", "iclr_2022_JM2kFbJvvI", "xNAQ-FaZqm", "BwadR9AJ_F0", "iclr_2022_JM2kFbJvvI", "ic...
iclr_2022_RXQ-FPbQYVn
Anti-Concentrated Confidence Bonuses For Scalable Exploration
Intrinsic rewards play a central role in handling the exploration-exploitation tradeoff when designing sequential decision-making algorithms, in both foundational theory and state-of-the-art deep reinforcement learning. The LinUCB algorithm, a centerpiece of the stochastic linear bandits literature, prescribes an ellip...
Accept (Poster)
This paper tackles the problem of exploration in Deep RL in settings with a large action space. To this end, the authors introduce an intrinsic reward inspired by the exploration bonus of LinUCB. This novel exploration method called anti-concentrated confidence bounds (ACB) provably approximates the elliptical explorat...
train
[ "00c_XUQKI9", "LSO9_HRHRIv", "V6Tr-DOct9v", "kDEzvYLIgIl", "j8vfLn39Nmv", "Hvhn3fjge6L", "2c2fZINK7WX", "-Knf-U3yhXE" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As a practical matter, the computational efficiency of ACB is not superior to RND. \n\nThe reviewer appears to be asking us to argue for the merits of theoretical transparency in algorithm design. Like many in machine learning, we believe that the gold standard for “understanding” an approach is an ability to mat...
[ -1, -1, -1, -1, -1, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "LSO9_HRHRIv", "kDEzvYLIgIl", "2c2fZINK7WX", "-Knf-U3yhXE", "Hvhn3fjge6L", "iclr_2022_RXQ-FPbQYVn", "iclr_2022_RXQ-FPbQYVn", "iclr_2022_RXQ-FPbQYVn" ]
iclr_2022_zq1iJkNk3uN
Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm
Recently, large-scale Contrastive Language-Image Pre-training (CLIP) has attracted unprecedented attention for its impressive zero-shot recognition ability and excellent transferability to downstream tasks. However, CLIP is quite data-hungry and requires 400M image-text pairs for pre-training, thereby restricting its a...
Accept (Poster)
This paper aims at improving the data efficiency of pretraining in CLIP. This is a practically meaningful research direction. The proposed method is simple, even kind of straightforward and has limited innovations. It combines self-supervision within each modality, multi-view supervision across modalities, and nearest-...
train
[ "AZrxIVnuhYG", "NkaP4s_2Iip", "YTF-D6icPM4", "NfwmpnGvw_s", "hwZqfPnmXPZ", "q5Rc5uqpKZ", "ZlfrMHXiUlP", "wrfD5uMhw4z", "10AeRX6KHfY", "XbdAPZsuju", "b_4lsbGD0X", "C7eEUbcCVI" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes DeCLIP to further utilize the data potential by adding three training objectives to CLIP pre-training: 1) inspired by SimSiam and BERT, self-supervised objectives are added to both image and text; 2) they generate different views for both images and text, and apply contrastive objectives; 3) the...
[ 6, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_zq1iJkNk3uN", "iclr_2022_zq1iJkNk3uN", "10AeRX6KHfY", "iclr_2022_zq1iJkNk3uN", "q5Rc5uqpKZ", "NkaP4s_2Iip", "iclr_2022_zq1iJkNk3uN", "NfwmpnGvw_s", "wrfD5uMhw4z", "AZrxIVnuhYG", "C7eEUbcCVI", "iclr_2022_zq1iJkNk3uN" ]
iclr_2022_Bn09TnDngN
How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data
Since training a large-scale backdoored model from scratch requires a large training dataset, several recent attacks have considered injecting backdoors into a trained clean model without altering model behaviors on the clean data. Previous work finds that backdoors can be injected into a trained clean model with Adver...
Accept (Poster)
This paper proposes measures of consistency between back-doored and clean models, proposes regularization using those consistency measures, and showcases that such trained models indeed exhibit better consistency. Also, it is demonstrated that the fine-tuned model does not deviate too far from the original clean model....
train
[ "uuQSroAWzT", "QHCkQ2VosP", "2fpyUgnkkud", "FPY37Jpzr6n", "pPKSuWNfogy", "KmrcQq2byIr", "ycS7xbwGL0P", "ioxmvpWi7z-", "E1pVAE_-_M4", "PTIyvo5uUMh", "QXGXgQdYH4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the response of the authors. Some details and concerns have been addressed. Overall, it is a good work and I vote for acceptance.", " I have checked the detailed response from the authors. It has fully addressed all my concerns. Overall, I think it is a very excellent work in NLP security domain. So...
[ -1, -1, -1, -1, -1, -1, -1, 8, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "pPKSuWNfogy", "ycS7xbwGL0P", "KmrcQq2byIr", "QXGXgQdYH4", "PTIyvo5uUMh", "E1pVAE_-_M4", "ioxmvpWi7z-", "iclr_2022_Bn09TnDngN", "iclr_2022_Bn09TnDngN", "iclr_2022_Bn09TnDngN", "iclr_2022_Bn09TnDngN" ]
iclr_2022_9XhPLAjjRB
SGD Can Converge to Local Maxima
Previous works on stochastic gradient descent (SGD) often focus on its success. In this work, we construct worst-case optimization problems illustrating that, when not in the regimes that the previous works often assume, SGD can exhibit many strange and potentially undesirable behaviors. Specifically, we construct land...
Accept (Spotlight)
Overall, the paper provides interesting counter examples for the SGD with constant step-size (that relies on a relative noise model that diminishes at the critical points), which provide critical (counter) insights into what we consider as good convergence metrics, such as expected norm of the gradient. The initial s...
train
[ "Kc6vBGmt2vk", "lCuwQlYXDF", "jJqwjcI8A_M", "-cm8xQt5jJj", "H2AQxOQwTx7", "kn7ixqa8Xdj", "TYFyg0-vX0P", "bI-1Q0hTL3", "U7ySSe6S-Og", "4n3FtxH4atR", "Y4zqVutPXpu", "DYJIZkpTgND", "w3t468nN1Pw", "RJGlRNOX16g", "BzQcXbh2hlb", "9i3HtF9z7F", "ycRzh9SwGD_", "PtGjdvvxn57", "SLL17bZUtk-"...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " Note: This thread is updated after the third revision.\nAgain, we thank all the suggestions made by the individual reviewers. In addition to the revisions made in the first revision, we made the following revisions to reduce the overclaiming issues:\n\n1. We propose to change the title to make it more informative...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 8 ]
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "iclr_2022_9XhPLAjjRB", "_l1mUV2Y7b", "kn7ixqa8Xdj", "jJqwjcI8A_M", "TYFyg0-vX0P", "iclr_2022_9XhPLAjjRB", "bI-1Q0hTL3", "U7ySSe6S-Og", "4n3FtxH4atR", "kn7ixqa8Xdj", "DYJIZkpTgND", "kn7ixqa8Xdj", "_XwPiKR08i3", "BzQcXbh2hlb", "kn7ixqa8Xdj", "ycRzh9SwGD_", "PtGjdvvxn57", "SLL17bZUtk...
iclr_2022_QkRV50TZyP
Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains
Adversarial examples have posed a severe threat to deep neural networks due to their transferable nature. Currently, various works have made great efforts to enhance cross-model transferability, most of which assume the substitute model is trained in the same domain as the target model. However, in reality, the rele...
Accept (Poster)
This paper considers that the model's training data may not be accessible when learning the attacking model, and thus a more practical black-box attack scheme, the Beyond ImageNet Attack (BIA) framework, is designed. All the reviewers agreed that the setting in this paper is important and helpful when designing attack metho...
train
[ "gMcKN6Je7Of", "35TcJQr9re", "BYPwpXWwBM3", "MxzJ6B3sVi9", "H91rMv7ZWV", "r1ic_dfwS5", "3QLzjuae4mK", "Ejl5jHv80xQ", "eeo_VOu3p4p", "A35f1jfdAyT", "R6ieaYpIlIF", "VLMbAdbjzxo", "oltkeQYLnuX", "VwfsQ2AzJsM", "WziAruQKZM-", "6xLovCEBU1", "Xdb_C0nEEf" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " _**Experiments for novel transfer attacks**_\n\nTo accelerate the evaluation for novel transfer attacks in our practical black-box scenario, we randomly sampled 1,000 images of each black-box dataset and perform attacks on them. As demonstrated in Table h, the state-of-the-art transfer attacks such as NI-SI-FGSM ...
[ -1, -1, -1, -1, -1, -1, -1, 8, -1, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "35TcJQr9re", "H91rMv7ZWV", "R6ieaYpIlIF", "eeo_VOu3p4p", "r1ic_dfwS5", "Xdb_C0nEEf", "Ejl5jHv80xQ", "iclr_2022_QkRV50TZyP", "VwfsQ2AzJsM", "iclr_2022_QkRV50TZyP", "oltkeQYLnuX", "6xLovCEBU1", "A35f1jfdAyT", "3QLzjuae4mK", "VLMbAdbjzxo", "iclr_2022_QkRV50TZyP", "iclr_2022_QkRV50TZyP"...
iclr_2022_OT3mLgR8Wg8
IFR-Explore: Learning Inter-object Functional Relationships in 3D Indoor Scenes
Building embodied intelligent agents that can interact with 3D indoor environments has received increasing research attention in recent years. While most works focus on single-object or agent-object visual functionality and affordances, our work proposes to study a novel, underexplored, kind of visual relations that is...
Accept (Poster)
The paper presents an approach for learning inter-object relations. The relationships are represented in terms of scene graphs, and are processed with graph convolutional networks. All of the reviewers find the problem interesting and meaningful, which is the main strength of the paper. The approach assumes good object...
test
[ "YMkh5Yq1CIt", "Uu4OErWh3DR", "xnZO0xxqf-P", "PNvRuL5L7aT", "Op1IqSYmyl", "phelfUnJgP3", "vx-9cEo76If", "hC3ATcW5GdD", "jw3WDev3aAV", "-TQ4CoosYhn", "5CdeHqOGxcW", "gb6ijpmAdSH", "IBuLuAyvOOX", "3NOA_bmMJAd", "RMjgnobduqt", "z6_K3jXUAy7", "xA2jrIRrSyd", "IHO0PRprGzQ", "7Vuqw_R5Cq...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your final positive rating. We are glad that our responses help alleviate your concerns. Thanks again for all your valuable feedback!", " Thanks for your final positive rating. We are glad that our responses help alleviate your concerns. Thanks again for all your valuable feedback!", " Thanks for r...
[ -1, -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, -1, 3, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "vx-9cEo76If", "phelfUnJgP3", "PNvRuL5L7aT", "z6_K3jXUAy7", "iclr_2022_OT3mLgR8Wg8", "RMjgnobduqt", "7Vuqw_R5Cq0", "iclr_2022_OT3mLgR8Wg8", "xA2jrIRrSyd", "hC3ATcW5GdD", "gb6ijpmAdSH", "7Vuqw_R5Cq0", "3NOA_bmMJAd", "IHO0PRprGzQ", "-TQ4CoosYhn", "jw3WDev3aAV", "Op1IqSYmyl", "iclr_20...
iclr_2022_3YqeuCVwy1d
GDA-AM: ON THE EFFECTIVENESS OF SOLVING MINIMAX OPTIMIZATION VIA ANDERSON MIXING
Many modern machine learning algorithms such as generative adversarial networks (GANs) and adversarial training can be formulated as minimax optimization. Gradient descent ascent (GDA) is the most commonly used algorithm due to its simplicity. However, GDA can converge to non-optimal minimax points. We propose a new m...
Accept (Poster)
This paper proposes to use Anderson Acceleration on min-max problems, provides some theoretical convergence rates and presents numerical results on toy bilinear problems and GANs. After the discussion, the reviewers agreed that this paper makes a nice contribution to ICLR. Some concerns were originally expressed in te...
val
[ "Ocd7rqofdK", "qM-ul1Zs7EK", "2E8XaLHJORk", "ohALdO3VVYf", "ZjXUTs9alkh", "KJAi7DtP8qo", "2-vhYv9KhxN", "rFUMYPdEHUE", "vOTzTG0xQ6g", "hk1qFVKhLA_", "jJg3ndL2ve-", "ZF0VFvPinZb", "rY6iYcSkZKs", "EHLqumNL8ls", "SFlHhmzNLPm", "XkEfGNCE6s8", "TRDwNpl4mNZ", "siczgrGnFiF", "68aBInhlgk...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official...
[ " Thanks for your feedback. We are happy to further discuss with you on this. \n\n**3:** We use 100 in figure 2b. We can observe similar patterns when the dimension is larger than 100. \n\n**4:** There is a small typo in the formula of the eigenvalues in your response. It should be $\\lambda_{i}=\\frac{\\eta^{2} \\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "qM-ul1Zs7EK", "2E8XaLHJORk", "ohALdO3VVYf", "2n4D30hbql", "jJg3ndL2ve-", "jJg3ndL2ve-", "YhiOxdCvSyp", "kkfMXoj4BRn", "kkfMXoj4BRn", "iclr_2022_3YqeuCVwy1d", "iclr_2022_3YqeuCVwy1d", "iclr_2022_3YqeuCVwy1d", "SFlHhmzNLPm", "XkEfGNCE6s8", "iclr_2022_3YqeuCVwy1d", "TRDwNpl4mNZ", "jJg3...
iclr_2022_hcQHRHKfN_
Continuously Discovering Novel Strategies via Reward-Switching Policy Optimization
We present Reward-Switching Policy Optimization (RSPO), a paradigm to discover diverse strategies in complex RL environments by iteratively finding novel policies that are both locally optimal and sufficiently different from existing ones. To encourage the learning policy to consistently converge towards a previously u...
Accept (Poster)
In this paper, a new method is proposed to discover diverse policies solving a given task. The key ideas are to (1) learn one policy at a time, with each new policy trying to be different enough from the previous ones, and (2) switch between two rewards on a per-trajectory basis: the "normal" reward on trajectories tha...
train
[ "Dt9iNcBTleu", "jJumIS3iPXA", "EEjqyf3x3HN", "K901YOXzI7", "nQ5wprzhEYG", "29xB7_48CDB", "j5E2MZc4mF", "0J7CNZ9jUC_", "49c9tarCyV", "zMu5I6tcvLA", "UVAG_Zd-b56", "ujbpX3G3tnb", "j6xYIWTzAt", "ldWcDvYvJdf", "xlroikTCExD", "EtseB_by5kE", "eL8qP4T9_Ya", "YOo0MnHFfjb", "LjdmOhsbYnh",...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "o...
[ " Dear reviewers, we have revised our paper according to your valuable suggestions. We have…\n+ Added sensitivity analysis in Appendix B.3\n+ Added the additional PGA-MAP-Elites baseline and corresponding discussions in Appendix B.4\n+ Clarified some terminologies in the method section\n+ Clarified some implementat...
[ -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_hcQHRHKfN_", "EEjqyf3x3HN", "K901YOXzI7", "nQ5wprzhEYG", "0J7CNZ9jUC_", "iclr_2022_hcQHRHKfN_", "0J7CNZ9jUC_", "49c9tarCyV", "ujbpX3G3tnb", "PpNJY2Xk3XT", "iclr_2022_hcQHRHKfN_", "xlroikTCExD", "iclr_2022_hcQHRHKfN_", "tBR-DgaOnZv", "EtseB_by5kE", "eL8qP4T9_Ya", "YOo0MnHFf...
iclr_2022_7gWSJrP3opB
A General Analysis of Example-Selection for Stochastic Gradient Descent
Training example order in SGD has long been known to affect convergence rate. Recent results show that accelerated rates are possible in a variety of cases for permutation-based sample orders, in which each example from the training set is used once before any example is reused. In this paper, we develop a broad condit...
Accept (Spotlight)
This paper studies the dependency of SGD convergence on order of examples. The main observation of the paper is: if the averages of consecutive stochastic gradients converge faster to the full gradient, then the SGD with the corresponding sampling strategy will have a faster convergence rate. For different sampling str...
val
[ "-dK-ukgEZs6", "tdg0D5FNU2b", "iQ88-FMxkuz", "y4uba4yKYOB", "hddPUPasok", "oogsM_UXmnO", "t20Tm4R6tZG", "5Dau3xsFO0H", "FAKVvbdn9zA", "pOP-WVvnQe" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > The dependence on $d$ can easily be removed (along with removing the bounded iterates assumption) by simply using Proposition 1 which holds for any permutation-based SGD, although in that case we would suffer an extra dependence on $n$ (instead of $\\sqrt{n}$ in Proposition 2).\n\nI think that in this case the ...
[ -1, -1, -1, -1, -1, -1, 8, 8, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "tdg0D5FNU2b", "hddPUPasok", "pOP-WVvnQe", "FAKVvbdn9zA", "5Dau3xsFO0H", "t20Tm4R6tZG", "iclr_2022_7gWSJrP3opB", "iclr_2022_7gWSJrP3opB", "iclr_2022_7gWSJrP3opB", "iclr_2022_7gWSJrP3opB" ]
iclr_2022_3ILxkQ7yElm
Learning Continuous Environment Fields via Implicit Functions
We propose a novel scene representation that encodes reaching distance -- the distance between any position in the scene to a goal along a feasible trajectory. We demonstrate that this environment field representation can directly guide the dynamic behaviors of agents in 2D mazes or 3D indoor scenes. Our environment...
Accept (Poster)
This paper proposes environment fields, a representation that models reaching distances within a scene. Dense environment fields are learnt using a neural network, and the effectiveness of this representation is shown on 2D maze environments and 3D indoor environments. This paper received hugely contrasting reviews, wi...
train
[ "VBuiYB9qMqs", "Wo4J2j2uq_", "XugqhHaPRkg", "-JsJQ3RrpWL", "cgvIeo2m6lr", "a4VdqT_U9lb", "t8LZ7Ejr_o", "FKE1Cj9Bdg5", "hsMdewqbiE0", "yVEOQKQXd1-", "nRq_iuzMnM", "qCZn7iIhAzk", "Roj9Y-IfJKF", "PcgrT3EdCqi", "is_3wpRCYpp", "SsIAeTMfZE7", "Gtpnp2sblb0" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > **Q:** Safety & RRT: I do not dispute that the paper's main goal is the introduction of an environment representation, however, if comparisons are made on specific tasks to showcase its utility the concerns that arise in such tasks need to be taken into account. I have no issue if the proposed method not being ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 5 ]
[ "a4VdqT_U9lb", "a4VdqT_U9lb", "-JsJQ3RrpWL", "yVEOQKQXd1-", "is_3wpRCYpp", "Roj9Y-IfJKF", "Gtpnp2sblb0", "is_3wpRCYpp", "nRq_iuzMnM", "is_3wpRCYpp", "SsIAeTMfZE7", "Gtpnp2sblb0", "Gtpnp2sblb0", "iclr_2022_3ILxkQ7yElm", "iclr_2022_3ILxkQ7yElm", "iclr_2022_3ILxkQ7yElm", "iclr_2022_3ILx...
iclr_2022_oJGDYQFKL3i
OBJECT DYNAMICS DISTILLATION FOR SCENE DECOMPOSITION AND REPRESENTATION
The ability to perceive scenes in terms of abstract entities is crucial for us to achieve higher-level intelligence. Recently, several methods have been proposed to learn object-centric representations of scenes with multiple objects, yet most of them focus on static scenes. In this paper, we work on object dynamics a...
Accept (Poster)
This work presents a novel method to learn object dynamics from unlabelled videos and shows its benefits on causal reasoning and future frame prediction. This paper received 4 positive reviews and 1 negative review. In the rebuttal, the authors have addressed most of the concerns. AC feels this work is very interest...
train
[ "Gg1KEOIFgX2", "Yxse3dFmdeN", "RVk4rbUlzaV", "4WR195EQ5Ro", "51vFLBPX6ka", "VP0dZX0HFkS", "SwsLM6kRBK", "6Trld2ZVRcg", "UPwOEN1D7oa", "5dexeHIkhcI", "Pk4qaUHE98z", "9wx-Xpof6XX", "omYZJRWgpre", "2-dYawISQpX", "1tMOkIvCC-", "dwjTfPXv2Pw", "usD9YlZNw25" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I leaned toward 'acceptance' of this paper initially. Though I consider the paper borrowed much technique from existing work, the idea of using transformer structure to align latent features and distillate object dynamics is novel and has merit. Since the feedback has resolved many concerns about the experiments ...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 3 ]
[ "omYZJRWgpre", "RVk4rbUlzaV", "5dexeHIkhcI", "Pk4qaUHE98z", "UPwOEN1D7oa", "iclr_2022_oJGDYQFKL3i", "VP0dZX0HFkS", "9wx-Xpof6XX", "VP0dZX0HFkS", "usD9YlZNw25", "1tMOkIvCC-", "2-dYawISQpX", "dwjTfPXv2Pw", "iclr_2022_oJGDYQFKL3i", "iclr_2022_oJGDYQFKL3i", "iclr_2022_oJGDYQFKL3i", "iclr...
iclr_2022_SlxSY2UZQT
Label-Efficient Semantic Segmentation with Diffusion Models
Denoising diffusion probabilistic models have recently received much research attention since they outperform alternative approaches, such as GANs, and currently provide state-of-the-art generative performance. The superior performance of diffusion models has made them an appealing tool in several applications, includi...
Accept (Poster)
The paper proposes using the intermediate representation learned in a denoising diffusion model for the label-efficient semantic segmentation task. The reviewers are generally positive about the submission. They like the simplicity of the proposed algorithm. They also like the effort of the paper in verifying the interm...
test
[ "tbnSYkfiGeq", "dv2ew7iDZw5", "cyDhN2Ne-Ij", "yMXp44lzCr", "xO_ybPPQN8", "s1kFOVKF364", "UBO6s-03j6q", "eP8qvQnjm2k", "zN-oQoYwAqT", "BrjO7tgs3Iy", "h3ZGNZP1vg", "-7MUVwEUCM", "Jl2uMrFGDHj", "c_sFnj_YOzD" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author" ]
[ "This paper explores to what extent Denoising Diffusion Probabilistic Models (DDPMs) serve as good representational learners for transfer or semi-supervised learning on downstream tasks. They are particularly interested in the semantic segmentation as a prototypical dense computer vision task. They find that DDPM ...
[ 6, 8, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_SlxSY2UZQT", "iclr_2022_SlxSY2UZQT", "h3ZGNZP1vg", "tbnSYkfiGeq", "iclr_2022_SlxSY2UZQT", "iclr_2022_SlxSY2UZQT", "eP8qvQnjm2k", "s1kFOVKF364", "tbnSYkfiGeq", "h3ZGNZP1vg", "Jl2uMrFGDHj", "Jl2uMrFGDHj", "dv2ew7iDZw5", "s1kFOVKF364" ]
iclr_2022__4GFbtOuWq-
Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views?
Equivariance has emerged as a desirable property of representations of objects subject to identity-preserving transformations that constitute a group, such as translations and rotations. However, the expressivity of a representation constrained by group equivariance is still not fully understood. We address this gap by...
Accept (Poster)
The authors provide a discussion of Cover's Theorem in the setting of equivariance. The reviewers consider the work well explained and interesting, especially after the revisions, and so I will vote to accept.
train
[ "hE8fBOSfMm", "BYpOuQXWpBx", "bInfj9kwJ8g", "YK0X8_puCCY", "Vz5DeSHcW4Q", "roaxv-i8nw6", "aFEs0MySs_G", "tuW3fD5AIL6", "QmrIKcfQHhS", "jPtH9QYiCrR", "cL4pax-fkbn", "s0VB_fZXifG", "7X45C6oHTR", "sBJSDvpObh", "oKbHttUVJuL", "PkmwlqshFy2", "5UrdIg1RSZT", "Khadu9LIRK6", "byCtbPXpfN",...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Yes, so our constraint is that $y^{\\mu}w^T\\pi(g)x^{\\mu}>0$ (the output of the perceptron is invariant) and if I understand correctly your point is that we could have perhaps instead imposed the stronger constraint that the output weights satisfy $w^T = w^T\\pi(g)$. As you say, combining the linear operation of...
[ -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "BYpOuQXWpBx", "roaxv-i8nw6", "iclr_2022__4GFbtOuWq-", "iclr_2022__4GFbtOuWq-", "Khadu9LIRK6", "aFEs0MySs_G", "7X45C6oHTR", "s0VB_fZXifG", "oKbHttUVJuL", "bInfj9kwJ8g", "jPtH9QYiCrR", "cxpt2aNMyl", "sBJSDvpObh", "tuW3fD5AIL6", "PkmwlqshFy2", "5UrdIg1RSZT", "CiWL8_puZmk", "byCtbPXpf...
iclr_2022_YiBa9HKTyXE
Permutation-Based SGD: Is Random Optimal?
A recent line of ground-breaking results for permutation-based SGD has corroborated a widely observed phenomenon: random permutations offer faster convergence than with-replacement sampling. However, is random optimal? We show that this depends heavily on what functions we are optimizing, and the convergence gap betwe...
Accept (Poster)
The paper considers the effect of permutations in SGD - exploring the question of can we go beyond random permutations (which themselves have shown to be better than with replacement sampling)? The paper studies these questions from multiple viewpoints - showing that there is a one dimensional function for which the op...
train
[ "H_13s9PHuWU", "lcvJY3I6dPo", "LPeCKmWzFLN", "8VplMe82DTi", "7Vme1Z8xsDU", "vPf_vRFEg7L", "3fAXxaBmlQ1", "g10r6w_qpBN", "VM5cFbF3LEt", "griLa-zApx6", "vW3F00xFD1" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review and additional comments, we will incorporate your suggestions into the paper. Please let us know if you have any other concerns.", "# === Update ===\n\nI have decided to maintain my score. This is very strong submission in my opinion. A brief overview of my thoughts is the following:\n...
[ -1, 10, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "LPeCKmWzFLN", "iclr_2022_YiBa9HKTyXE", "3fAXxaBmlQ1", "VM5cFbF3LEt", "vW3F00xFD1", "griLa-zApx6", "lcvJY3I6dPo", "iclr_2022_YiBa9HKTyXE", "iclr_2022_YiBa9HKTyXE", "iclr_2022_YiBa9HKTyXE", "iclr_2022_YiBa9HKTyXE" ]
iclr_2022_0kPL3xO4R5
Fast topological clustering with Wasserstein distance
The topological patterns exhibited by many real-world networks motivate the development of topology-based methods for assessing the similarity of networks. However, extracting topological structure is difficult, especially for large and dense networks whose node degrees range over multiple orders of magnitude. In this ...
Accept (Poster)
Initially, we had some borderline scores for this paper. After the (indeed very convincing!) rebuttal and at the end of the discussion phase, however, all reviewers agreed that this is a very solid piece of work, with significant methodological and practical contributions. I fully share this positive impression of the p...
train
[ "EZikhdhWZz", "yWYf3oAVvtf", "esleCMt9Bz", "rDLTUlTHUq0", "swBjk4V_zc3", "V9bqzxCEG5g", "Y-ILu20dqfJ", "h-f42T2Ba3o", "F0wqqKntl-O", "w9yzZATT4J7", "nAsYjOs-fy5", "S6OaTd9uKqh", "DLx60HQYVPi", "rU3nq3ZW5R", "AUM8ilwIEZa", "-nv3XPoZ4BV", "9YDjWgIXrRO" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ " _The new version addresses all my previous concerns. I improved my score accordingly._\n\nWe thank the reviewer for improving the score.\n\n_Minor remarks:_\n\n_\"The topological patterns exhibited by many real-world networks motivate THE development of topology-based methods for assessing the similarity of netwo...
[ -1, -1, 8, 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 5, 4, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "yWYf3oAVvtf", "F0wqqKntl-O", "iclr_2022_0kPL3xO4R5", "iclr_2022_0kPL3xO4R5", "DLx60HQYVPi", "nAsYjOs-fy5", "iclr_2022_0kPL3xO4R5", "9YDjWgIXrRO", "w9yzZATT4J7", "S6OaTd9uKqh", "rU3nq3ZW5R", "esleCMt9Bz", "-nv3XPoZ4BV", "AUM8ilwIEZa", "Y-ILu20dqfJ", "rDLTUlTHUq0", "iclr_2022_0kPL3xO4...
iclr_2022_rJvY_5OzoI
Multi-Critic Actor Learning: Teaching RL Policies to Act with Style
Using a single value function (critic) shared over multiple tasks in Actor-Critic multi-task reinforcement learning (MTRL) can result in negative interference between tasks, which can compromise learning performance. Multi-Critic Actor Learning (MultiCriticAL) proposes instead maintaining separate critics for each task...
Accept (Poster)
In this paper, the authors investigate a multi-task RL actor-critic technique, where a single actor is used while multiple critics are trained (one per task, where each task corresponds to a different reward function). Experiments on several environments demonstrate that this method works quite well in practice. All r...
val
[ "f009QISRgTm", "oTv-Pa6apcL", "gqYQlv36pSR", "d7faUXxx0Cn", "cnsNpUQiN4", "nOXH6l4WDFn", "Vz2VtiV1s76", "1Ri5kkZxNSk", "snrnUGig83p", "Wxm6CKVBpfk", "Iok-_RuFHCY", "c6wMb8f2--", "5VGLHCsXmm", "U-KF3ciFJEz", "F9XPA6SMGU", "CAopozF9yy8", "mzWM_-EDBqq", "ayFmf9Gl2gi", "ZaHF7xoMcR7",...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", ...
[ "This paper wants to propose a simple method to deal with multi-task (style) problems. strengths:\n\n1. The method is straightforward, extending the existing deep RL method via multiple value networks.\n\nweaknesses:\n\n1. The method can not generate different styles under the same environment, or put another way, ...
[ 6, -1, -1, 8, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 5, -1, -1, 4, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_rJvY_5OzoI", "gqYQlv36pSR", "Iok-_RuFHCY", "iclr_2022_rJvY_5OzoI", "nOXH6l4WDFn", "iclr_2022_rJvY_5OzoI", "iclr_2022_rJvY_5OzoI", "iclr_2022_rJvY_5OzoI", "c6wMb8f2--", "U-KF3ciFJEz", "iBdkxsMQkxE", "Wxm6CKVBpfk", "CAopozF9yy8", "ayFmf9Gl2gi", "f009QISRgTm", "lDR40Uw4HU1", ...
iclr_2022_PDYs7Z2XFGv
Omni-Scale CNNs: a simple and effective kernel size configuration for time series classification
The size of the receptive field has been one of the most important factors for One Dimensional Convolutional Neural Networks (1D-CNNs) on time series classification tasks. Large efforts have been taken to choose the appropriate receptive field size, for it has a huge influence on the performance and differs significant...
Accept (Poster)
The paper presents a simple and effective solution to tune the receptive field of CNNs for 1D time series classification. The reviewers think the idea is original and elegant but would appreciate more theoretical insights into the solution.
train
[ "QJS1R3MCmVV", "-A9524bbjzz", "IvxAC5TQhtV", "C4S1canCZQ7", "y-R_bWVsw_5", "0bvoheonaCT", "kTLoKtc7hQA", "LiUzxBgvu5", "UksWavC6op3" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes an original trick to tune the receptive field kernels for 1D ConvNets to tackle time series problems. The strengths of the papers are as follow :\n\n1- The paper is well structured and easy to follow\n\n2- Tables and figures are very clear.\n\n3- The authors have conducted extensive experiments...
[ 8, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_PDYs7Z2XFGv", "iclr_2022_PDYs7Z2XFGv", "0bvoheonaCT", "y-R_bWVsw_5", "LiUzxBgvu5", "UksWavC6op3", "QJS1R3MCmVV", "-A9524bbjzz", "iclr_2022_PDYs7Z2XFGv" ]
iclr_2022_bYGSzbCM_i
Online Adversarial Attacks
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream. In this paper, we formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases: attackers must operate under partial...
Accept (Poster)
This paper opens the area of adversarial-attack research on streaming data (e.g., real-world settings such as self-driving cars and robotic visual tasks). For instance, online adversaries can focus their attack on a small subset of the streamed/online data, but still cause much damage to downstream models. ...
val
[ "qYnqL47CD3j", "T4AI4iN6ASt", "KGfWEgBfyAH", "EuP9VPvb8y2", "3YWRjjoP-P", "iPUPX0SyNdl", "wFbmCSxUiS7", "m3HSDrvwtP7", "bS_D904hbbg", "8HlXvPduf81", "RFVf1iBbhMc", "fAc7UjGBJqo", "3AaOF6y0_Ps", "apNfV72LF1", "vzBiJKIFEJX", "nE7-v9VObnZ", "q_Ml9tFIHqI", "iPutVTNStM8", "x_qVO1PX7wT...
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "officia...
[ " Dear Reviewer,\n\nWe are fast approaching the end of the discussion period and we would like to thank you for responding to our original rebuttal. We have responded to your post-rebuttal comments and were wondering if these new comments have assuaged your concerns. Please note that Reviewers 82UN and VskD upgrade...
[ -1, -1, 6, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "dxA3m9ZqjgL", "6-ajaie3CV", "iclr_2022_bYGSzbCM_i", "6-ajaie3CV", "KGfWEgBfyAH", "iclr_2022_bYGSzbCM_i", "bS_D904hbbg", "iclr_2022_bYGSzbCM_i", "apNfV72LF1", "dxA3m9ZqjgL", "6-ajaie3CV", "KGfWEgBfyAH", "m3HSDrvwtP7", "vzBiJKIFEJX", "nE7-v9VObnZ", "dxA3m9ZqjgL", "6-ajaie3CV", "x_qV...
iclr_2022_T8wHz4rnuGL
RotoGrad: Gradient Homogenization in Multitask Learning
Multitask learning is being increasingly adopted in application domains like computer vision and reinforcement learning. However, optimally exploiting its advantages remains a major challenge due to the effect of negative transfer. Previous works have tracked down this issue to the disparities in gradient magnitudes a...
Accept (Spotlight)
The paper addresses the problem of inconsistent gradients in multi-task learning, proposing ways to handle both their magnitude and direction. Gradient directions are aligned by introducing a rotation layer between the shared backbone and task-specific branches. Reviewers appreciated the technical approach, highlighting...
val
[ "O4r3-bDW0Fc", "eEhSzcPxgm0", "B1YIHtU2Box", "mDIymUojE7E", "wmeSSOgWFY7", "afD4XQE-4cq", "XxZn7unA67i", "JQxf3q4zeEd", "HaN37bjSnS", "xVSNsyUAh3z", "PasF-RPNEJ0", "UcPvyp-dvrs", "38ap85kdh9z", "NiRBcZ_NcaB", "ASq2SCwaBVl" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your answer. I've increased my score to 8 as I think the paper should be accepted as a poster for the conference.", "- This paper proposes a new way of dealing with negative transfer or catastrophic interference in multitask learning. The main idea is to avoid conflicting gradients by making them ...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "xVSNsyUAh3z", "iclr_2022_T8wHz4rnuGL", "XxZn7unA67i", "iclr_2022_T8wHz4rnuGL", "iclr_2022_T8wHz4rnuGL", "iclr_2022_T8wHz4rnuGL", "JQxf3q4zeEd", "PasF-RPNEJ0", "ASq2SCwaBVl", "eEhSzcPxgm0", "NiRBcZ_NcaB", "38ap85kdh9z", "iclr_2022_T8wHz4rnuGL", "iclr_2022_T8wHz4rnuGL", "iclr_2022_T8wHz4r...
iclr_2022_OWZVD-l-ZrC
Reward Uncertainty for Exploration in Preference-based Reinforcement Learning
Conveying complex objectives to reinforcement learning (RL) agents often requires meticulous reward engineering. Preference-based RL methods are able to learn a more flexible reward model based on human preferences by actively incorporating human feedback, i.e. teacher's preferences between two clips of behaviors. Howe...
Accept (Poster)
This is a borderline paper. The scores were initially below the bar. The novelty of the work is limited and there are strong claims in the paper that should be revised. The authors can also do a better job in positioning their work with respect to the existing results. However, the authors managed to address several qu...
train
[ "b8STiK2ksTa", "PCkwcwMLc2S", "K4N7F9v_GFt", "Q9cZpkKhY0", "GJ9fJ9zlqLR", "G7aB2dcqr4f", "keVUDiYGKqv", "P4jMh9WsgWy", "232ZI-0lkt", "aaoleOAfz8-", "uXS3DiN_hvO", "xnYVFAIFs1C", "t_5TyrBOJu", "23-zCMzIifh", "TmNBKAkcqut", "gJU_Bg2Stl", "yEAiU1c5bPR", "hormMlf9slE", "KiXOHCraBO", ...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", ...
[ " We are happy to hear that our rebuttal response, additional experiments, and improvements in figures addressed your concerns well.\n\nWe will further improve visualizations of learning curves and structures of presenting results based on your suggestions.\n\nThank you again for the valuable suggestions and commen...
[ -1, 6, -1, -1, -1, 5, -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ "K4N7F9v_GFt", "iclr_2022_OWZVD-l-ZrC", "b9Urr5fUAFl", "232ZI-0lkt", "keVUDiYGKqv", "iclr_2022_OWZVD-l-ZrC", "gJU_Bg2Stl", "iclr_2022_OWZVD-l-ZrC", "nmpfaGF-Irs", "23-zCMzIifh", "PCkwcwMLc2S", "P4jMh9WsgWy", "G7aB2dcqr4f", "yEAiU1c5bPR", "iclr_2022_OWZVD-l-ZrC", "G7aB2dcqr4f", "TmNBK...
iclr_2022_w01vBAcewNX
On Covariate Shift of Latent Confounders in Imitation and Reinforcement Learning
We consider the problem of using expert data with unobserved confounders for imitation and reinforcement learning. We begin by defining the problem of learning from confounded expert data in a contextual MDP setup. We analyze the limitations of learning from such data with and without external reward and propose an adj...
Accept (Poster)
The authors consider the problem of using expert data with unobserved confounders for both imitation and reinforcement learning settings. They showed how latent confounders negatively affect the learning process and proposed a sampling algorithm that mitigates the impact and delivers good empirical results. I agree ...
train
[ "damFBz_DYH1", "99xexD5y2F4", "z1ZGMeTcR-v", "uR7LyIXMYZc", "PNaDtU5gvW1", "CUNAXL1FGMh", "twSsPTv6WiL", "lnx6wEMge2F", "MFRw-OB08G", "eJvd9tVQlM5", "iFkwXXImNlw", "ZvOwSbpE4p", "bBfYuLTk3YJ", "74XOrY4aaMx", "ejo4SZ-w3Qr" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' response which addresses my concerns, so I keep my positive score.", " Thanks for addressing my comments. I agree with the authors that analyzing the sample complexity might be out of the scope of this paper. I decided to keep my positive score.", " Thank you for your response. We appr...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "eJvd9tVQlM5", "MFRw-OB08G", "uR7LyIXMYZc", "lnx6wEMge2F", "twSsPTv6WiL", "ejo4SZ-w3Qr", "lnx6wEMge2F", "74XOrY4aaMx", "bBfYuLTk3YJ", "ZvOwSbpE4p", "iclr_2022_w01vBAcewNX", "iclr_2022_w01vBAcewNX", "iclr_2022_w01vBAcewNX", "iclr_2022_w01vBAcewNX", "iclr_2022_w01vBAcewNX" ]
iclr_2022_AmUhwTOHgm
Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations
In NLP, a large volume of tasks involve pairwise comparison between two sequences (e.g. sentence similarity and paraphrase identification). Predominantly, two formulations are used for sentence-pair tasks: bi-encoders and cross-encoders. Bi-encoders produce fixed-dimensional sentence representations and are computation...
Accept (Poster)
For pairs of pieces of text, the central idea of this paper is to combine the approaches of using bi-encoders (where a vector is formed from each text then compared), which are easily trained in an unsupervised manner, with cross-encoders (where the two texts are related at the token level), which are normally trained ...
train
[ "O-qdFlkY16b", "-481_CMQ3jA", "qRW7EzWgK_", "lO-wMZXQfHH", "vd54BTKrHy", "iT1-lbA-hKx", "HQ4buXt1zZm", "bjCrSo-v_JV", "tpFJS_PRTUI", "gamhc_4C1KC", "p8ID7YaNVFU" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " thank you for reading our response!", " - I reviewed Table 10 and Appendix 3. The training time is reasonable. \n- Moreover, the training stability (using different random seeds) is encouraging. \n- The revised appendix A.2 (comparing between BCE and MSE losses) is interesting. ", " *Reviewer DZEu*: **...The ...
[ -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "-481_CMQ3jA", "qRW7EzWgK_", "p8ID7YaNVFU", "tpFJS_PRTUI", "gamhc_4C1KC", "iclr_2022_AmUhwTOHgm", "bjCrSo-v_JV", "iclr_2022_AmUhwTOHgm", "iclr_2022_AmUhwTOHgm", "iclr_2022_AmUhwTOHgm", "iclr_2022_AmUhwTOHgm" ]
iclr_2022_WPI2vbkAl3Q
Learning Curves for SGD on Structured Features
The generalization performance of a machine learning algorithm such as a neural network depends in a non-trivial way on the structure of the data distribution. To analyze the influence of data structure on test loss dynamics, we study an exactly solvable model of stochastic gradient descent (SGD) on the square loss wh...
Accept (Poster)
The work presented in this study characterises the finite-sample generalisation performance of stochastic gradient descent on linear models, for different batch sizes and feature structures. This approach enables the authors to predict the training and test losses of neural networks on real data. While there were som...
test
[ "UQKf85syqhB", "oYjX-Fd7yT0", "rcVzw6n_lkJ", "QYICz2YBbBO", "lVZuf_SFvP9", "2vwwoPtFz0K", "qIRsoyijCG0", "tb5nNr8iNa8", "FF9dDZexq87", "Scza0yTBwmP", "ZNBqBuHB8k", "FrUOjdX9Eyo", "nMXn-JMDp0D", "w-mxgcyZZk2", "xB80qJ5nyL", "v-PgC9vNHa", "qvWbSUngTfE", "plvrO_oSXWQ", "dJtJq1zutB",...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_r...
[ " I thank the authors for carefully addressing my concerns and for providing some useful clarifications. I have appreciated the effort in improving the manuscript, that is now more solid and, in my opinion, worth publication.", "This paper addresses the problem of characterising analytically the dynamics of stoch...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "qvWbSUngTfE", "iclr_2022_WPI2vbkAl3Q", "ZNBqBuHB8k", "lVZuf_SFvP9", "2vwwoPtFz0K", "qIRsoyijCG0", "tb5nNr8iNa8", "FF9dDZexq87", "Scza0yTBwmP", "ZNBqBuHB8k", "FrUOjdX9Eyo", "nMXn-JMDp0D", "xB80qJ5nyL", "iclr_2022_WPI2vbkAl3Q", "_1B7LVaPIQe", "wchgphMhsdS", "oYjX-Fd7yT0", "oYjX-Fd7y...
iclr_2022_SwIp410B6aQ
On the Role of Neural Collapse in Transfer Learning
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes. Recent results in the literature show that representations learned by a single classifier over many classes are competitive on few-shot learning problems with representations learned by sp...
Accept (Poster)
Based on the previously observed neural collapse phenomenon that the features learned by over-parameterized classification networks show an interesting clustering property, this paper provides an explanation for this behavior by studying the transfer learning capability of foundation models for few-shot downstream task...
test
[ "ejOoMg6cyf", "Lrch8U5XLEf", "8R_ltdtt3b3", "jNGvie15_ZV", "B-3WaD3m61", "DVfVh2HpESN", "G9e_zAZF-VN", "MUIRNjFTLkJ", "L3ljQ-V5Ajp", "jhbBFNUbS_x", "Cnl1fr8KKrx", "J-hTo4299ZY", "rM04TeXwkI8", "DXUwIhUYcJv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Large pretrained models that can be effectively adapted to various tasks via transfer learning have been recently characterized as foundation models. (Papyan et al.) recently introduced the idea of neural collapse(NC) in deep networks. It identifies the training dynamics of deep neural networks for classification ...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2022_SwIp410B6aQ", "iclr_2022_SwIp410B6aQ", "jhbBFNUbS_x", "G9e_zAZF-VN", "G9e_zAZF-VN", "DXUwIhUYcJv", "MUIRNjFTLkJ", "Lrch8U5XLEf", "rM04TeXwkI8", "ejOoMg6cyf", "J-hTo4299ZY", "iclr_2022_SwIp410B6aQ", "iclr_2022_SwIp410B6aQ", "iclr_2022_SwIp410B6aQ" ]
iclr_2022_EcGGFkNTxdJ
Trust Region Policy Optimisation in Multi-Agent Reinforcement Learning
Trust region methods rigorously enabled reinforcement learning (RL) agents to learn monotonically improving policies, leading to superior performance on a variety of tasks. Unfortunately, when it comes to multi-agent reinforcement learning (MARL), the property of monotonic improvement may not simply apply; this is bec...
Accept (Poster)
The submission proposes a new approach to deriving a policy gradient type algorithm for multi agent RL (MARL) where the agents are interested in a common objective but with potentially different action spaces. It extends the monotone improvement property for single agent trust-region based methods like TRPO to a multi ...
train
[ "KrhDslsavfB", "Msq4cdsLMC", "nB5552y8UP", "CNz60Hhhl7q", "gcwWjHfWIB", "V2QBxs8nCHx", "IQwrMYbmjh", "e8ivYcxu3Kd", "uAdAhX_ToPe", "AwWJfSZPw_4", "u-eM9raUaq", "rEEWXGO7bm8", "AS0yC0mL6VX", "uwmTa6WG4rU", "e1cbXUNZq1", "JGoneDwtvxk", "M5wwEaAIkam" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for proposing this interesting idea. Although we have not managed to develop and test a method that would assure an affirmative answer to this question, we have made some conceptual progress. Below, we describe the results of our efforts.\n\nWe tried to formulate a meta-learning objective of which solut...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "V2QBxs8nCHx", "e8ivYcxu3Kd", "iclr_2022_EcGGFkNTxdJ", "AwWJfSZPw_4", "nB5552y8UP", "uwmTa6WG4rU", "iclr_2022_EcGGFkNTxdJ", "e1cbXUNZq1", "e1cbXUNZq1", "M5wwEaAIkam", "M5wwEaAIkam", "nB5552y8UP", "nB5552y8UP", "JGoneDwtvxk", "iclr_2022_EcGGFkNTxdJ", "iclr_2022_EcGGFkNTxdJ", "iclr_202...
iclr_2022_noaG7SrPVK0
Counterfactual Plans under Distributional Ambiguity
Counterfactual explanations are attracting significant attention due to the flourishing applications of machine learning models in consequential domains. A counterfactual plan consists of multiple possibilities to modify a given instance so that the model's prediction will be altered. As the predictive model can be upd...
Accept (Poster)
The paper provides a neat idea about explaining (linear) predictors based on designing ways of perturbing parameters. It is focused on linear models (which can still lead to non-linear classifiers), but it is a relevant case, particularly for explainability.
test
[ "5Uev6jppXAi", "-OKgHBXZQmc", "-shqmRZDyCd", "0nM-9T8PpqB", "CjabbfzRXkM", "LQmdqw6U6LP", "FVirXzSuMDb", "FoZHn9Cti_", "AvAiEVX4wac", "SKlAbQ5QxlW", "4ENB9dhxEIo", "-sZEte8hPxC", "t6FJcEWI9ly", "d8zVONQ5rnG" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nWe thank the reviewer again for your insightful reviews and thoughtful suggestions. We have done our best to address the questions raised in your review. The discussion period is coming to a close within a day and we remain open to discussing any remaining concerns you may have until the very en...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "CjabbfzRXkM", "4ENB9dhxEIo", "iclr_2022_noaG7SrPVK0", "t6FJcEWI9ly", "LQmdqw6U6LP", "FVirXzSuMDb", "d8zVONQ5rnG", "SKlAbQ5QxlW", "-shqmRZDyCd", "AvAiEVX4wac", "-sZEte8hPxC", "0nM-9T8PpqB", "iclr_2022_noaG7SrPVK0", "iclr_2022_noaG7SrPVK0" ]
iclr_2022_TXsjU8BaibT
Trigger Hunting with a Topological Prior for Trojan Detection
Despite their success and popularity, deep neural networks (DNNs) are vulnerable when facing backdoor attacks. This impedes their wider adoption, especially in mission critical applications. This paper tackles the problem of Trojan detection, namely, identifying Trojaned models – models trained with poisoned data. One ...
Accept (Poster)
This paper proposes a diversity loss and a topological prior to not only increase the chances of finding the appropriate triggers but also improve the quality of the found triggers. These loss terms significantly improve the efficiency in finding trojaned triggers. The experimental results show that the proposed method ...
train
[ "bkMeBFyDKEC", "-tydZqQW-qk", "W9h6MaZv1XH", "Rw9KjJLHFe0", "535fHs6AYab", "JTfCNB3DWtc", "l01dYy0UxE4", "9f-xUftCQSV", "Quw6ggMgAfe", "NeK24dacqPc", "PCr0FhgiksS", "cy-Fex9kLgG", "QeS0QnckJEc", "YtXYhjd8kuW", "S3n34YHxucM" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nThanks very much for your feedback! We really appreciate your going through our response and agreeing to update your recommendation. However, looking at your initial review, we still couldn't see your score being updated. Would you please kindly take a look?\n\nSincerely,\n\nAuthors", " Thanks...
[ -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, 8, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "-tydZqQW-qk", "535fHs6AYab", "535fHs6AYab", "JTfCNB3DWtc", "S3n34YHxucM", "NeK24dacqPc", "iclr_2022_TXsjU8BaibT", "iclr_2022_TXsjU8BaibT", "l01dYy0UxE4", "YtXYhjd8kuW", "QeS0QnckJEc", "iclr_2022_TXsjU8BaibT", "iclr_2022_TXsjU8BaibT", "iclr_2022_TXsjU8BaibT", "iclr_2022_TXsjU8BaibT" ]
iclr_2022_6yVvwR9H9Oj
On Non-Random Missing Labels in Semi-Supervised Learning
Semi-Supervised Learning (SSL) is fundamentally a missing label problem, in which the label Missing Not At Random (MNAR) problem is more realistic and challenging, compared to the widely-adopted yet naive Missing Completely At Random assumption where both labeled and unlabeled data share the same class distribution. Di...
Accept (Poster)
Addressed semi-supervised learning in the MNAR setting. Well-written paper. Several additional experiments were reported in response to the reviewer questions. General agreement amongst reviewers.
train
[ "RenAnqHWH8Q", "7wwomzUpb4-", "v8C5TbLVQeo", "pZFsbiDlblC", "8fXZGl41OI0", "CMzyJvPoP6y", "Z3VEwHexE6A" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We gratefully thank all the reviewers for their valuable and constructive comments. We are encouraged that they find our topic interesting and important (Reviewer 9j41), our view for semi-supervised learning clear and theoretically decent (Reviewer D3Mi), our method probabilistically rigorous, theoretically effec...
[ -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, 4, 3, 2 ]
[ "iclr_2022_6yVvwR9H9Oj", "Z3VEwHexE6A", "8fXZGl41OI0", "CMzyJvPoP6y", "iclr_2022_6yVvwR9H9Oj", "iclr_2022_6yVvwR9H9Oj", "iclr_2022_6yVvwR9H9Oj" ]
iclr_2022_FndDxSz3LxQ
Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks
Despite the recent success of Graph Neural Networks (GNNs), training GNNs on large graphs remains challenging. The limited resource capacities of the existing servers, the dependency between nodes in a graph, and the privacy concern due to the centralized storage and model learning have spurred the need to design an ef...
Accept (Poster)
Dear Authors, The paper was received nicely and discussed during the rebuttal period. The current discussions mostly lie on the acceptance side. Some pros of the paper include: - Timely topic: This paper deals with the problem of distributed training of GNNs. - New algorithm: this method captures the idea of tran...
train
[ "DR7xfW6NY8N", "pryNLuEhzhr", "MaLbpWiaZoY", "6-_7v6ZWjmO", "SCFUknZaVZY", "9WKd7bnnAgT", "PKLe28OHoLE", "LtziSzl1_Uz", "evxBDuGrSa", "BExoJ8kQ_z2", "tuWHivB5b1h", "t88e-8kOHMq", "t8qaFIvhjR", "OSSVD0mivM5", "h83sMWfPCj-", "wGcSE8K3lEC", "iK0_ZedP0Gm", "jNTQGaiMpz0", "RB7TUkWou0L...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for taking the time and checking our responses. We agree with the reviewer that claiming about the irreducibility of the error might be too strong. \n\n1. With regards to the improvement in upper bound obtained by Theorem 2 compared to Theorem 1, we note that $\\mathcal{O}(t\\_2) = \\mathcal{O}(\\kapp...
[ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2 ]
[ "6-_7v6ZWjmO", "SCFUknZaVZY", "LtziSzl1_Uz", "wGcSE8K3lEC", "t88e-8kOHMq", "iclr_2022_FndDxSz3LxQ", "iclr_2022_FndDxSz3LxQ", "tuWHivB5b1h", "iclr_2022_FndDxSz3LxQ", "h83sMWfPCj-", "jNTQGaiMpz0", "9WKd7bnnAgT", "OSSVD0mivM5", "iK0_ZedP0Gm", "wGcSE8K3lEC", "RB7TUkWou0L", "iclr_2022_Fnd...
iclr_2022_oh4TirnfSem
PF-GNN: Differentiable particle filtering based approximation of universal graph representations
Message passing Graph Neural Networks (GNNs) are known to be limited in expressive power by the 1-WL color-refinement test for graph isomorphism. Other more expressive models either are computationally expensive or need preprocessing to extract structural features from the graph. In this work, we propose to make GNNs u...
Accept (Poster)
This paper presents a neural version of individual-refinement (IR) architecture for improving the expressiveness of GNN in terms of isomorphism tests. As IR is the dominant approach of practical graph isomorphism tests, adapting IR to GNN is a novel and important idea. As IR suffers from the exponential number of branc...
train
[ "MP24dFBYgBk", "Ykr70dkj2ma", "KW6RS5_8ELu", "G6-ECoAsvVr", "naugMu1ECjK", "GHcqPRQoS2", "-g43CVCBKaX", "Mo4XbD6gzs", "E-xtBf0yqkV", "ERhEV4pkwp", "naLzr_iShEN", "F6cKlDxP1S6", "gbm631HKOAF", "plqJlU2RnP", "JPAFuHDs7nK", "xSfEsSdH9Z7", "QBMFMMwwZaf", "cA7T83Or5w3", "07TL6E3szba",...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", ...
[ " Thank you very much for your positive opinion on our work! We appreciate your thoughtful comments and pertinent suggestions during the discussion. Best regards.", " As a reviewer I appreciate the author's additional work on partially addressing these problems. I encourage the author to continue improve the pap...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 2 ]
[ "Ykr70dkj2ma", "KW6RS5_8ELu", "G6-ECoAsvVr", "GHcqPRQoS2", "-g43CVCBKaX", "naugMu1ECjK", "cA7T83Or5w3", "E-xtBf0yqkV", "07TL6E3szba", "F6cKlDxP1S6", "iclr_2022_oh4TirnfSem", "JPAFuHDs7nK", "xSfEsSdH9Z7", "iclr_2022_oh4TirnfSem", "QBMFMMwwZaf", "dpN7Lc2lcq", "NekvnHLKjku", "U3vOlaU8...
iclr_2022_8la28hZOwug
Prototypical Contrastive Predictive Coding
Transferring representational knowledge of a model to another is a wide-ranging topic in machine learning. Those applications include the distillation of a large supervised or self-supervised teacher model to a smaller student model or self-supervised learning via self-distillation. Knowledge distillation is an origina...
Accept (Poster)
This paper proposes a prototypical contrastive predictive coding by combining the prototypical method and contrastive learning, and presents its efficient implementation for three distillation tasks: supervised model compression, self-supervised model compression, and self-supervised learning via self-distillation. The...
train
[ "T5PaQ4fcYah", "Bz1adNdGRgQ", "OCS76cqBFiL", "cDmFLXK7Sbk", "BfA2rl715MM", "hiRRGo0VQL_", "-_QsUT2BSe4", "-dgE1CVTO71", "wJQip4XXAHm", "_BAhSjrpqNb", "cmHZow-5NrG", "DQwsuyACiiZ", "GYJJlkgI4Zu" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " It is nice that you appreciate our new paper title and responses to the reviews. Also, we sincerely appreciate that you understand the novelty of our work and raising the score. \n\nIf there is any further questions we'll be pleased to answer. Thank you!\n", " Thanks for your response to my questions and those ...
[ -1, -1, 6, 6, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "Bz1adNdGRgQ", "hiRRGo0VQL_", "iclr_2022_8la28hZOwug", "iclr_2022_8la28hZOwug", "iclr_2022_8la28hZOwug", "OCS76cqBFiL", "GYJJlkgI4Zu", "DQwsuyACiiZ", "GYJJlkgI4Zu", "DQwsuyACiiZ", "cDmFLXK7Sbk", "iclr_2022_8la28hZOwug", "iclr_2022_8la28hZOwug" ]
iclr_2022_Z7Lk2cQEG8a
The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions
We prove that finding all globally optimal two-layer ReLU neural networks can be performed by solving a convex optimization program with cone constraints. Our analysis is novel, characterizes all optimal solutions, and does not leverage duality-based analysis which was recently used to lift neural network training into...
Accept (Oral)
A conceptually and technically highly innovative paper which reinforces an existing powerful connection between the critical set of two-layer ReLU networks and suitable convex programs with cone constraints. The reviewers are in strong consensus that the paper is sound and has merits for publication.
train
[ "fDbiD8q760i", "j_VpbX7AJ7d", "eEywZdVwhe4", "Ji2JMPva5F3", "dOU_HD6U4W_", "ZKryGI2DMD7", "RyapASf-dR1", "UHHFi-ey4t6" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Along the line of Pilanci & Ergen (2020), this submission deals learning two-layer ReLU neural networks through convex optimization. It introduces a number of new notions such as (nearly) minimal neural networks, developing a set of interesting tools, and draws connections between the minimal neural networks and t...
[ 8, -1, -1, -1, -1, 8, 8, 8 ]
[ 4, -1, -1, -1, -1, 3, 4, 3 ]
[ "iclr_2022_Z7Lk2cQEG8a", "fDbiD8q760i", "UHHFi-ey4t6", "RyapASf-dR1", "ZKryGI2DMD7", "iclr_2022_Z7Lk2cQEG8a", "iclr_2022_Z7Lk2cQEG8a", "iclr_2022_Z7Lk2cQEG8a" ]
iclr_2022_9SDQB3b68K
DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning
Offline reinforcement learning algorithms promise to be applicable in settings where a fixed dataset is available and no new experience can be acquired. However, such formulation is inevitably offline-data-hungry and, in practice, collecting a large offline dataset for one specific task over one specific environment is...
Accept (Poster)
**Summary** This paper proposes a sample-efficient offline domain adaptation method for a setting with an abundant amount of data from the source domain but only a limited amount of data available in the target domain. The proposed approach DARA achieves that by accounting for the dynamics...
test
[ "CLzpWBXFI16", "tL78nHas6H", "9rm7uDGsHKL", "8ONw62nkPIM", "9tQktj21a8f", "cd2mZUFYhwp", "ZPOqFyoVJNL", "asm6--Tx5-B", "CNh6LWSfum", "nlZCeor_euj", "pvCG8bAIYS", "N6mxhlAdf8", "_FRDpp-qkT1", "Dcjk-f24pF", "26t1Z29fA7r" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the reply! Here, we will address your remaining concerns. \n\n**(1) Compared with Eysenbach et al. 2021** \n\nIn addition to the differences in problem setup, derivation, experiment, and research background (as mentioned in the above response), we provide two detailed differences from *updating the ...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 5, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "tL78nHas6H", "9rm7uDGsHKL", "_FRDpp-qkT1", "ZPOqFyoVJNL", "cd2mZUFYhwp", "pvCG8bAIYS", "iclr_2022_9SDQB3b68K", "nlZCeor_euj", "ZPOqFyoVJNL", "N6mxhlAdf8", "26t1Z29fA7r", "Dcjk-f24pF", "iclr_2022_9SDQB3b68K", "iclr_2022_9SDQB3b68K", "iclr_2022_9SDQB3b68K" ]
iclr_2022_XVPqLyNxSyh
Salient ImageNet: How to discover spurious features in Deep Learning?
Deep neural networks can be unreliable in the real world especially when they heavily use {\it spurious} features for their predictions. Focusing on image classifications, we define {\it core features} as the set of visual features that are always a part of the object definition while {\it spurious features} are the on...
Accept (Poster)
The paper tackles the important problem of spurious feature detection in deep neural networks. Specifically, it proposes a framework to identify core and spurious features by investigating the activation maps with human supervision. Then, it produces an annotated version of the ImageNet dataset with core and spurious f...
train
[ "T88zDM-nfvM", "CPo6F_7q2hM", "jBBwwaxeO9n", "FP6bjPD0Ic", "CE6-qhkqBBr", "EoLVxo5MR5", "6Nsmt6p6qmA", "2hzkk-eMVbka", "7bGJyxTkkB", "f_56RroNT4G", "KuUrFT5kmIZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This work proposes a scheme to identify 'causal' and 'spurious' features with human supervision. More specifically, the method presented decomposes the activations that map to the logits of a supervised DNN into neural features. The neural features are then ranked and visualized with the CAM feature attribution me...
[ 8, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_XVPqLyNxSyh", "f_56RroNT4G", "iclr_2022_XVPqLyNxSyh", "jBBwwaxeO9n", "T88zDM-nfvM", "2hzkk-eMVbka", "T88zDM-nfvM", "KuUrFT5kmIZ", "jBBwwaxeO9n", "T88zDM-nfvM", "iclr_2022_XVPqLyNxSyh" ]
iclr_2022_QJWVP4CTmW4
Ada-NETS: Face Clustering via Adaptive Neighbour Discovery in the Structure Space
Face clustering has attracted rising research interest recently to take advantage of massive amounts of face images on the web. State-of-the-art performance has been achieved by Graph Convolutional Networks (GCN) due to their powerful representation capacity. However, existing GCN-based methods build face graphs mainly...
Accept (Poster)
All reviewers agree that the presented Ada-NETS approach is very interesting and sufficiently novel, addressing the degradation problem in face clustering. The reviewers are satisfied with the presented experimental studies in most cases. The rebuttal addressed a large majority of the additionally raised questions. I disag...
train
[ "1pZaW-nhFbn", "5B5rQwH2Xyk", "8c6XbGqB4_d", "l1CiN7StJBJ", "2T5X9liz-qz", "wgHJfRnFuzJ", "jEWoySflVY2", "ij7alRtmj6B" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a new way of constructing a graph for Graph Convolutional Networks (GCNs) for face clustering that is claimed to alleviate the problem of having a significant amount of noisy edges in the graph as in previous methods for face clustering. Their method consists of two innovations: firstly, facial...
[ 3, 8, -1, -1, -1, -1, 6, 6 ]
[ 4, 4, -1, -1, -1, -1, 5, 4 ]
[ "iclr_2022_QJWVP4CTmW4", "iclr_2022_QJWVP4CTmW4", "5B5rQwH2Xyk", "ij7alRtmj6B", "1pZaW-nhFbn", "jEWoySflVY2", "iclr_2022_QJWVP4CTmW4", "iclr_2022_QJWVP4CTmW4" ]
iclr_2022_lnEaqbTJIRz
The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design
Pretraining Neural Language Models (NLMs) over a large corpus involves chunking the text into training examples, which are contiguous text segments of sizes processable by the neural architecture. We highlight a bias introduced by this common practice: we prove that the pretrained NLM can model much stronger dependenci...
Accept (Spotlight)
This paper presents a novel framing of what's at stake when selecting/segmenting text for use in language model pretraining. Four reviewers with experience working with these models agreed that the conceptual and theoretical work here is insightful and worth sharing. The empirical work is fairly small-scale and does no...
train
[ "VluDZlu8XBI", "_S_aVV1gek", "sUPa1qI4yF", "921tlMcxv4l", "xykRokzk90y", "1OhTy4F6tsS", "1TjToumF3U", "nBcO-lrR20", "3OV5GIsT14", "c5UKHIS17U8", "Bkcb9UjoM-U", "cRlTb5H_pze", "VcwEJUgaJN6", "LXeq0Y-Vpqr", "HSq8gUS5Rju", "O_F_Z7yCpXv", "S5GhA_Db7Fv" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper provides an argument for reconsidering how we construct a single training example during LM (pre)training. It gives a formal proof that dependencies between two texts are vanishingly weak when the texts are presented in separate training examples (but not when they are in the same training example). Int...
[ 8, -1, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 3, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "iclr_2022_lnEaqbTJIRz", "1OhTy4F6tsS", "iclr_2022_lnEaqbTJIRz", "iclr_2022_lnEaqbTJIRz", "nBcO-lrR20", "c5UKHIS17U8", "S5GhA_Db7Fv", "VcwEJUgaJN6", "sUPa1qI4yF", "Bkcb9UjoM-U", "LXeq0Y-Vpqr", "O_F_Z7yCpXv", "921tlMcxv4l", "VluDZlu8XBI", "iclr_2022_lnEaqbTJIRz", "iclr_2022_lnEaqbTJIRz"...
iclr_2022_VBZJ_3tz-t
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
Random pruning is arguably the most naive way to attain sparsity in neural networks, but has been deemed uncompetitive by either post-training pruning or sparse training. In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding, that random pruning at initialization can be quite powe...
Accept (Poster)
### Summary This paper builds on previous work on sparse training showing that many modern sparse training techniques do no better than a random pruning technique that selects layer-wise ratios but otherwise randomly selects which weights within a layer to remove. The key difference in this work is to take these ...
train
[ "0HousP5yX4x", "KxeAdivGKtW", "OaQ20aWkF8F", "U_J7CgRO7-S", "cdm1WXOimoB", "NvgrpNA_odU", "H-ei37M-4Ra", "q4vJTvPV-ka", "NANU73y_ejU", "MFK4dT8u7sP", "jZaTNr54tpB", "M-oFkpCjXA0", "JyDDl0--TkT", "FENN3QYjXo", "HkizZ0rj5lW", "MKRjh9yyUPX", "r6sbO7cSc9R", "J6zqp-YGcUP", "_XTvUvdKzu...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " **Q2: Frankle et al. 2020b has quite a lot of overlap with the paper.** \n- Thanks for pointing out this important and insightful prior work which we are actually quite familiar with it. While there are indeed some overlaps existing in these two papers, we argue with confidence that our main paper differs from Fr...
[ -1, -1, 6, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, 5, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "OaQ20aWkF8F", "U_J7CgRO7-S", "iclr_2022_VBZJ_3tz-t", "0HousP5yX4x", "r6sbO7cSc9R", "OaQ20aWkF8F", "NANU73y_ejU", "iclr_2022_VBZJ_3tz-t", "HkizZ0rj5lW", "FENN3QYjXo", "OaQ20aWkF8F", "iclr_2022_VBZJ_3tz-t", "iclr_2022_VBZJ_3tz-t", "q4vJTvPV-ka", "q4vJTvPV-ka", "_XTvUvdKzu1", "J6zqp-YG...
iclr_2022_RLtqs6pzj1-
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
The success of deep ensembles on improving predictive performance, uncertainty estimation, and out-of-distribution robustness has been extensively studied in the machine learning literature. Albeit the promising results, naively training multiple deep neural networks and combining their predictions at inference leads t...
Accept (Poster)
#### Summary The goal of this work is to reduce the cost of inference in ensembled models by ensembling sparse models. The paper also aims to reduce the cost of training these ensembles. The proposed techniques (DST and EDST) address each of these goals, respectively. #### Discussion As noted by the reviewers, the p...
val
[ "OxSytTyV8N", "coDInvmtt-S", "S7QCfx20y7r", "LjH1XcKLLo9", "03ql5Fpcd6I", "7SjN6quCyq", "Rww0DdeTy8", "6GrVx75k_CS", "DiCtzuXb3a5", "ao6kL3KHfE0", "q3ur4oKLQtG", "7eNAiL5X4N", "RVgsEYr6qZH", "U2Q3PwTlUF", "xL0lnVPD6PF", "kHsYxG7rlji", "FIhDKZuEbma", "1MjiQJYmJCk", "NKc7dAgZp0i", ...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer"...
[ " Dear AC and all Reviewers,\n\n \nWe sincerely appreciate AC and all reviewers’ time and insightful comments, which help a lot in further improving our paper. We are so glad that the merits of our work have been unanimously recognized by reviewers **4rju**, **HPxD**, and **hJ1v**, who gracefully responded to us an...
[ -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_RLtqs6pzj1-", "UN-q2pyoa6", "iclr_2022_RLtqs6pzj1-", "S7QCfx20y7r", "UN-q2pyoa6", "iclr_2022_RLtqs6pzj1-", "S7QCfx20y7r", "UN-q2pyoa6", "FIhDKZuEbma", "iclr_2022_RLtqs6pzj1-", "7SjN6quCyq", "7SjN6quCyq", "iclr_2022_RLtqs6pzj1-", "7SjN6quCyq", "7SjN6quCyq", "WEniRlr5E9-", "...
iclr_2022_GugZ5DzzAu
Permutation Compressors for Provably Faster Distributed Nonconvex Optimization
In this work we study the MARINA method of Gorbunov et al (ICML, 2021) -- the current state-of-the-art distributed non-convex optimization method in terms of theoretical communication complexity. Theoretical superiority of this method can be largely attributed to two sources: a carefully engineered biased stochastic gr...
Accept (Poster)
The reviewers agreed that the paper is borderline. Although the reviewers are not fully convinced by the authors’ responses, they still acknowledge that the paper is interesting and develops some new techniques for the analysis of distributed optimization. The following concerns were raised by the reviewe...
train
[ "MHXvM2HRbE", "0b8wkib7x6a", "eIOkd77n_mV", "_Eo-uOKr4L", "aay5xOXMZcI", "Gutygv5_cmi", "jdIdmyae72w", "ckQfSvrn2cB", "pm1jhu_bP4-", "9oiHaUkG2-N", "jUg9RhSi3i", "Uirpg0m4GI7", "iwDc5YO_TvT", "qXSeunNlBVr", "tPIoqfWUijW", "UT-97EQuZ6T", "RIzvdKUFkFg", "KpLALTOnKOi", "hnv4rPhrs6z"...
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer...
[ " Dear reviewers,\n\nThanks again for your reviews!\n\n**Our work offers a new SOTA in theoretical communication complexity for distributed optimization/training in the smooth nonconvex regime, which is a key problem in modern machine learning (neural network training always leads to nonconvex problems, and for mos...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2 ]
[ "iclr_2022_GugZ5DzzAu", "2MLO7wA-ODh", "7ZfF3nqxXG", "iclr_2022_GugZ5DzzAu", "Hgc27zoDfz", "Hgc27zoDfz", "GF-64hkp4B", "FyOO-wsVSqG", "FyOO-wsVSqG", "FyOO-wsVSqG", "2MLO7wA-ODh", "2MLO7wA-ODh", "2MLO7wA-ODh", "2MLO7wA-ODh", "2MLO7wA-ODh", "2MLO7wA-ODh", "yiPS5Yz-GXG", "yiPS5Yz-GXG"...
iclr_2022_6Q52pZ-Th7N
Pseudo-Labeled Auto-Curriculum Learning for Semi-Supervised Keypoint Localization
Localizing keypoints of an object is a basic visual problem. However, supervised learning of a keypoint localization network often requires a large amount of data, which is expensive and time-consuming to obtain. To remedy this, there is an ever-growing interest in semi-supervised learning (SSL), which leverages a smal...
Accept (Poster)
This paper proposes a pseudo-labeled data selection method for semi-supervised pose estimation. The investigated task in this paper is practical and useful. The framework is well designed and reasonable, and extensive ablation studies are conducted to test the efficacy of the method. After discussion, all the reviewers...
train
[ "2c39dslJsVm", "RVDTeVf6sXP", "-QVHhLpx5Ag", "a5xmaTyGmzy", "O5FoHoGcwW5", "beV8KT-jZO", "aRSHRCQ7G9I", "p8u25RtCWXX", "SjMEmgCFKiK", "C108q_pp8_T" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for responding to my concerns and questions.", " Thank you for your answers.", "1. This paper introduces Curriculum Learning to semi-supervised keypoint localization, which is an automatic pseudo-labeled data selection method. The method uses reinforcement learning to learns a series of dynamic thre...
[ -1, -1, 6, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, 3, -1, -1, -1, -1, -1, 4, 4 ]
[ "beV8KT-jZO", "a5xmaTyGmzy", "iclr_2022_6Q52pZ-Th7N", "O5FoHoGcwW5", "SjMEmgCFKiK", "C108q_pp8_T", "p8u25RtCWXX", "-QVHhLpx5Ag", "iclr_2022_6Q52pZ-Th7N", "iclr_2022_6Q52pZ-Th7N" ]
iclr_2022__jMtny3sMKU
Generalizing Few-Shot NAS with Gradient Matching
Efficient performance estimation of architectures drawn from large search spaces is essential to Neural Architecture Search. One-Shot methods tackle this challenge by training one supernet to approximate the performance of every architecture in the search space via weight-sharing, thereby drastically reducing the searc...
Accept (Poster)
All reviewers give acceptance scores. One reviewer also commented that they would like to increase their score from 6 to 7 (which isn't possible in the system). I encourage the authors to add the substantial new results generated during the rebuttal into the paper.
train
[ "ttQ09Vup-Ae", "myAs0zNRj5c", "uKM_CwTaixZ", "dcPyvn8UgyS", "coRHdedgEIo", "EWXlsOS-rb", "iGTnJdEeFyC", "EWWAnrQuDq", "ToEszxdu6RM", "6DZDQ1BNeBA", "yWJ5nHelIje", "IAcsyPXZa9l", "YV9W-tnjbRH", "2mku83VXFpb" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers,\n\nWe thank all reviewers for their participation during the discussion period. And we are delighted that your concerns and questions are properly addressed and that your ratings are further improved. We will incorporate the valuable discussions here in our final draft. Moreover, we will be here t...
[ -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022__jMtny3sMKU", "ToEszxdu6RM", "iclr_2022__jMtny3sMKU", "uKM_CwTaixZ", "2mku83VXFpb", "iclr_2022__jMtny3sMKU", "iclr_2022__jMtny3sMKU", "uKM_CwTaixZ", "2mku83VXFpb", "YV9W-tnjbRH", "EWXlsOS-rb", "iclr_2022__jMtny3sMKU", "iclr_2022__jMtny3sMKU", "iclr_2022__jMtny3sMKU" ]
iclr_2022_m8bypnj7Yl5
Neural Solvers for Fast and Accurate Numerical Optimal Control
Synthesizing optimal controllers for dynamical systems often involves solving optimization problems with hard real-time constraints. These constraints determine the class of numerical methods that can be applied: computationally expensive but accurate numerical routines are replaced by fast and inaccurate methods, trad...
Accept (Poster)
The authors propose a novel hypersolver framework for solving numerical optimal control problems, learning a low order ODE and a neural network based residual dynamics. They compare their framework with traditional optimal control solvers on a number of control tasks and demonstrate superior performance. The reviewers...
train
[ "QdYdiiajG-8", "DQ118GRM-7", "ECAA0_RJz3q", "RY2aR8fPCal", "C9q3ek2Xhrq", "sPSbrz97MjC", "fjwhKXsEvIB", "bQ2jOuROyLa", "1FKPFQovJUs", "Q3bU9juT56h", "Jl7-q5678ku" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for your engagement. However, we disagree with the reviewer's personal stance on the relevance of this work to the ICLR community. We elaborate on this point below:\n\n> When re-reading the paper, I asked myself if the paper was primarily solving a control problem relevant to the ICLR community or demon...
[ -1, 6, -1, 8, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "ECAA0_RJz3q", "iclr_2022_m8bypnj7Yl5", "sPSbrz97MjC", "iclr_2022_m8bypnj7Yl5", "fjwhKXsEvIB", "Jl7-q5678ku", "RY2aR8fPCal", "1FKPFQovJUs", "DQ118GRM-7", "iclr_2022_m8bypnj7Yl5", "iclr_2022_m8bypnj7Yl5" ]
iclr_2022_O-r8LOR-CCA
Open-World Semi-Supervised Learning
A fundamental limitation of applying semi-supervised learning in real-world settings is the assumption that unlabeled test data contains only classes previously encountered in the labeled training data. However, this assumption rarely holds for data in-the-wild, where instances belonging to novel classes may appear at ...
Accept (Poster)
This paper addresses a novel but practical setting in which the test set consists of both seen and unseen classes of the training set. To tackle the crucial challenge of distribution mismatch between the inlier and outlier features, the authors propose a new method named ORCA that groups similar instances to ...
test
[ "PRhqo8ef-I9", "Yzfe__8Vl-", "nxqAnA_id8-", "m4U6iMLDocu", "ua7fIuVxbMI", "LJrpjmjSuJo", "daG8kKEczUJ", "2GuAyRwa6lS", "h9T7IDa5lbq", "agGm9fFHp2I", "EA5Hf4UPJB", "HgIzRWSG3D", "3WCKKTIQzbH", "TunQBUiDiZZ", "cbIrogbvX6", "I9NMReLrX4" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the suggestion. We use the OpenSelfSup toolbox (https://github.com/open-mmlab/OpenSelfSup) and we set all hyperparameters to default for both CIFAR and ImageNet datasets. We pretrain the backbone on the whole dataset for each experiment, and we additionally include the experiment in whic...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5, 4 ]
[ "nxqAnA_id8-", "iclr_2022_O-r8LOR-CCA", "EA5Hf4UPJB", "I9NMReLrX4", "cbIrogbvX6", "m4U6iMLDocu", "TunQBUiDiZZ", "h9T7IDa5lbq", "agGm9fFHp2I", "3WCKKTIQzbH", "HgIzRWSG3D", "Yzfe__8Vl-", "iclr_2022_O-r8LOR-CCA", "iclr_2022_O-r8LOR-CCA", "iclr_2022_O-r8LOR-CCA", "iclr_2022_O-r8LOR-CCA" ]
iclr_2022_vEZyTBRPP6o
Actor-critic is implicitly biased towards high entropy optimal policies
We show that the simplest actor-critic method — a linear softmax policy updated with TD through interaction with a linear MDP, but featuring no explicit regularization or exploration — does not merely find an optimal policy, but moreover prefers high entropy optimal policies. To demonstrate the strength of this bias, t...
Accept (Poster)
The paper makes a significant contribution in the rather sparse and challenging field of convergence analyses of actor-critic style algorithms, under the linear MDP structural assumption, showing that there is a natural bias towards being high-entropy. As one of the reviewers points out, although it is unlikely that th...
val
[ "QGDj8VZc_fH", "YMta4pYLVX", "vQgLQRNKo_k", "MjvMpS6x-fw", "ZpPCamksDf7", "iuTBy5Q94O0", "gUrFayuP2JH", "J3UTGAPjd2", "B18pa6RDK55", "wLxAx_sOOEH", "VhRR2RB7nYE", "qw-hiCEKQ1g", "uBzxsB2gbHC", "b5GjDiBeNEt", "iL8VUYdx3bO", "gyiVcxyOYlR", "oLZYMHiz-Na", "uakpXN4jqJI", "bxdqxoX8BBS...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_r...
[ " Thank you for taking another look, and for clarifying! This particular point about convergence to a specific policy (and not just the high entropy set of policies) is very interesting to us, and in our revisions we will try to recoup space so that we can discuss it in the open problems. Based on both preliminar...
[ -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "YMta4pYLVX", "uakpXN4jqJI", "MjvMpS6x-fw", "ZpPCamksDf7", "B18pa6RDK55", "J3UTGAPjd2", "iclr_2022_vEZyTBRPP6o", "VhRR2RB7nYE", "wLxAx_sOOEH", "uBzxsB2gbHC", "qw-hiCEKQ1g", "iL8VUYdx3bO", "iL8VUYdx3bO", "bxdqxoX8BBS", "gyiVcxyOYlR", "gUrFayuP2JH", "me_yYOy1xcq", "ppYHlmYHYuo", "i...
iclr_2022_4p6_5HBWPCw
Graph-less Neural Networks: Teaching Old MLPs New Tricks Via Distillation
Graph Neural Networks (GNNs) are popular for graph machine learning and have shown great results on wide node classification tasks. Yet, they are less popular for practical deployments in the industry owing to their scalability challenges incurred by data dependency. Namely, GNN inference depends on neighbor nodes mult...
Accept (Poster)
This paper proposes a very simple procedure to accelerate the inference time of graph-structured Neural Networks, by distilling knowledge of a GNN into a node-wise MLP. Despite some concerns about the novelty of the methodology (which borrows heavily from previous KD works), reviewers generally found this empirical wo...
train
[ "vGMxaatgLg7", "FLTrGk6Wg7G", "cvKsku_S3KE", "rnl-47CXMx", "aKy6qDoyRw2", "Bs-5zaVxlC", "V_UgEDTKhn", "PwXfxl37M1", "GbM3eaHcMh0", "6QSUaJdq7sC", "D2y0moqcffF", "jMw4isnwGSU", "mkyJnK1U14_", "KudQzIyD_TJ", "d16tjaSJsMd", "yiFER-Gm0EH", "0nbwnLePlU0", "HXN1l6H5Rzb", "6LoQAr_tUy3",...
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Dear Reviewer 5DSq,\n\nWe appreciate your detailed comments. We want to clarify some misunderstandings that caused some of your concerns.\n\n* **Comparison to MLP + C&S**\n\nWe agree C&S is a strong model, and “MLP+C&S can already achieve 100 times faster for training and inference time, and has 137 times fewer p...
[ -1, -1, -1, -1, -1, -1, -1, 10, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "FLTrGk6Wg7G", "0nbwnLePlU0", "HXN1l6H5Rzb", "GbM3eaHcMh0", "V_UgEDTKhn", "6QSUaJdq7sC", "mkyJnK1U14_", "iclr_2022_4p6_5HBWPCw", "D2y0moqcffF", "jMw4isnwGSU", "PwXfxl37M1", "EmmjOoLH_u", "6LoQAr_tUy3", "6LoQAr_tUy3", "HXN1l6H5Rzb", "HXN1l6H5Rzb", "HXN1l6H5Rzb", "iclr_2022_4p6_5HBWP...
iclr_2022_X8cLTHexYyY
Learning-Augmented $k$-means Clustering
$k$-means clustering is a well-studied problem due to its wide applicability. Unfortunately, there exist strong theoretical limits on the performance of any algorithm for the $k$-means problem on worst-case inputs. To overcome this barrier, we consider a scenario where ``advice'' is provided to help perform clustering....
Accept (Spotlight)
One might assume that the k-means problem has already been beaten to death, but this paper shows there are still remaining questions. And rather interesting ones at that, with a novel angle of having additional help from a prediction algorithm of cluster memberships. This connects to learning-augmented algorithms resea...
train
[ "zlHGdoMXSf2", "tChLFBOyviZ", "7KkTmXJ0rl", "oEwfwPs_AZw" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "This paper considers the problem of k-means clustering with the aid of a predictor which supplies a proxy to the optimal clustering subject to some possible errors. The motivation for this setting is the inherent computational issues with solving the vanilla k-means clustering problem. For this model, the authors ...
[ 8, 8, -1, 6 ]
[ 3, 5, -1, 4 ]
[ "iclr_2022_X8cLTHexYyY", "iclr_2022_X8cLTHexYyY", "iclr_2022_X8cLTHexYyY", "iclr_2022_X8cLTHexYyY" ]
iclr_2022_XWODe7ZLn8f
Contrastive Fine-grained Class Clustering via Generative Adversarial Networks
Unsupervised fine-grained class clustering is a practical yet challenging task due to the difficulty of feature representations learning of subtle object details. We introduce C3-GAN, a method that leverages the categorical inference power of InfoGAN with contrastive learning. We aim to learn feature representations th...
Accept (Spotlight)
All the reviewers liked the paper. The proposed method contains novel ideas of learning feature representations to maximize the mutual information between the latent code and its corresponding observation for fine-grained class clustering. The model seems to successfully avoid mode collapse while training generators and...
train
[ "m9rAeKs1zz5", "7paYF53jrpS", "J8JTvw9Fnw", "q2QJ3qc5PXv", "PJimima1cv7", "8hEw-iDO0y0", "WndwiPWR5zt", "BGhe_bOmoLm", "9LES2OD8WeP", "O09DNOGpUWW", "9Is1K-jmgEJ", "vFPMpBh9us2", "a0FvrPRXdo" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "-- The authors propose C3-GAN, a method that learns \"clustering-friendly\" feature representation for fine-grained clustering (main goal), by learning features of cluster centroids (latent codes) using contrastive loss on the mutual information of image-latent code pairs.\n\n-- The method should improve the GAN's...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_XWODe7ZLn8f", "iclr_2022_XWODe7ZLn8f", "WndwiPWR5zt", "BGhe_bOmoLm", "m9rAeKs1zz5", "iclr_2022_XWODe7ZLn8f", "7paYF53jrpS", "vFPMpBh9us2", "vFPMpBh9us2", "m9rAeKs1zz5", "a0FvrPRXdo", "iclr_2022_XWODe7ZLn8f", "iclr_2022_XWODe7ZLn8f" ]
iclr_2022_vUH85MOXO7h
A Neural Tangent Kernel Perspective of Infinite Tree Ensembles
In practical situations, the tree ensemble is one of the most popular models along with neural networks. A soft tree is a variant of a decision tree. Instead of using a greedy method for searching splitting rules, the soft tree is trained using a gradient method in which the entire splitting operation is formulated in ...
Accept (Poster)
The paper investigates the neural tangent kernel (NTK) of infinitely wide ensembles of soft trees having a particular soft decision function in their internal nodes. A closed form of the NTK is presented as well as a result bounding the changes of the NTK during training. Implications for practical training procedures ...
train
[ "v7JltVzQeqh", "Yoez9zPuAQ", "OhOsLBAfSVA", "J8reDjeeX63", "Oqx8h-modK-", "GijynEU2Ioa", "MWujIGVOCuF" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper presents an extension of a neural network analytic tool\ncalled Neural Tangent Kernels to its use for decision trees and\nforests. The focus is a previously proposed notion of soft trees.\nA few analytical results are presented about the properties of the\nextended notion, called Tree Neural Tangent Ker...
[ 3, -1, -1, -1, -1, 8, 8 ]
[ 2, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_vUH85MOXO7h", "OhOsLBAfSVA", "v7JltVzQeqh", "MWujIGVOCuF", "GijynEU2Ioa", "iclr_2022_vUH85MOXO7h", "iclr_2022_vUH85MOXO7h" ]
iclr_2022_0rcbOaoBXbg
Neural Spectral Marked Point Processes
Self- and mutually-exciting point processes are popular models in machine learning and statistics for dependent discrete event data. To date, most existing models assume stationary kernels (including the classical Hawkes processes) and simple parametric models. Modern applications with complex event data require more g...
Accept (Poster)
This paper proposes a self-exciting temporal point process model with a non-stationary triggering kernel to model complex dependencies in temporal and spatio-temporal event data. The kernel is represented by its finite rank decomposition and a set of neural basis functions (feature functions). The proposed model has su...
train
[ "gaoKmxpqXtW", "9DvoA9C8y_U", "gpPAit7oaiB", "gylzbFEEBwE", "Ap_l41uMM4", "Pks9M1jH-Zn", "zdIqAfSnvl9" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for the valuable feedback and constructive comments. Below, we address the questions from each reviewer. We have also revised the manuscript based on the comments, where the major revisions are marked in blue.", " We thank the reviewer for the positive comments. \n\n(1) We also want to clarify...
[ -1, -1, -1, -1, 6, 8, 3 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "iclr_2022_0rcbOaoBXbg", "Pks9M1jH-Zn", "zdIqAfSnvl9", "Ap_l41uMM4", "iclr_2022_0rcbOaoBXbg", "iclr_2022_0rcbOaoBXbg", "iclr_2022_0rcbOaoBXbg" ]
iclr_2022_Z1Qlm11uOM
Learning Audio-Visual Speech Representation by Masked Multimodal Cluster Prediction
Video recordings of speech contain correlated audio and visual information, providing a strong signal for speech representation learning from the speaker’s lip movements and the produced sound. We introduce Audio-Visual Hidden Unit BERT (AV-HuBERT), a self-supervised representation learning framework for audio-visual s...
Accept (Poster)
PAPER: This paper introduces an extension of the HuBERT audio-only model for the audio-visual setting, allowing for self-supervised pre-training of multimodal model which also performs well on the unimodal tasks (lip-reading and ASR). The paper applies the idea of modality dropout to their multimodal pre-training setup...
train
[ "Ckj3aUvKQj", "8635Ei-uh8I", "D5jOb9EO31l", "rmSZafqC4q8", "0Oi3LnPrgg", "6lxGKMAusq0", "zdzijl6TlvW", "x1VoXrSSCDQ", "4dw4MzUyjq", "KMthOJQHVr6", "kk3jyT92MI2", "gAca2R-IFSV", "kKu-JLu8koB", "8cNVmnMv3fE", "BXMW44UlMwm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I keep the original score with a positive rating. The arrows in Figure 1 are not regularly arranged, which can be confusing to the reader. For example, some arrows disappear after the transformer layer or appear after passing the modality dropout layer. Authors can fix this in the final submission. ", " I h...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 5 ]
[ "6lxGKMAusq0", "4dw4MzUyjq", "x1VoXrSSCDQ", "KMthOJQHVr6", "kKu-JLu8koB", "zdzijl6TlvW", "gAca2R-IFSV", "BXMW44UlMwm", "8cNVmnMv3fE", "0Oi3LnPrgg", "iclr_2022_Z1Qlm11uOM", "iclr_2022_Z1Qlm11uOM", "iclr_2022_Z1Qlm11uOM", "iclr_2022_Z1Qlm11uOM", "iclr_2022_Z1Qlm11uOM" ]
iclr_2022_7I8LPkcx8V
Differentially Private Fractional Frequency Moments Estimation with Polylogarithmic Space
We prove that $\mathbb{F}_p$ sketch, a well-celebrated streaming algorithm for frequency moments estimation, is differentially private as is when $p\in(0, 1]$. $\mathbb{F}_p$ sketch uses only polylogarithmic space, exponentially better than existing DP baselines and only worse than the optimal non-private baseline by a...
Accept (Poster)
The reviewers agreed that this is a technically novel and interesting paper with results for a very natural problem and all voted for acceptance. The paper gives more evidence for the wide-ranging compatibility between the goals of sketching and of privacy.
train
[ "1Fo1HOJl94K", "zgzPvZk7DB1", "eAJAxwhoWnu", "nUhlO4KanZ2", "iifbldvV5L", "yoJBbQ6qV41", "n9ZWSSibW9m", "qSm9YO4YYI4", "XgxUKeJgRo", "piQJX-c0NTd", "vVWU4gc2Vby", "cfCPqzZjfeD", "dycXvz7UG4u", "1aRS0HULhB", "syWjsLzmGxT", "4P4js9uYAsy", "v6-eUqcUoo8" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors show that the $\\mathbb{F}_p$ sketch for frequency moments estimation is differentially private, as is,\nfor $p\\in (0, 1]$. Other special cases for frequency moments estimation have been studied previously in the\nliterature (e.g., Choi et al., 2020, Smith et al., 2020, who study cases for $p = 0, 1, ...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_7I8LPkcx8V", "iclr_2022_7I8LPkcx8V", "nUhlO4KanZ2", "yoJBbQ6qV41", "n9ZWSSibW9m", "dycXvz7UG4u", "cfCPqzZjfeD", "XgxUKeJgRo", "vVWU4gc2Vby", "1Fo1HOJl94K", "v6-eUqcUoo8", "zgzPvZk7DB1", "zgzPvZk7DB1", "v6-eUqcUoo8", "4P4js9uYAsy", "iclr_2022_7I8LPkcx8V", "iclr_2022_7I8LPkc...
iclr_2022_pWBNOgdeURp
An Operator Theoretic View On Pruning Deep Neural Networks
The discovery of sparse subnetworks that are able to perform as well as full models has found broad applied and theoretical interest. While many pruning methods have been developed to this end, the naïve approach of removing parameters based on their magnitude has been found to be as robust as more complex, state-of-th...
Accept (Poster)
The paper uses Koopman operator theory to explain and guide DNN pruning. All the reviewers deemed that such a viewpoint is novel (but at different levels). However, the paper still had some issues, including unclear technical details, vague/overselling statements, being computationally and memory expensive, etc. Th...
val
[ "J7_Hn0rrlEZ", "nCgFtFl9rwi", "Hg3cf66oRm", "-f8uvMT72N5", "SZN0UGAku08", "GXe7ghWEvdu", "fTKaqcb2Ixp", "06JO_4zTpSx", "lKRsKHzuXmF", "OROKKFLdQDJ", "g-Cg9FCA6LG", "KZxjDSGysBB", "ffcojGZEms", "16c1KFgJKc", "OjXivi31zsS", "gQ5jHQ_rPPO", "Ka6pqQ1RPt-", "Jh9w7eR3kb", "w1m8kq86af3",...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "aut...
[ " Thank you for clarifying both equivalences. I've raised my score.", "The authors studied network pruning from the perspective of dynamical system theory. They show that a new type of pruning method, named Koopman pruning, unifies magnitude pruning and gradient-based pruning to a degree. It also clarifies aspect...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3 ]
[ "lKRsKHzuXmF", "iclr_2022_pWBNOgdeURp", "SZN0UGAku08", "iclr_2022_pWBNOgdeURp", "Jh9w7eR3kb", "fTKaqcb2Ixp", "Ka6pqQ1RPt-", "iclr_2022_pWBNOgdeURp", "OROKKFLdQDJ", "gQ5jHQ_rPPO", "nCgFtFl9rwi", "KgWa5sSoOs", "-f8uvMT72N5", "iclr_2022_pWBNOgdeURp", "8YmnG4ORtTR", "nCgFtFl9rwi", "KgWa5...
iclr_2022_Qaw16njk6L
NASViT: Neural Architecture Search for Efficient Vision Transformers with Gradient Conflict aware Supernet Training
Designing accurate and efficient vision transformers (ViTs) is a highly important but challenging task. Supernet-based one-shot neural architecture search (NAS) enables fast architecture optimization and has achieved state-of-the-art (SOTA) results on convolutional neural networks (CNNs). However, directly applying the...
Accept (Poster)
This paper tackles a very timely problem. Scores of 5,6,6,8 put it in the borderline region, but in the private discussion the more negative reviewer noted that they would also be OK with the paper being accepted. I therefore recommend acceptance. Going through the paper, I did not find any mention of available source code....
train
[ "rtt_Xoq8nYk", "2R8M734h8bf", "U_tJ_AD4zC", "G4NtUSeN0Zk", "Uwk1rqRVg4N", "zbytaDdl-m", "wGkeQj8Yct8", "qbvLAPlo1fJ", "0tB1Y0_VPG", "lxgJNL-O8JQ", "fFPgIssYkKt", "RuBytihEhTH", "HHCe6STWbn", "aX0PPVfWwOW", "XQs0thbeWRG", "fV-QTazAZcJ" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Reviewer GnU6, thanks for your comments. We will add the explanation about 'groups' in the caption!", " Thanks for the authors' responses, which have resolved my concerns. I think it would be better if the authors could supplement the explanation about \"Group\" in Table 6's caption in their final revision.", "...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3, 5 ]
[ "2R8M734h8bf", "fFPgIssYkKt", "aX0PPVfWwOW", "Uwk1rqRVg4N", "qbvLAPlo1fJ", "U_tJ_AD4zC", "fV-QTazAZcJ", "0tB1Y0_VPG", "RuBytihEhTH", "XQs0thbeWRG", "HHCe6STWbn", "iclr_2022_Qaw16njk6L", "iclr_2022_Qaw16njk6L", "iclr_2022_Qaw16njk6L", "iclr_2022_Qaw16njk6L", "iclr_2022_Qaw16njk6L" ]
iclr_2022_5LXw_QplBiF
Learning Hierarchical Structures with Differentiable Nondeterministic Stacks
Learning hierarchical structures in sequential data -- from simple algorithmic patterns to natural language -- in a reliable, generalizable way remains a challenging problem for neural language models. Past work has shown that recurrent neural networks (RNNs) struggle to generalize on held-out algorithmic or syntactic ...
Accept (Spotlight)
This paper advances the long running thread of sequence modelling research focussed on differentiable instantiations of stack based models. In particular it builds upon recent work on the Nondeterministic Stack RNN (NS-RNN) by introducing three extensions. The first is to relax the need for a normalised distribution ov...
train
[ "NZ5igKEKx7J", "qX86kWdll7", "mw8RdymOySM", "83zw0h6nxhH", "AKrXMwI4sYx", "-DdT0rCWtSq", "LPKv62CrM2W", "hg-uspPhnFr", "YzMXsJrVIpo", "IqtpFkeuCyi", "aYpd6_7WBZP", "uioMgibG6oS", "oYgGCe-NufB", "PZY3DCi7Fa5", "5Z3YxvMGRge", "Nwc7HUgrQVm" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for your clarification. The author addressed my question on capacity. Considering the opinion of other reviewers, I acknowledge that the contributions of this paper are interesting for researchers in this field. I have adjusted my recommendation score to 6.", "The paper proposes a new stack-augmented RNN...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2 ]
[ "LPKv62CrM2W", "iclr_2022_5LXw_QplBiF", "hg-uspPhnFr", "iclr_2022_5LXw_QplBiF", "PZY3DCi7Fa5", "Nwc7HUgrQVm", "qX86kWdll7", "5Z3YxvMGRge", "83zw0h6nxhH", "oYgGCe-NufB", "Nwc7HUgrQVm", "qX86kWdll7", "5Z3YxvMGRge", "83zw0h6nxhH", "iclr_2022_5LXw_QplBiF", "iclr_2022_5LXw_QplBiF" ]
iclr_2022_sRZ3GhmegS
CoBERL: Contrastive BERT for Reinforcement Learning
Many reinforcement learning (RL) agents require a large amount of experience to solve tasks. We propose Contrastive BERT for RL (COBERL), an agent that combines a new contrastive loss and a hybrid LSTM-transformer architecture to tackle the challenge of improving data efficiency. COBERL enables efficient and robust lea...
Accept (Spotlight)
This paper introduces a new transformer architecture for representation learning in RL. The key ingredients of the proposed architecture are a novel combination of existing methods: (1) the use of LSTMs to reduce the need for large transformers and (2) a contrastive learning procedure that doesn't require human data a...
train
[ "SUXMNseGUrk", "YttyEUpAnf1", "OEokiQw4dCn", "l90YdV6-mtA", "ZDv8BuhVjbA", "VGr_70iE-1V", "8ryi3_Op4ax", "3h_2Wqkj7Io", "J_lQ1sncmf0", "N7QypRBUUXt", "XooojLHIBQK" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ " While other reviewers raised some concerns about clarity I still feel that these are not severe enough to prevent the publication of this paper. Especially after the changes made during the rebuttal I stand by my score.", " Thank you for clarifying some of the points I've raised in my review. \nThe validation o...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, 3, 3 ]
[ "3h_2Wqkj7Io", "8ryi3_Op4ax", "ZDv8BuhVjbA", "iclr_2022_sRZ3GhmegS", "VGr_70iE-1V", "l90YdV6-mtA", "XooojLHIBQK", "N7QypRBUUXt", "iclr_2022_sRZ3GhmegS", "iclr_2022_sRZ3GhmegS", "iclr_2022_sRZ3GhmegS" ]
iclr_2022_cOtBRgsf2fO
Label Leakage and Protection in Two-party Split Learning
Two-party split learning is a popular technique for learning a model across feature-partitioned data. In this work, we explore whether it is possible for one party to steal the private label information from the other party during split training, and whether there are methods that can protect against such attacks. Spec...
Accept (Poster)
All reviewers concur that the paper contains solid ideas. The discussion helped clarify the case of class imbalance, and no major concerns remained after the discussion phase. I thank the authors for the additional details on execution time / complexity. On a separate note and perhaps to dig further in the pape...
test
[ "PQo-AXT4B9t", "Oq6l7Gcm2-D", "7j-4UaRiZZw", "mcayAxG9g8", "TQy3zOy7jr8", "Eg0I7BS11lv", "634QuWGN03X", "jlmUAba6xjs", "lCC6g-L-Ioy", "Cs7FXsxOQyL", "lw3Qpmi8kj", "wTbDUanWlXB", "jHd3ffOgsf", "8J90gpkqzsI", "6gvF5m3I-_9", "S_IBpJDVuZS", "-prtfn2C2nH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " I'd like to thank the authors again for the clear reply; I now understand why it is not easy to go from LDP bound on KL to the population bound as defined in the paper, and I agree that this is an interesting future direction.", " Thank you, the responses addressed most of my comments so I increased my score. R...
[ -1, -1, 8, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 2 ]
[ "mcayAxG9g8", "6gvF5m3I-_9", "iclr_2022_cOtBRgsf2fO", "TQy3zOy7jr8", "Eg0I7BS11lv", "lCC6g-L-Ioy", "8J90gpkqzsI", "iclr_2022_cOtBRgsf2fO", "wTbDUanWlXB", "7j-4UaRiZZw", "-prtfn2C2nH", "-prtfn2C2nH", "jlmUAba6xjs", "jlmUAba6xjs", "7j-4UaRiZZw", "iclr_2022_cOtBRgsf2fO", "iclr_2022_cOtB...
iclr_2022_YevsQ05DEN7
Understanding Dimensional Collapse in Contrastive Self-supervised Learning
Self-supervised visual representation learning aims to learn useful representations without relying on human annotations. The joint embedding approach is based on maximizing the agreement between embedding vectors from different views of the same image. Various methods have been proposed to solve the collapsing problem where ...
Accept (Poster)
The theory and results presented in this paper provide a new method to avoid collapse in contrastive learning. All but one reviewer recommend acceptance. The lone negative reviewer is concerned with the limited experiments, but the other reviewers, and the AC, find the experimentation convincing enough to warrant acc...
test
[ "Rex33t07Ut", "3rf6KmsWR2e", "E6Sd1yFmB9e", "B2J4PuPrfFi", "WKV9FYNhv_o", "gpIJAiYcOyh", "Pput3MGJFWZ", "2huD00zumoc", "gFOfv_HPGxF", "l63FiG8YWM", "SzeqOUM-Gmy", "ztuBmaop5I-", "vKJ_wePnERz", "YYeO4FerSvF", "K_CKQ0mfOq3", "Bry1ZzIhzQ1", "piQgxy3zgTs", "F4WS49vVgvm", "bDV_wmbujO"...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " > “In 4.2 GRADIENT FLOW DYNAMICS, what is the meaning of X in Eq. 6?”\n\nAccording to *Lemma 2*, $X$ is defined as the weighted “data distribution” covariance matrix minus the weighted “augmentation distribution” covariance matrix. According to *Lemma 1*, the gradient on weight matrix W is proportional to...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "fdkNCzKim30", "iclr_2022_YevsQ05DEN7", "2huD00zumoc", "WKV9FYNhv_o", "E6Sd1yFmB9e", "Pput3MGJFWZ", "vKJ_wePnERz", "3rf6KmsWR2e", "fdkNCzKim30", "UItjZiRQq8x", "ztuBmaop5I-", "3rf6KmsWR2e", "bDV_wmbujO", "WjKJr82cd7", "Rex33t07Ut", "piQgxy3zgTs", "F4WS49vVgvm", "iclr_2022_YevsQ05DE...
iclr_2022_nZeVKeeFYf9
LoRA: Low-Rank Adaptation of Large Language Models
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying indepen...
Accept (Poster)
This paper introduces a new method for fine-tuning large language models, which is lightweight since it only adds a small amount of parameters, while keeping the original parameters frozen. The main idea is to add a low rank matrix which is learned during fine-tuning to the original weight matrices of the model, which ...
train
[ "HVwpvt4s3u2", "1w_2mbXaFFK", "P-aFfvwWIWq", "-7fUFarlyFj", "uuVRGXXAmj2", "O-w8k6-wuHW", "7DDpP1Cezpx", "xdraVP9ExCF", "aqNRb3Fae2", "o76Uwymv5x", "wTNBjpop9q6", "VCBDvGIEJzy", "IyKwCkDw_Ow", "3yKoW0WWv99", "ycUvlECbk4f", "E_nu4P7EeC", "Qx_SNb6d8xu" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents LoRA, a method aiming to improve the efficiency of fine-tuning large language models. Specifically, it fixes the underlying pretrained model, and learns low-rank (by construction) parameter matrices to update the model. In this way, the number of parameters to finetune is significantly reduced i...
[ 6, -1, -1, 8, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, -1, 4, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2022_nZeVKeeFYf9", "P-aFfvwWIWq", "uuVRGXXAmj2", "iclr_2022_nZeVKeeFYf9", "O-w8k6-wuHW", "7DDpP1Cezpx", "E_nu4P7EeC", "iclr_2022_nZeVKeeFYf9", "3yKoW0WWv99", "VCBDvGIEJzy", "iclr_2022_nZeVKeeFYf9", "IyKwCkDw_Ow", "Qx_SNb6d8xu", "xdraVP9ExCF", "-7fUFarlyFj", "HVwpvt4s3u2", "iclr...
iclr_2022_zrW-LVXj2k1
On the benefits of maximum likelihood estimation for Regression and Forecasting
We advocate for a practical Maximum Likelihood Estimation (MLE) approach towards designing loss functions for regression and forecasting, as an alternative to the typical approach of direct empirical risk minimization on a specific target metric. The MLE approach is better suited to capture inductive biases such as pri...
Accept (Poster)
This paper has been independently assessed by three expert reviewers. The results place it at the borderline of the acceptance decision: while one of the reviewers gave it a straight accept evaluation, two others assessed it as marginally rejectable, even after discussion with the authors. All of the reviewers agreed that ...
train
[ "A50HYs8UBIe", "7jmcqxHpHSI", "IVHxzgRASV", "mWSbEDbEHf", "RK_cqRTjnWG", "aA--HEq8EV", "_f-ek1ldLjW", "JbiZIWd9Uu2", "AZpPSAU_vC2", "SZL-7j70hED", "aCkK65KHdrt" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed responses to my comments and questions. I do not have additional questions.", " We thank you for your insightful comments and hope that we have addressed your concerns above. Please let us know if you have further questions, so that we have a chance to reply to them before the discus...
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, 8, 5 ]
[ -1, -1, -1, 3, -1, -1, -1, -1, -1, 3, 2 ]
[ "7jmcqxHpHSI", "_f-ek1ldLjW", "AZpPSAU_vC2", "iclr_2022_zrW-LVXj2k1", "iclr_2022_zrW-LVXj2k1", "SZL-7j70hED", "aCkK65KHdrt", "mWSbEDbEHf", "JbiZIWd9Uu2", "iclr_2022_zrW-LVXj2k1", "iclr_2022_zrW-LVXj2k1" ]
iclr_2022_FRxhHdnxt1
Amortized Tree Generation for Bottom-up Synthesis Planning and Synthesizable Molecular Design
Molecular design and synthesis planning are two critical steps in the process of molecular discovery that we propose to formulate as a single shared task of conditional synthetic pathway generation. We report an amortized approach to generate synthetic pathways as a Markov decision process conditioned on a target molec...
Accept (Spotlight)
After much back and forth about prior work, 3 reviewers score this paper as an 8 and one scores it as a 3. Other reviewers have written to the 3 and told them they believe that their review is now too harsh, in light of clarifications w.r.t. related work. I tend to agree, though I must admit that I am not an expert on ...
train
[ "3JGlo-lUPFw", "O3LB6VserHO", "b5hF9pACN6R", "cYJBebhbt3t", "21uwx8YOrCR", "w0ATbVb2FvH", "y2-AVxNtzv", "wuiwBIF4ip", "AuCHgeNXf0r", "V6UuOfD6mIg", "k6CAvMi8rD_", "pi8AV175_Zh", "l5JJvHGoS1", "5zEusLc58F", "i6nZ59dlEn", "ALc6DQ6LdHu", "zzC6VEIjP7e", "UetRCKH3yHt", "qvtMNEnZX_K", ...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_review...
[ " While I agree that a paper does not have to be complete, I consider that a paper has to describe its contributions consistently and correctly. My concerns are twofold. First, I still don't understand why the proposed method, \"eliminating the need for two-stage pipelines of generation and filtering\" (cited from the c...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, 5, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "O3LB6VserHO", "21uwx8YOrCR", "w0ATbVb2FvH", "AuCHgeNXf0r", "wuiwBIF4ip", "AuCHgeNXf0r", "wuiwBIF4ip", "V6UuOfD6mIg", "V6UuOfD6mIg", "ALc6DQ6LdHu", "UetRCKH3yHt", "iclr_2022_FRxhHdnxt1", "qvtMNEnZX_K", "iclr_2022_FRxhHdnxt1", "7olP5fpPLoV", "7RxVX6nJSo2", "63-TgIwSj7V", "pi8AV175_Z...
iclr_2022_kZ0UYdhqkNY
Variational methods for simulation-based inference
We present Sequential Neural Variational Inference (SNVI), an approach to perform Bayesian inference in models with intractable likelihoods. SNVI combines likelihood-estimation (or likelihood-ratio-estimation) with variational inference to achieve a scalable simulation-based inference approach. SNVI maintains the flexi...
Accept (Spotlight)
The authors propose a well-presented approach to likelihood-free inference. The reviewers are all in alignment in recommending this paper for acceptance. There was a healthy discussion between authors and reviewers, where the authors have already incorporated many of their recommendations. The potential for this method...
train
[ "DHeQpuH29hi", "5SBuZaGImR", "G-9EyjMcWD", "Gcwpj485y5B", "1MZFmcR_Exr", "8QhwTRBplHr", "huCgTDjcOwg", "v1eufFbMWFv", "i3VMtHFBC06", "-XeNW6bUEn", "9QiE5UCMfJO", "YbQApRi4SuY", "MyUheCMQROE", "MUQF0tmDRW-" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ " We thank all reviewers for engaging with our paper and providing helpful feedback which substantially strengthened our submission. We have just updated the manuscript with the promised changes. Thank you!\n\nBest wishes\nThe authors", "This paper presents a form of variational inference (VI) approach for likeli...
[ -1, 6, -1, 6, -1, -1, 8, -1, 8, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, 4, -1, -1, 4, -1, 3, -1, -1, -1, -1, -1 ]
[ "iclr_2022_kZ0UYdhqkNY", "iclr_2022_kZ0UYdhqkNY", "MyUheCMQROE", "iclr_2022_kZ0UYdhqkNY", "9QiE5UCMfJO", "-XeNW6bUEn", "iclr_2022_kZ0UYdhqkNY", "YbQApRi4SuY", "iclr_2022_kZ0UYdhqkNY", "huCgTDjcOwg", "Gcwpj485y5B", "i3VMtHFBC06", "5SBuZaGImR", "iclr_2022_kZ0UYdhqkNY" ]
iclr_2022_w_drCosT76
Differentiable Scaffolding Tree for Molecule Optimization
The structural design of functional molecules, also called molecular optimization, is an essential chemical science and engineering task with important applications, such as drug discovery. Deep generative models and combinatorial optimization methods achieve initial success but still struggle with directly modeling di...
Accept (Poster)
This paper was a tough call. The key contribution of the paper is a genuinely useful technique for generating chemical compounds satisfying desired properties. However, there are some key issues with the paper. Reviewer *BjiD* found out that baselines are weak. Most importantly, he ran thorough experiments with GraphGA, o...
train
[ "1nQiyEZpt5P", "PLcnmyczNm-", "nzMYjuAYGTr", "sEc28Myl2_j", "RrYq5RuFUo3", "J55hthNEdcc", "m4eDlZ4jFYK", "q300skhpizv", "joqv7NA1w3W", "o9mQVDJGaLl", "Y7dPWYq5k2_", "oJbV7O7-ONp", "lEeouVXmzTX", "K1VILzthkAU", "E7gXc51SJS7", "ML3VaCnKPjS", "_ee_EWQYK8w", "Rvk1KxcZne_", "4DzP2jH-B...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "a...
[ " I considered downgrading my score because Reviewer BjiD showed that GraphGA could be better than the proposed method. But after the explanation from the author and the new results from Reviewer BjiD showing that DST still outperforms GraphGA, I would like to keep my original score.", "The paper designs a novel model to o...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 10 ]
[ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "nzMYjuAYGTr", "iclr_2022_w_drCosT76", "sEc28Myl2_j", "RrYq5RuFUo3", "J55hthNEdcc", "m4eDlZ4jFYK", "q300skhpizv", "joqv7NA1w3W", "o9mQVDJGaLl", "lEeouVXmzTX", "oJbV7O7-ONp", "ML3VaCnKPjS", "K1VILzthkAU", "4DzP2jH-BHs", "iclr_2022_w_drCosT76", "PLcnmyczNm-", "XhxvgB_qLFt", "byeclmTp...
iclr_2022_VqzXzA9hjaX
Optimizer Amalgamation
Selecting an appropriate optimizer for a given problem is of major interest for researchers and practitioners. Many analytical optimizers have been proposed using a variety of theoretical and empirical approaches; however, none can offer a universal advantage over other competitive optimizers. We are thus motivated to ...
Accept (Poster)
This paper introduces a meta-learning approach to "amalgamate" optimizers. The reviewers all found the idea interesting and unanimously found it to be acceptable for publication. In particular, I appreciate that the authors expanded their results to include more and larger problems. One of the outstanding questions that wo...
train
[ "yFldv8G49Pt", "5vSxzmATQd3", "s0Y8Bzhws6-", "F6cSWeZK00S", "QCUCb_04Ouw", "rTv4oZUgLZ", "-dOezm6S4oe", "BIu10Lw5aD", "Naw3nzO5hWM", "r-yYoP2Q3CZ", "KnHLPgoj1C", "PHOYWEWW6mK", "kjnbevDOz3O", "Q5poQhB4IJS", "KSVd64kaj7", "64HOm4Iq-c1", "8JZNHz5saUD" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for your further constructive feedback. We try our best to deliver some extra analysis and experiments before the deadline; more thorough results and analyses will be added in the final version, which we will update when the portal reopens.\n\nWe trained optimal choice optimizers using the small pool ...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "5vSxzmATQd3", "PHOYWEWW6mK", "iclr_2022_VqzXzA9hjaX", "QCUCb_04Ouw", "s0Y8Bzhws6-", "s0Y8Bzhws6-", "s0Y8Bzhws6-", "iclr_2022_VqzXzA9hjaX", "s0Y8Bzhws6-", "s0Y8Bzhws6-", "8JZNHz5saUD", "KSVd64kaj7", "64HOm4Iq-c1", "iclr_2022_VqzXzA9hjaX", "iclr_2022_VqzXzA9hjaX", "iclr_2022_VqzXzA9hjaX...
iclr_2022_anbBFlX1tJ1
Boosted Curriculum Reinforcement Learning
Curriculum value-based reinforcement learning (RL) solves a complex target task by reusing action-values across a tailored sequence of related tasks of increasing difficulty. However, finding an exact way of reusing action-values in this setting is still a poorly understood problem. In this paper, we introduce the conc...
Accept (Poster)
The paper proposes a novel curriculum learning method for RL based on the concept of boosting. The proposed method builds on the curriculum value-based RL framework and uses boosting to reuse action-values from previous tasks when solving the current task. The method is analyzed theoretically in terms of approximation ...
train
[ "uqS98VD6llH", "6lAv646tKMW", "QzAY3q9acvm", "rCMVsRQcr1", "Opw3A1kSY6Y", "1SzNS6uGskO", "nG92n_GuLGE", "5irzj3KQh8c", "6NxuIy_FRF1", "igatAFq0c1s", "bhVHkfJXlV8", "lT7EaBOjmg", "pTtMsngmZd6", "6zpYmaRlTHW", "-Piij6mmyqJ", "IuZYk7gnMEi", "-8SJNo-gbI", "__wMcOZZ0Ec", "iP-RKeD53QF"...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Thanks for the revisions and updates! LSC (Section 6.3) is clear now. After following the discussions, and the revisions, I will keep my score. ", " We agree with the reviewer that the additional experiment shows interesting results, and we thank him/her for proposing it. As suggested, we will incorporate a dis...
[ -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 4, 4 ]
[ "5irzj3KQh8c", "QzAY3q9acvm", "rCMVsRQcr1", "6NxuIy_FRF1", "1SzNS6uGskO", "bhVHkfJXlV8", "iclr_2022_anbBFlX1tJ1", "igatAFq0c1s", "lT7EaBOjmg", "-8SJNo-gbI", "pTtMsngmZd6", "iP-RKeD53QF", "6zpYmaRlTHW", "-Piij6mmyqJ", "y85ti_l2YEz", "N41YqMd852T", "fvi8FLPLEWF", "fB5JiNI02zm", "ic...
iclr_2022_X0nrKAXu7g-
HyperDQN: A Randomized Exploration Method for Deep Reinforcement Learning
Randomized least-square value iteration (RLSVI) is a provably efficient exploration method. However, it is limited to the case where (1) a good feature is known in advance and (2) this feature is fixed during the training. If otherwise, RLSVI suffers an unbearable computational burden to obtain the posterior samples. I...
Accept (Poster)
This paper extends Randomized least-square value iteration (RLSVI), which is a method for exploration-exploitation tradeoff that is suitable for linear FA, to the deep RL setting. A key component is using Hypermodels of Dwaracherla et al. (ICLR, 2020) to generate the weights of last layer of the DNN. This generates a l...
val
[ "xnD3DoA8Rrs", "ADdFfGf7QxR", "Ldu7fasfBHB", "gkLCKJAjzJ", "CIFGP0qu-UV", "EZfsEz_bLDs", "UpU659TmzNR", "8gSkpV_vjP7", "5gypY7LgQ0k", "VQVrt1XL_Jn", "VT24NtBvf4", "PSylK3jZWn", "zrcxD_B8QxQ", "IcYIS_ELSv", "37AwdbW-fAl", "y0JB3p37mxR", "zgv6PstXF-l", "F2BK9h5gIEf", "PMd6uG44oM6",...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", ...
[ " Dear Reviewer TPoj,\n\nWe have added additional 6 experiments: alien, amidar, battle zone, mario-3-1, mario-3-2, and mario-3-3. The final performance of learned policies on 18 benchmarks is summarized in the following table. See Figure 22 and Figure 23 in Appendix F.4 for the learning curves.\n\n| | alie...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 8 ]
[ -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "ADdFfGf7QxR", "7y5uyLcHrwv", "F2BK9h5gIEf", "PSylK3jZWn", "iclr_2022_X0nrKAXu7g-", "iclr_2022_X0nrKAXu7g-", "F2BK9h5gIEf", "7y5uyLcHrwv", "T3njMhocKf1", "ViRWbwWAJRE", "ViRWbwWAJRE", "CIFGP0qu-UV", "PSylK3jZWn", "CIFGP0qu-UV", "7y5uyLcHrwv", "T3njMhocKf1", "PMd6uG44oM6", "iclr_202...
iclr_2022_R8sQPpGCv0
Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Since the introduction of the transformer model by Vaswani et al. (2017), a fundamental question has yet to be answered: how does a model achieve extrapolation at inference time for sequences that are longer than it saw during training? We first show that extrapolation can be enabled by simply changing the position rep...
Accept (Poster)
This submission proposes a simple, efficient, and effective position representation method for the Transformer architecture called ALiBi. ALiBi enables better extrapolation and performance (in terms of efficiency and task performance). The submission also includes careful analysis and extensive experiments, and notably...
train
[ "bzE8DpDDRx", "QeCzYq5SPAg", "_ijtyuno9J", "UKAXabUsD9E", "pPd6EU_o60C", "P2Zcxu5mTVE", "3K_6szq4I8X", "flbQ-m7acVQ", "G78KOZEy_Hi", "TqR8ABkgyJ6", "LNJGi19j0We", "EFkVYJ2_Ex_", "F6ECvzSYDXu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Most of the answers addressed my concerns and unclear parts. ", "The submission proposed an effective approach to allow pre-trained transformer-based language models to extrapolate beyond the maximum length used in training, which potentially reduces the training time as extrapolation is empirically guaranteed....
[ -1, 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "TqR8ABkgyJ6", "iclr_2022_R8sQPpGCv0", "G78KOZEy_Hi", "QeCzYq5SPAg", "iclr_2022_R8sQPpGCv0", "LNJGi19j0We", "iclr_2022_R8sQPpGCv0", "pPd6EU_o60C", "QeCzYq5SPAg", "F6ECvzSYDXu", "EFkVYJ2_Ex_", "iclr_2022_R8sQPpGCv0", "iclr_2022_R8sQPpGCv0" ]
iclr_2022_JXhROKNZzOc
SQuant: On-the-Fly Data-Free Quantization via Diagonal Hessian Approximation
Quantization of deep neural networks (DNN) has been proven effective for compressing and accelerating DNN models. Data-free quantization (DFQ) is a promising approach without the original datasets under privacy-sensitive and confidential scenarios. However, current DFQ solutions degrade accuracy, need synthetic data to...
Accept (Poster)
The authors propose a data-free quantization method that can be applied post-training quantization without backpropagation. The method takes advantage of approximate Hessian information in a certain scalable approximate way. Based on the assumptions and deductions in the paper, SQuant tries to optimize constrained abs...
train
[ "UTCj5-P2cXD", "1huEk3QiGnJ", "oQPtXKBxLKy", "Nz2F95JHeZj", "MJh8ndmlnAB", "ShnxR5qstNi", "l71WZZ0Im6o", "APnCac9qNWz" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "In the manuscript, the authors propose SQuant, which is a data-free quantization method that can apply post-training quantization (PTQ) without any backpropagation.\n\nSpecifically, SQuant is taking advantage of approximated Hessian information. Based on the assumptions and deductions in the paper, SQuant tries to...
[ 6, -1, -1, -1, -1, 8, 6, 6 ]
[ 4, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2022_JXhROKNZzOc", "APnCac9qNWz", "l71WZZ0Im6o", "ShnxR5qstNi", "UTCj5-P2cXD", "iclr_2022_JXhROKNZzOc", "iclr_2022_JXhROKNZzOc", "iclr_2022_JXhROKNZzOc" ]
iclr_2022_N1WI0vJLER
Parallel Training of GRU Networks with a Multi-Grid Solver for Long Sequences
Parallelizing Gated Recurrent Unit (GRU) is a challenging task, as the training procedure of GRU is inherently sequential. Prior efforts to parallelize GRU have largely focused on conventional parallelization strategies such as data-parallel and model-parallel training algorithms. However, when the given sequences are ...
Accept (Poster)
This paper presents a way of using multigrid techniques to parallelize GRU networks across the time dimension. Reviewers are uniformly in favor of accepting the paper. The main strength is that the paper provides a new perspective on dealing with long input sequences by parallelizing RNNs across time. The main weaknes...
train
[ "zB-k-BlZ76P", "eTJj-ZngFTG", "Clt9YNJ8zx", "Yr-AIT089Wx", "Rcessd-LYT3", "uUQgXuxoC4o", "xUi4HD_0PS4", "CMAuWhutKCp", "Wbid4SPhRZF", "MQGQCPuslwM", "kJFgDCIFdu" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The main focus/goal of the submitted paper is the parallelization of the Gated Recurrent Unit (GRU) (the authors focus on classification problems). The authors describe the incorporation of a multigrid reduction in time (MGRIT) solver to speed-up and better parallelize the \napplication of forward and back propaga...
[ 6, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "iclr_2022_N1WI0vJLER", "iclr_2022_N1WI0vJLER", "kJFgDCIFdu", "MQGQCPuslwM", "zB-k-BlZ76P", "zB-k-BlZ76P", "zB-k-BlZ76P", "Wbid4SPhRZF", "iclr_2022_N1WI0vJLER", "iclr_2022_N1WI0vJLER", "iclr_2022_N1WI0vJLER" ]
iclr_2022_C54V-xTWfi
MonoDistill: Learning Spatial Features for Monocular 3D Object Detection
3D object detection is a fundamental and challenging task for 3D scene understanding, and the monocular-based methods can serve as an economical alternative to the stereo-based or LiDAR-based methods. However, accurately locating objects in the 3D space from a single image is extremely difficult due to the lack of spat...
Accept (Poster)
This paper received 5 quality reviews, with 3 of them rated 8, 1 rated 6, and 1 rated 5. In general, while there are minor concerns, the reviewers acknowledge the contribution of applying Knowledge distillation to the problem of monocular 3D object detection, and appreciate the SOTA performance on the KITTI validation ...
train
[ "2FgkBgSZQ2P", "rlPPU1B4oSt", "HiZiA_xDzHV", "Jv_8iF_7_UM", "t4XHWyEViLK", "X8uf4CdSgmh", "H9Clqnq1GQQ", "8N0YYM6dhTg", "cvZzyqU9_6q", "Xoh1divDS2O", "gwXeG3w7THB", "XqcEp_vUfWY", "4QX-iVhGoG", "BeuGfNwTsX_", "T5PDlTNNkZO", "tbMWftPItmr", "ewRp3EtoxVX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to distill the features from a LiDAR teacher model to a monocular-based student model. To align the feature maps between the teacher and student model, the teacher model uses the same networks as the student and the only difference is that the teacher model takes the sparse/dense depth map as i...
[ 6, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 5 ]
[ 5, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "iclr_2022_C54V-xTWfi", "X8uf4CdSgmh", "cvZzyqU9_6q", "8N0YYM6dhTg", "iclr_2022_C54V-xTWfi", "Xoh1divDS2O", "gwXeG3w7THB", "t4XHWyEViLK", "T5PDlTNNkZO", "XqcEp_vUfWY", "XqcEp_vUfWY", "2FgkBgSZQ2P", "ewRp3EtoxVX", "tbMWftPItmr", "iclr_2022_C54V-xTWfi", "iclr_2022_C54V-xTWfi", "iclr_20...
iclr_2022_CALFyKVs87
Dynamics-Aware Comparison of Learned Reward Functions
The ability to learn reward functions plays an important role in enabling the deployment of intelligent agents in the real world. However, $\textit{comparing}$ reward functions, for example as a means of evaluating reward learning methods, presents a challenge. Reward functions are typically compared by considering the...
Accept (Spotlight)
The paper proposes a new pseudometric, DARD, for comparing reward functions that avoids policy optimization. DARD builds on a recent work by Gleave et al. 2020 where the pseudometric EPIC was proposed. In contrast to EPIC, DARD operates on an approximate transition model and evaluates reward functions only on transition...
train
[ "10C2ceFXj1F", "obR4RgcaF8", "QNppun44IUP", "avJhcOgh5Hn", "AFlcaGANzDe", "D42bBGWJ1ie", "rgZkS56lDy", "sh92QD7Tgnb", "QiPbtzCm-W9", "OAaHTvtnx8G", "apxlH-r_rsLr", "lthdUAB2OVJ", "bynti6Gq1pM", "rD_pwl-26XE", "Kw_xh1k_RYq", "RTD-9udJhwu", "HjosULUNrgE" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for following up and for the additional suggestion! We will add a \"Limitations\" heading to the paragraph describing the Point Maze results if the paper is accepted (because we can not submit further revisions at this point in the review process).", " Thank you for following up and for the additional...
[ -1, -1, 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ -1, -1, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "D42bBGWJ1ie", "AFlcaGANzDe", "iclr_2022_CALFyKVs87", "iclr_2022_CALFyKVs87", "sh92QD7Tgnb", "bynti6Gq1pM", "bynti6Gq1pM", "apxlH-r_rsLr", "rD_pwl-26XE", "Kw_xh1k_RYq", "avJhcOgh5Hn", "iclr_2022_CALFyKVs87", "HjosULUNrgE", "RTD-9udJhwu", "QNppun44IUP", "iclr_2022_CALFyKVs87", "iclr_2...
iclr_2022_tD7eCtaSkR
Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100
Training convolutional neural networks (CNNs) with a strict Lipschitz constraint under the $l_{2}$ norm is useful for provable adversarial robustness, interpretable gradients and stable training. While $1$-Lipschitz CNNs can be designed by enforcing a $1$-Lipschitz constraint on each layer, training such networks requi...
Accept (Spotlight)
The paper provides a procedure for certifying L2 robustness in image classification. The paper shows that the technique indeed works in practice by demonstrating its accuracy on the CIFAR-10 and CIFAR-100 datasets. The reviewers are positive about the paper. Please do incorporate feedback, especially around experimental...
train
[ "_6A7McJI144", "VQLEOIMCZq", "IcFLTRFgYm-", "d68Eczi5qs", "htfIMVQOjO8", "Gfdtxo0o5S7", "JypNBVDbMIA", "L9OA7EurwyC", "Os15GPkB9bA", "u3P1i92hIys", "J4Whe7uUqdj", "ONwloA3hObi" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The detailed responses are clear and have addressed my major concerns. It is helpful to understand why LLN is more efficient than the original robustness certificate when the number of classes K is large. Great contributions. \nThe rating remains unchanged.", " Ratings adjusted based on the new numbers.", "T...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, 8, 6, 8 ]
[ -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "Os15GPkB9bA", "d68Eczi5qs", "iclr_2022_tD7eCtaSkR", "htfIMVQOjO8", "JypNBVDbMIA", "ONwloA3hObi", "IcFLTRFgYm-", "J4Whe7uUqdj", "u3P1i92hIys", "iclr_2022_tD7eCtaSkR", "iclr_2022_tD7eCtaSkR", "iclr_2022_tD7eCtaSkR" ]
iclr_2022_9Nk6AJkVYB
Audio Lottery: Speech Recognition Made Ultra-Lightweight, Noise-Robust, and Transferable
Lightweight speech recognition models have seen explosive demands owing to a growing amount of speech-interactive features on mobile devices. Since designing such systems from scratch is non-trivial, practitioners typically choose to compress large (pre-trained) speech models. Recently, lottery ticket hypothesis reveal...
Accept (Poster)
The paper presents a comprehensive analysis of the lottery ticket hypothesis (LTH) on automatic speech recognition. The authors verified the existence of highly sparse “winning tickets” in the ASR task, and analyzed their robustness to noise, transferability to other datasets, and support for structured sparsity. As agreed wit...
train
[ "7UAfn770V_y", "AouJqRVvREF", "ci3fZsp0ZVN", "8PFNCFmaENn", "UadF6oEfvl8", "ECHB8Oic6do", "MpjdwnoQszq", "pLFy_EHQFVP", "IZuzSj_vaSm", "qr2II2clpN7", "0K285felo4D", "uYm1zbJaz_D", "IfReSaN7aDZ", "0iSRKKjYGMI", "oII44esNI_4", "OPVqqK_5MlH", "N0O4HGWpxpC", "Li9Yj2YdSP", "wluhLwLMhg...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_re...
[ " Dear reviewer 5kPn,\n\nThanks a lot for your previous suggestions and discussions on this manuscript! As you recognized earlier, our response has clarified your concerns and problems, and we have also added the clarifications to the revised manuscript accordingly, which makes the manuscript stronger.\n\nCurrently...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, 6 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, 4 ]
[ "CZxFz3DmP7M", "iclr_2022_9Nk6AJkVYB", "iclr_2022_9Nk6AJkVYB", "UadF6oEfvl8", "ECHB8Oic6do", "ci3fZsp0ZVN", "ci3fZsp0ZVN", "iclr_2022_9Nk6AJkVYB", "iclr_2022_9Nk6AJkVYB", "0K285felo4D", "z1BqeUl8Pki", "ci3fZsp0ZVN", "iclr_2022_9Nk6AJkVYB", "OPVqqK_5MlH", "iclr_2022_9Nk6AJkVYB", "Li9Yj2...
iclr_2022_ajXWF7bVR8d
Meta-Learning with Fewer Tasks through Task Interpolation
Meta-learning enables algorithms to quickly learn a newly encountered task with just a few labeled examples by transferring previously learned knowledge. However, the bottleneck of current meta-learning algorithms is the requirement of a large number of meta-training tasks, which may not be accessible in real-world sce...
Accept (Oral)
Current meta-learning algorithms suffer from the requirement of a large number of tasks in the meta-training phase, which may not be accessible in real-world environment. This paper addresses this bottleneck, introducing a cross-task interpolation in addition to the existing intra-task interpolation. The main idea is v...
train
[ "zWW4HFBt1b", "w7-gkFIwX5r", "jS0qHrWY3Io", "X_pX09m0vkP", "M9vqERT_hRbr", "WCIvF0JXoOn", "gWVEjT33p-vp", "O4jHvJ00Txa", "BLVtOsLbPIe", "LALyMos1b_nM", "Zk2J7_LefjRX", "vIzxfmpPLpt", "J5Zl_zJWzF", "nNjgSOo2qcL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ "In this paper the authors propose a meta-learning method for few-shot learning. The propose approach, MLTI, creates new (artificial) tasks by interpolating two (existing) tasks form the training set during (meta-)training. The new tasks are generated by interpolating features/labels of two sampled tasks from the t...
[ 8, 8, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_ajXWF7bVR8d", "iclr_2022_ajXWF7bVR8d", "iclr_2022_ajXWF7bVR8d", "nNjgSOo2qcL", "WCIvF0JXoOn", "jS0qHrWY3Io", "zWW4HFBt1b", "vIzxfmpPLpt", "iclr_2022_ajXWF7bVR8d", "w7-gkFIwX5r", "J5Zl_zJWzF", "iclr_2022_ajXWF7bVR8d", "iclr_2022_ajXWF7bVR8d", "iclr_2022_ajXWF7bVR8d" ]
iclr_2022_YigKlMJwjye
Generalized Demographic Parity for Group Fairness
This work aims to generalize demographic parity to continuous sensitive attributes while preserving tractable computation. Current fairness metrics for continuous sensitive attributes largely rely on intractable statistical independence between variables, such as Hirschfeld-Gebelein-Renyi (HGR) and mutual information. ...
Accept (Poster)
This paper offers an alternative formulation of demographic parity, named GDP, which makes it amenable to easier computation when the sensitive attribute is continuous. Analytically, the paper relates GDP to other notions, offers ways to estimate GDP from data, and establishes the convergence of these estimators. Exper...
train
[ "C9jQjxiIhpz", "hBNAlgJulqu", "wB7MmILQe0I", "IsprImTd0-l", "X0I9PCSXowj", "GzGo1dwW5zH", "1prUZQb2hX", "sxbEkDMb4Fm", "z9fsMlh1gf", "VSX8I4ta8ik", "cltXapP0UY", "_puMas7u72b", "ug3JhdNqpc9", "vdvgMpGDait", "ZRuCB1zxUs", "UB4HET3e_Dn", "CPkvn19HLiZ", "-Ra4rp4K-O", "f1pewkU6-KN", ...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official...
[ " Thank you for your detailed response to my comments. My positive impression of the paper has only been reinforced. At least for now, I will maintain my current score.", "This paper proposes a generalized demographic parity for group fairness which is computationally feasible for both continuous and discrete pro...
[ -1, 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5 ]
[ -1, 3, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "eapvNKskPjH", "iclr_2022_YigKlMJwjye", "VSX8I4ta8ik", "VSX8I4ta8ik", "hBNAlgJulqu", "eapvNKskPjH", "cltXapP0UY", "iclr_2022_YigKlMJwjye", "cltXapP0UY", "H_c4Sbp-cFW", "vdvgMpGDait", "iclr_2022_YigKlMJwjye", "H_c4Sbp-cFW", "sxbEkDMb4Fm", "iclr_2022_YigKlMJwjye", "H_c4Sbp-cFW", "eapvN...
iclr_2022_1ugNpm7W6E
Cold Brew: Distilling Graph Node Representations with Incomplete or Missing Neighborhoods
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in node classification, regression, and recommendation tasks. GNNs work well when rich and high-quality connections are available. However, their effectiveness is often jeopardized in many real-world graphs in which node degrees have power-law dist...
Accept (Poster)
The reviewers agree that the paper studies an important and interesting problem and presents a good solution which is theoretically sound. The paper can be further improved by looking into more applications such as cold-start recommendations.
train
[ "-3MJCPjTJeH", "hsb7UhM144", "HTDOSA9S4_b", "u5EBSE2cJcZ", "eq4CodSng8M", "3QsiXtIcP45", "VvKm-ZxAlJ", "ciTU-I6NqZy", "58LZD2SSGMb", "EbtiA5D7y1o", "sgx95WKtOax", "qD72OixlkIn", "7vX3VpUSIND", "wq9DKpQWTf6", "BmBbKaREyRo", "5MEgEpN5eGf", "er3J_-Aw5rz", "HYVzmeO67Pk", "GU4qbFWsALL...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " Dear AC panel and reviewers:\n\nWe are glad that the merits of our work have been recognized by all reviewers. We have responded to all reviewers point-by-point, and everyone has now acknowledged a positive assessment after reading our response.\n\nWe are summarizing the major points about the paper (post-rebutta...
[ -1, -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6 ]
[ -1, -1, 3, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_1ugNpm7W6E", "u5EBSE2cJcZ", "iclr_2022_1ugNpm7W6E", "5MEgEpN5eGf", "BmBbKaREyRo", "ciTU-I6NqZy", "iclr_2022_1ugNpm7W6E", "er3J_-Aw5rz", "ra1gfEWeYs1", "GU4qbFWsALL", "8BbbnrtXjaB", "7vX3VpUSIND", "HYVzmeO67Pk", "iclr_2022_1ugNpm7W6E", "ra1gfEWeYs1", "GU4qbFWsALL", "rnE3sJ5...
iclr_2022_Q76Y7wkiji
Boosting the Certified Robustness of L-infinity Distance Nets
Recently, Zhang et al. (2021) developed a new neural network architecture based on $\ell_\infty$-distance functions, which naturally possesses certified $\ell_\infty$ robustness by its construction. Despite the novel design and theoretical foundation, so far the model only achieved comparable performance to conventiona...
Accept (Poster)
This paper is a follow-up to Zhang et al. (2021), which proposed a new network architecture for adversarial robustness, the l_\infty distance net. Although the l_\infty network is provably 1-Lipschitz w.r.t. the l_\infty distance, its training procedure exploits the l_p relaxation to overcome the non-smoothness of the...
test
[ "J37eJtXZUgg", "srV4oHIbl8R", "Hw1TmCHAZCH", "EYMQpCQSjaZ", "JHfSED2Ev3", "JxgpPvl5UP6", "hBZE2O6SZT6", "uLHXKBFxKTX", "Fvv-wl3O_ks", "c-JgPQO6Hm", "M5aT9Fn76GC", "0jFsTlBsdhq", "YaB2BLsZrZN", "n5SQz5Ovdb1", "aDueVLhTBYd", "7r6mKxKpT-v", "sGe8Ia8gFp7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper proposed a new loss to improve the performance of Linf distance network a new network for Linf robustness and achieved impressive empirical results. The paper is well-written and the improvement on empirical result is impressive. \nMy main concern is on the experiments. \n* The paper proposed multiple mo...
[ 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8 ]
[ 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_Q76Y7wkiji", "JHfSED2Ev3", "aDueVLhTBYd", "iclr_2022_Q76Y7wkiji", "JxgpPvl5UP6", "YaB2BLsZrZN", "J37eJtXZUgg", "7r6mKxKpT-v", "EYMQpCQSjaZ", "iclr_2022_Q76Y7wkiji", "sGe8Ia8gFp7", "J37eJtXZUgg", "J37eJtXZUgg", "7r6mKxKpT-v", "EYMQpCQSjaZ", "iclr_2022_Q76Y7wkiji", "iclr_202...
iclr_2022_KmtVD97J43e
Synchromesh: Reliable Code Generation from Pre-trained Language Models
Large pre-trained language models have been used to generate code, providing a flexible interface for synthesizing programs from natural language specifications. However, they often violate syntactic and semantic rules of their output language, limiting their practical usability. In this paper, we propose Synchromesh: ...
Accept (Poster)
The paper gives a new method for code generation from natural language queries using pretrained models. The approach follows two steps: (1) given a query, it selects a set of similar training examples using a method called Target Similarity Tuning, and (2) it then uses a method called Constrained Semantic Decoding (bui...
train
[ "xWCWMY97joN", "qThFVj8IwpG", "HPZ4zE_Bf8U", "q4eyzfe2qw1", "kcFmDji1VNE", "rCVW15ZVHYf", "Hs1w8WZqN_fu", "ALDT3I6Jt93", "2BLpOg_Tf3k", "LqiJVIWZraU", "-36GsuKWbmo", "XCrkXXeT4QS0", "kUgOjyKsgm5", "LQlNMSjPtJd", "T4mPyYB1Ej", "Gcce-Sg8YdU", "kCdXG_A3CtF", "7qr_vS4U2nX" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The response addressed my minor concerns, thanks!", "This paper presents a framework for more reliable code generation via in-context learning of GPT-3/CodeX. The paper is motivated by the finding that GPT-3/CodeX often generate programs with syntactic and semantic errors. To resolve this problem, the authors p...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "ALDT3I6Jt93", "iclr_2022_KmtVD97J43e", "-36GsuKWbmo", "rCVW15ZVHYf", "LQlNMSjPtJd", "T4mPyYB1Ej", "7qr_vS4U2nX", "Gcce-Sg8YdU", "kCdXG_A3CtF", "kCdXG_A3CtF", "qThFVj8IwpG", "qThFVj8IwpG", "iclr_2022_KmtVD97J43e", "iclr_2022_KmtVD97J43e", "iclr_2022_KmtVD97J43e", "iclr_2022_KmtVD97J43e...
iclr_2022_XctLdNfCmP
Predicting Physics in Mesh-reduced Space with Temporal Attention
Auto-regressive sequence models for physics prediction are often restricted to low-dimensional systems, as memory cost increases with both spatial extents and sequence length. On the other hand, graph-based next-step prediction models have recently been very successful in modeling complex high-dimensional physical syst...
Accept (Poster)
This paper proposes a model to predict the spatiotemporal dynamics of physical simulations on irregular meshes. The observations are modeled as a sequence of graph representations, each graph corresponding to a snapshot of the observation sequence at time t. This model uses two components, a graph encoder-decoder to co...
train
[ "M8H33WGFeMO", "OMMy86db3Xn", "8pWxq6GWFHp", "P_2KZnp4uBZ", "l3vIjV6VH4-", "sP0AWlrCxTV", "KRUptjCFfXz", "BtvO5dxMMtF", "fU1_Y6tDT54", "Ve19piOekxb", "L1i8ESQ4j3", "0WT3Q0Tgqf-", "FQHKO877Ftc", "520YFZ0pzhg", "BsDgwXCR7nn", "eal_XSyhCKa", "yM427EN4sEV" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ " Thanks so much for your feedback and help! ", " Thanks for the rebuttal. I think it answers my questions well and I am happy to raise my score to 6.", "This work proposes a new algorithm combining graph-neural-network (GNN) and auto-regressive sequence models for physics prediction problems. The authors first...
[ -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "OMMy86db3Xn", "L1i8ESQ4j3", "iclr_2022_XctLdNfCmP", "l3vIjV6VH4-", "Ve19piOekxb", "iclr_2022_XctLdNfCmP", "iclr_2022_XctLdNfCmP", "BsDgwXCR7nn", "sP0AWlrCxTV", "sP0AWlrCxTV", "8pWxq6GWFHp", "yM427EN4sEV", "yM427EN4sEV", "eal_XSyhCKa", "iclr_2022_XctLdNfCmP", "iclr_2022_XctLdNfCmP", ...
iclr_2022_I2Hw58KHp8O
Improving Non-Autoregressive Translation Models Without Distillation
Transformer-based autoregressive (AR) machine translation models have achieved significant performance improvements, nearing human-level accuracy on some languages. The AR framework translates one token at a time which can be time consuming, especially for long sequences. To accelerate inference, recent work has been e...
Accept (Poster)
This paper proposes a modification of the training objective of non-autoregressive MT which claims most of the improvements that other approaches obtain only through knowledge distillation (KD) from an autoregressive teacher. The strategy has been largely appreciated as simple and the results suggest that it's rather...
test
[ "LC5XZJ6EAbS", "QiKtz3L3eMh", "xtPhS9auWz7", "3nmyC90cl6u", "VjXTOsXgzgVD", "xPKJhFkZIjo", "QnfxRcGaawU", "cYSIb1nAed", "oQUPDo5OaTp", "dXeqH0t4_0" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents two techniques that improve iterative non-autoregressive machine translation (NAR). The first technique modifies the conditional masked language modeling objective from prior work in a way that makes training more similar to iterative inference. The other improvement is the way positional infor...
[ 8, -1, -1, -1, -1, -1, -1, 8, 8, 3 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_I2Hw58KHp8O", "xtPhS9auWz7", "dXeqH0t4_0", "cYSIb1nAed", "oQUPDo5OaTp", "LC5XZJ6EAbS", "iclr_2022_I2Hw58KHp8O", "iclr_2022_I2Hw58KHp8O", "iclr_2022_I2Hw58KHp8O", "iclr_2022_I2Hw58KHp8O" ]