Dataset schema (column: type):
paper_id: string (length 19 to 21)
paper_title: string (length 8 to 170)
paper_abstract: string (length 8 to 5.01k)
paper_acceptance: string (18 classes)
meta_review: string (length 29 to 10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
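Read back as one record per 12 fields, the preview rows below follow this schema. A minimal Python sketch of the record type (the `PaperRecord` class name is our own; the field names come from the column list above, and the sample values are copied from one of the rows below):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PaperRecord:
    paper_id: str                 # e.g. "iclr_2022_F5Em8ASCosV" (19-21 chars)
    paper_title: str
    paper_abstract: str
    paper_acceptance: str         # one of 18 classes, e.g. "Accept (Poster)"
    meta_review: str
    label: str                    # split tag: "train", "val", or "test"
    # The six list columns are index-aligned with each other.
    review_ids: List[str] = field(default_factory=list)
    review_writers: List[str] = field(default_factory=list)   # "author", "official_reviewer", "public"
    review_contents: List[str] = field(default_factory=list)
    review_ratings: List[int] = field(default_factory=list)   # -1 for posts without a score
    review_confidences: List[int] = field(default_factory=list)
    review_reply_tos: List[str] = field(default_factory=list) # parent id; paper_id for top-level posts

# Values taken from the "Causal Contextual Bandits" row below:
record = PaperRecord(
    paper_id="iclr_2022_F5Em8ASCosV",
    paper_title="Causal Contextual Bandits with Targeted Interventions",
    paper_abstract="We study a contextual bandit setting ...",
    paper_acceptance="Accept (Poster)",
    meta_review="This paper considers a new setting of contextual bandits ...",
    label="train",
    review_ids=["A6dVbmbWbw", "EwpayOen8kM", "i82SkoT1GG", "_5_KZ6pUlYJ"],
    review_writers=["official_reviewer"] * 4,
    review_ratings=[6, 6, 5, 5],
    review_confidences=[3, 3, 3, 3],
    review_reply_tos=["iclr_2022_F5Em8ASCosV"] * 4,
)
print(record.paper_acceptance)  # → Accept (Poster)
```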
paper_id: iclr_2022_KhLK0sHMgXK
paper_title: NASPY: Automated Extraction of Automated Machine Learning Models
paper_abstract: We present NASPY, an end-to-end adversarial framework to extract the network architecture of deep learning models from Neural Architecture Search (NAS). Existing works about model extraction attacks mainly focus on conventional DNN models with very simple operations, or require heavy manual analysis with lots of domain ...
paper_acceptance: Accept (Spotlight)
meta_review: All reviewers agree on acceptance and I agree with them. I recommend a spotlight.
label: train
review_ids: [ "FNvpyQe6CB", "zTsVR0R2Pd_", "qVL2rvcGHtB", "aukKNsWJkd", "qcZDwz9a1xr", "XZw8Z8nO8IK", "sJhsLOjEt8I", "6rFPRAuaem-", "bxXCaJwgCIT", "uK0rDaqnnL0", "yoN7IUCXjsY", "PZEyzSPvZ8z", "2qoY94iKVRl" ]
review_writers: [ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ " We would clarify that our claims in 4 and 9 are actually not contradicted. In response 4, “Just training a network does not work” means that training an independent model does not work under the threat model of DNN model extraction attacks, which aims to steal a victim’s proprietary model rather than getting a go...
review_ratings: [ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 8, 8, 6 ]
review_confidences: [ -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, 3, 3, 4 ]
review_reply_tos: [ "zTsVR0R2Pd_", "XZw8Z8nO8IK", "aukKNsWJkd", "uK0rDaqnnL0", "iclr_2022_KhLK0sHMgXK", "2qoY94iKVRl", "2qoY94iKVRl", "PZEyzSPvZ8z", "yoN7IUCXjsY", "qcZDwz9a1xr", "iclr_2022_KhLK0sHMgXK", "iclr_2022_KhLK0sHMgXK", "iclr_2022_KhLK0sHMgXK" ]

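In the rating and confidence lists, entries are index-aligned with `review_ids`, and posts that are not scored reviews (author replies, public comments) appear to carry a `-1` sentinel. A small helper, assuming that convention (the name `mean_score` is our own), that averages only the real scores:

```python
def mean_score(scores):
    """Average the scored entries of a rating/confidence list, skipping the -1 sentinel."""
    real = [s for s in scores if s != -1]
    return sum(real) / len(real) if real else None

# Ratings from the NASPY record above: four real reviews among thirteen posts.
ratings = [-1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 8, 8, 6]
print(mean_score(ratings))  # → 7.0
```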
paper_id: iclr_2022_ek9a0qIafW
paper_title: Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
paper_abstract: Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners. However, their effectiveness depends mainly on scaling the model parameters and prompt design, hindering their implementation in most real-world applications....
paper_acceptance: Accept (Poster)
meta_review: The paper presents a prompt learning method for few-shot learning in NLP. In particular, they proposed DART, a new soft prompt tuning method, to optimize the label representations and template. Overall, the paper is well-written and well-motivated. The proposed approach is interesting. The experiments were well just...
label: train
review_ids: [ "ls2airr7Xoj", "XZZLuWhRTgn", "06LBxSy_Bqi", "JILmuSjrcNk", "sMzpIczaUXP", "YmbCkB7yjxN", "hFRR3QfNNnr", "yIazCZQaKNH", "fN0mpr3ShR", "YyFIWwaDP-2", "jiUwp09W1Ep", "zMCCAwyozPJ" ]
review_writers: [ "official_reviewer", "official_reviewer", "author", "public", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ "This paper proposes a new few-shot learning method for NLP problems by incorporating a simple,effective framework. This method is extensively validation and shows compelling performance. Strengths:\n - The paper is generally well-written, with excellent motivation and empirical setup/analysis\n - The overall strat...
review_ratings: [ 8, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
review_confidences: [ 3, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
review_reply_tos: [ "iclr_2022_ek9a0qIafW", "yIazCZQaKNH", "JILmuSjrcNk", "iclr_2022_ek9a0qIafW", "jiUwp09W1Ep", "ls2airr7Xoj", "zMCCAwyozPJ", "YyFIWwaDP-2", "iclr_2022_ek9a0qIafW", "iclr_2022_ek9a0qIafW", "iclr_2022_ek9a0qIafW", "iclr_2022_ek9a0qIafW" ]

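`review_reply_tos[i]` names the parent post of `review_ids[i]`; entries equal to the `paper_id` mark top-level posts. A sketch, under that assumption, that rebuilds the discussion tree for the DART record above:

```python
from collections import defaultdict

paper_id = "iclr_2022_ek9a0qIafW"
ids = ["ls2airr7Xoj", "XZZLuWhRTgn", "06LBxSy_Bqi", "JILmuSjrcNk",
       "sMzpIczaUXP", "YmbCkB7yjxN", "hFRR3QfNNnr", "yIazCZQaKNH",
       "fN0mpr3ShR", "YyFIWwaDP-2", "jiUwp09W1Ep", "zMCCAwyozPJ"]
reply_tos = ["iclr_2022_ek9a0qIafW", "yIazCZQaKNH", "JILmuSjrcNk",
             "iclr_2022_ek9a0qIafW", "jiUwp09W1Ep", "ls2airr7Xoj",
             "zMCCAwyozPJ", "YyFIWwaDP-2", "iclr_2022_ek9a0qIafW",
             "iclr_2022_ek9a0qIafW", "iclr_2022_ek9a0qIafW", "iclr_2022_ek9a0qIafW"]

# Group each post under its parent id.
children = defaultdict(list)
for rid, parent in zip(ids, reply_tos):
    children[parent].append(rid)

top_level = children[paper_id]
print(len(top_level))  # → 6 top-level posts in this thread
```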
paper_id: iclr_2022_yfe1VMYAXa4
paper_title: OntoProtein: Protein Pretraining With Gene Ontology Embedding
paper_abstract: Self-supervised protein language models have proved their effectiveness in learning the proteins representations. With the increasing computational power, current protein language models pre-trained with millions of diverse sequences can advance the parameter scale from million-level to billion-level and achieve remark...
paper_acceptance: Accept (Poster)
meta_review: This paper introduce a protein pretraining framework that enhances representations learnt from protein language modeling with knowledge graph embeddings. The new framework, OntoProtein, optimizes jointly a masked Protein objective and a Knowledge Graph Embedding objective producing knowledge-aware protein embeddings. T...
label: train
review_ids: [ "M11kX3Tw5Ae", "yqGwXjFX23r", "8CAQO8FI1AA", "mNx6TPPdWYs", "HhtWBtd7pkM", "EvqkeAImeiX", "eJ3dPzEze_m", "JqjUOYwn1DV" ]
review_writers: [ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
review_contents: [ "This paper introduces a method to enrich the representations that are learnt by protein language models with knowledge encapsulated in gene ontologies. To do so, it curates a knowledge graph (ProteinKG25) and applies existing methods in multi-relational data embedding (Bordes et al.) to jointly train knowledge emb...
review_ratings: [ 6, -1, -1, -1, -1, -1, 6, 6 ]
review_confidences: [ 4, -1, -1, -1, -1, -1, 4, 3 ]
review_reply_tos: [ "iclr_2022_yfe1VMYAXa4", "8CAQO8FI1AA", "M11kX3Tw5Ae", "JqjUOYwn1DV", "eJ3dPzEze_m", "iclr_2022_yfe1VMYAXa4", "iclr_2022_yfe1VMYAXa4", "iclr_2022_yfe1VMYAXa4" ]

paper_id: iclr_2022_F5Em8ASCosV
paper_title: Causal Contextual Bandits with Targeted Interventions
paper_abstract: We study a contextual bandit setting where the learning agent has the ability to perform interventions on targeted subsets of the population, apart from possessing qualitative causal side-information. This novel formalism captures intricacies in real-world scenarios such as software product experimentation where target...
paper_acceptance: Accept (Poster)
meta_review: This paper considers a new setting of contextual bandits where the learning agent has the ability to perform interventions on targeted subsets of the population. The problem is motivated from software product experimentation but with more general applicability. The paper provides a method under this setting, with both ...
label: train
review_ids: [ "A6dVbmbWbw", "EwpayOen8kM", "i82SkoT1GG", "_5_KZ6pUlYJ" ]
review_writers: [ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ "The paper studies a contextual bandit setting with two unique features: (1) the learning agent has ability to perform targeted interventions during the learning phase (ability to select target sub-populations or context) and (2) it also has access to and integrates casual information in the setting. The key motiva...
review_ratings: [ 6, 6, 5, 5 ]
review_confidences: [ 3, 3, 3, 3 ]
review_reply_tos: [ "iclr_2022_F5Em8ASCosV", "iclr_2022_F5Em8ASCosV", "iclr_2022_F5Em8ASCosV", "iclr_2022_F5Em8ASCosV" ]

paper_id: iclr_2022_TW7d65uYu5M
paper_title: VOS: Learning What You Don't Know by Virtual Outlier Synthesis
paper_abstract: Out-of-distribution (OOD) detection has received much attention lately due to its importance in the safe deployment of neural networks. One of the key challenges is that models lack supervision signals from unknown data, and as a result, can produce overconfident predictions on OOD data. Previous approaches rely on rea...
paper_acceptance: Accept (Poster)
meta_review: This paper proposes to synthetize virtual outliers by sampling from low-likelihood regions of the feature space of a class conditional distribution, in order to make more robust predictions via a regularization loss term. In the reviewing phase certain criticisms were raised by reviewers: namely that i) the paper was ...
label: val
review_ids: [ "ZAh8nTUh7RZ", "wZeA5sO0OjE", "CfYzgABM1Rl", "ROglELDnL0w", "xXXL-KNJu5x", "dbHCMzchFz0", "qN4C1RirVa", "qGFdYa8e8uK", "ELwZKiG2gW9", "Q4KFhxH3Dqr", "XG2y6jqr75e", "eGVluGK6BAZ", "XEXdtJ3qo9S", "eO8-k7vumuz" ]
review_writers: [ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
review_contents: [ " Dear Reviewer rXCn,\n\nThank you very much for carefully reading our response and increasing your score! we are glad to hear that our clarification solved your concerns!\n\nBest,\n\nAuthors\n\n", "This paper proposed a novel unknown-aware learning framework dubbed VOS (Virtual Outlier Synthesis), which optimize...
review_ratings: [ -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
review_confidences: [ -1, 3, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3 ]
review_reply_tos: [ "Q4KFhxH3Dqr", "iclr_2022_TW7d65uYu5M", "xXXL-KNJu5x", "iclr_2022_TW7d65uYu5M", "XG2y6jqr75e", "qGFdYa8e8uK", "ROglELDnL0w", "XEXdtJ3qo9S", "eO8-k7vumuz", "wZeA5sO0OjE", "qN4C1RirVa", "iclr_2022_TW7d65uYu5M", "iclr_2022_TW7d65uYu5M", "iclr_2022_TW7d65uYu5M" ]

paper_id: iclr_2022_KntaNRo6R48
paper_title: L0-Sparse Canonical Correlation Analysis
paper_abstract: Canonical Correlation Analysis (CCA) models are powerful for studying the associations between two sets of variables. The canonically correlated representations, termed \textit{canonical variates} are widely used in unsupervised learning to analyze unlabeled multi-modal registered datasets. Despite their success, CCA m...
paper_acceptance: Accept (Poster)
meta_review: Canonical correlation analysis is a method for studying associations between two sets of variables. However these methods lose their effectiveness when the number of variables is larger than the number of samples. This paper proposes a method, based on stochastic gating, for solving a $\ell_0$-CCA problem where the goa...
label: train
review_ids: [ "0eHcbK5t-z2", "kMrsHifAGfW", "uL_xFy_oROK", "FSp0oiuWeXz", "oNxJL4NTiw", "DuIXftRPWX8", "YOmDUE2Bkfd", "4WNJNelIpOU", "MF3I5vwiNKI", "u8V6Se-OGHx", "TP3j3r6ls_-", "AOdt88evV-f", "B1GusF06rqJ" ]
review_writers: [ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ " Dear authors,\n\nThanks for your responses to my questions. I am satisfied with your answers and have no further questions.", " Dear authors,\n\nThanks for your responses to my concerns and I am satisfied with your answers.", " We thank all reviewers for spending valuable time reading our paper and for provid...
review_ratings: [ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
review_confidences: [ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 4 ]
review_reply_tos: [ "MF3I5vwiNKI", "4WNJNelIpOU", "iclr_2022_KntaNRo6R48", "YOmDUE2Bkfd", "u8V6Se-OGHx", "B1GusF06rqJ", "DuIXftRPWX8", "AOdt88evV-f", "TP3j3r6ls_-", "iclr_2022_KntaNRo6R48", "iclr_2022_KntaNRo6R48", "iclr_2022_KntaNRo6R48", "iclr_2022_KntaNRo6R48" ]

paper_id: iclr_2022_TBpg4PnXhYH
paper_title: SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training
paper_abstract: We introduce a new approach for speech pre-training named SPIRAL which works by learning denoising representation of perturbed data in a teacher-student framework. Specifically, given a speech utterance, we first feed the utterance to a teacher network to obtain corresponding representation. Then the same utterance is...
paper_acceptance: Accept (Poster)
meta_review: This paper proposed a self-supervised speech pre-training approach, by the name of SPIRAL, to learning perturbation-invariant representations in a teacher-student setting. The authors introduced a variety of techniques to improve the performance and stabilize the training. Compared to the popular unsupervised learnin...
label: train
review_ids: [ "Q9081kyrY8B", "yhdPifNC7M", "wGcV4xvQKec", "MmEf8woSYwk", "clSOF8gMZeU", "atQWpo9uro2", "qI-PVJbzR66", "3vrE6ueQpN", "wIM6lEm8pNW", "a0eWZFHwQG-", "_1tbsd6_3uM", "IdUQ69aKXTa", "BnXDDpKpteG", "ubBDL_okAf", "AFKa8ljRP-V", "EPaiaS64Di", "TgcS6NUWAj", "wtkHRFXppW" ]
review_writers: [ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ "The paper presents a novel approach to self-supervised speech representation learning, which promises to be simpler than existing methods such as wav2vec 2.0. The approach is inspired by the BYOL approach from CV, and is shown to be indeed largely as effective as wav2vec 2.0, while being significantly more efficie...
review_ratings: [ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 6 ]
review_confidences: [ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
review_reply_tos: [ "iclr_2022_TBpg4PnXhYH", "iclr_2022_TBpg4PnXhYH", "iclr_2022_TBpg4PnXhYH", "ubBDL_okAf", "IdUQ69aKXTa", "BnXDDpKpteG", "EPaiaS64Di", "TgcS6NUWAj", "Q9081kyrY8B", "iclr_2022_TBpg4PnXhYH", "AFKa8ljRP-V", "wtkHRFXppW", "wtkHRFXppW", "wtkHRFXppW", "iclr_2022_TBpg4PnXhYH", "iclr_2022_TBpg4P...

paper_id: iclr_2022_PzcvxEMzvQC
paper_title: GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation
paper_abstract: Predicting molecular conformations from molecular graphs is a fundamental problem in cheminformatics and drug discovery. Recently, significant progress has been achieved with machine learning approaches, especially with deep generative models. Inspired by the diffusion process in classical non-equilibrium thermodynamic...
paper_acceptance: Accept (Oral)
meta_review: The authors focus on the conditional generation of molecular conformations (i.e. 3D cartesian atom positions) from a given molecular graph. They formulate the generation via diffusion probabilistic models. Conformations are generated by a reverse diffusion process from isotropic Gaussian noise to molecular conformatio...
label: test
review_ids: [ "wbhOdu5IOB9", "aQ5VdomRqq6", "HPaIWkwpA1", "HJqAum4iBs", "oWq1pIZ_Lsl", "pWLDGd0e5yK", "6LSfTeOqv9", "xxW8SpZ-YqQ", "vL48helhaFK", "QDqqqk7H3C" ]
review_writers: [ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
review_contents: [ " The authors clarified all unclear parts, accelerated the sampling process, and provided an estimate of the sampling time. I will raise my score from 6 to 8.", "The work introduces a novel **GeoDiff** model for conformation generation task, based on a promising diffusion model approach that shows state-of-the-ar...
review_ratings: [ -1, 8, -1, -1, -1, -1, -1, -1, 6, 8 ]
review_confidences: [ -1, 5, -1, -1, -1, -1, -1, -1, 3, 3 ]
review_reply_tos: [ "pWLDGd0e5yK", "iclr_2022_PzcvxEMzvQC", "xxW8SpZ-YqQ", "iclr_2022_PzcvxEMzvQC", "QDqqqk7H3C", "aQ5VdomRqq6", "vL48helhaFK", "iclr_2022_PzcvxEMzvQC", "iclr_2022_PzcvxEMzvQC", "iclr_2022_PzcvxEMzvQC" ]

paper_id: iclr_2022_6y2KBh-0Fd9
paper_title: Revisiting flow generative models for Out-of-distribution detection
paper_abstract: Deep generative models have been widely used in practical applications such as the detection of out-of-distribution (OOD) data. In this work, we aim to re-examine the potential of generative flow models in OOD detection. We first propose a simple combination of univariate one-sample statistical test (e.g., Kolmogorov-...
paper_acceptance: Accept (Poster)
meta_review: The paper investigates the use of flow models for out-of-distribution detection. The paper proposes to use a combination of random projections in the latent space of flow models and one-sample / two-sample statistical tests for detecting OOD inputs. The authors present results on image benchmarks as well as non-image b...
label: train
review_ids: [ "RNFKWpsIeWY", "bba6HqjCSm", "dF6ytmstlAm", "lvELoStMHKI", "dUg8PSdawzM", "2jwwkjSOOeV", "EKx2bxVwHP", "SacCQiPoo_p", "X56fugJm-sA", "wwGhz24elvD", "sGaFpwtLlqh", "qua9h54Aci", "oPv6Ubq4FN8", "sje3NF3KoO" ]
review_writers: [ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer" ]
review_contents: [ " Thanks for your revisions, in my view they have improved the manuscript, therefore I increased the score to accept.", "The paper proposes evaluates using statistical tests on random 1d-projections in the latent space of a flow model for groupwise out-of-distribution (OOD) detection. Concretely, Komolgorov-Smirn...
review_ratings: [ -1, 8, -1, 8, -1, -1, -1, -1, -1, 8, -1, -1, -1, 6 ]
review_confidences: [ -1, 4, -1, 3, -1, -1, -1, -1, -1, 4, -1, -1, -1, 4 ]
review_reply_tos: [ "sGaFpwtLlqh", "iclr_2022_6y2KBh-0Fd9", "dUg8PSdawzM", "iclr_2022_6y2KBh-0Fd9", "lvELoStMHKI", "EKx2bxVwHP", "SacCQiPoo_p", "sje3NF3KoO", "iclr_2022_6y2KBh-0Fd9", "iclr_2022_6y2KBh-0Fd9", "bba6HqjCSm", "sje3NF3KoO", "wwGhz24elvD", "iclr_2022_6y2KBh-0Fd9" ]

paper_id: iclr_2022_j-63FSNcO5a
paper_title: Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
paper_abstract: From the intuitive notion of disentanglement, the image variations corresponding to different generative factors should be distinct from each other, and the disentangled representation should reflect those variations with separate dimensions. To discover the generative factors and learn disentangled representation, pre...
paper_acceptance: Accept (Poster)
meta_review: The paper proposes a framework, named Disentaglement via Contrast (DisCo), to learn disentangled representations via contrastive learning on well-pretrained generative models. The method aims at simultaneously discovering semantically meaningful directions in pretrained generative models and training and encoder to ext...
label: train
review_ids: [ "Tqx4jJUm12F", "6Sk7Smv_omL", "Z5r53dxpx4", "MrMTD50d_DT", "ADgDEsdJzSj", "JosCiZhbAv", "h9BDBaL2lq", "ErJ7Bcl2lPU", "g-Z0xSzUCzA", "nCgWKooS2RM", "zMRwq79V9Nf" ]
review_writers: [ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
review_contents: [ "This paper presents a framework to model disentangled directions for pretrained models. Such an approach mitigates the problems with poor generation quality arising while training models with additional regularization terms to force disentanglement. The underlying idea is contrastive-based: similar image variation...
review_ratings: [ 6, 6, -1, 8, -1, -1, -1, -1, -1, -1, 8 ]
review_confidences: [ 4, 4, -1, 4, -1, -1, -1, -1, -1, -1, 5 ]
review_reply_tos: [ "iclr_2022_j-63FSNcO5a", "iclr_2022_j-63FSNcO5a", "6Sk7Smv_omL", "iclr_2022_j-63FSNcO5a", "6Sk7Smv_omL", "MrMTD50d_DT", "zMRwq79V9Nf", "6Sk7Smv_omL", "6Sk7Smv_omL", "Tqx4jJUm12F", "iclr_2022_j-63FSNcO5a" ]

paper_id: iclr_2022_NH29920YEmj
paper_title: Who Is Your Right Mixup Partner in Positive and Unlabeled Learning
paper_abstract: Positive and Unlabeled (PU) learning targets inducing a binary classifier from weak training datasets of positive and unlabeled instances, which arise in many real-world applications. In this paper, we propose a novel PU learning method, namely Positive and unlabeled learning with Partially Positive Mixup (P3Mix), whic...
paper_acceptance: Accept (Poster)
meta_review: Mixup is very helpful when the training sample is scarce or has weak supervision. The paper studies how to adapt mixup to positive and unlabeled (PU) learning, a representative weakly supervised learning problem. By studying the specific properties of PU learning, the authors propose the concept of marginal pseudo-nega...
label: train
review_ids: [ "O1XgvfUnSFJ", "g1TskboRavk", "EYHXBiMFqK", "iNanTdElVDs", "6oEK_6k754r", "Mq8MYj0HUsT", "dEXqcnodyPL", "KdHwpfVvlwm", "mRzj_VnghoPb", "Wt4zUCX7nMU", "VyDyCiE21s", "BkR4TVBXYhF", "0fB2QHb00o7", "2I_ZzhvLDU" ]
review_writers: [ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ " Thanks for your responses.\n\nMy questions have been clearly answered. I think this paper is above the acceptance threshold, and I would like to keep my score.\n\nBest Regards,\n\nReviewer fDKn", " Thanks! We are happy the response is helpful.", " Thank you for the response! My four questions have been clearl...
review_ratings: [ -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, 6, 8, 8 ]
review_confidences: [ -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, 4, 3, 4 ]
review_reply_tos: [ "Wt4zUCX7nMU", "EYHXBiMFqK", "dEXqcnodyPL", "6oEK_6k754r", "VyDyCiE21s", "iclr_2022_NH29920YEmj", "0fB2QHb00o7", "0fB2QHb00o7", "2I_ZzhvLDU", "BkR4TVBXYhF", "Mq8MYj0HUsT", "iclr_2022_NH29920YEmj", "iclr_2022_NH29920YEmj", "iclr_2022_NH29920YEmj" ]

paper_id: iclr_2022_5JdLZg346Lw
paper_title: Generative Modeling with Optimal Transport Maps
paper_abstract: With the discovery of Wasserstein GANs, Optimal Transport (OT) has become a powerful tool for large-scale generative modeling tasks. In these tasks, OT cost is typically used as the loss for training GANs. In contrast to this approach, we show that the OT map itself can be used as a generative model, providing comparab...
paper_acceptance: Accept (Poster)
meta_review: The paper proposes a new method to learn OT maps, and reframes it in the GAN literature. The initial method works when computing maps between equal dimensions, through duality and an identity (10 - 11, amply discussed in the reviewing process). Lemma 4.1 provides the main result. While the discussion right below on the...
label: val
review_ids: [ "8Ct1ftNv86_", "q_x7RNPJ0L", "Pwe3MtchbSO", "v_cP1d4cpon", "4xmJ5Qtl0V8", "cT4Ivvhd7Xh", "PqQ_cpdf7iP", "EoVHI2JF6Wx", "lcuFmpFBOL", "qGAAQz-zvO0", "5CWI5oHUDPo", "gLNp07KJJ_", "hx095AaRZ1", "ZE1q_bop21A", "Z_vhQxH4QSC", "1U5omoaYdp2", "SOfipqV0a4", "DF_pYrJRhbU", "02mcD7GRtnD" ]
review_writers: [ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ " Dear area chair and reviewers,\n\nBefore the discussion ends, we would like to summarize and share further empirical evidence that our method successfully computes OT maps in computer vision problems.\n\nIn the paper, we considered popular image generation and unpaired restoration tasks that already form a repres...
review_ratings: [ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6 ]
review_confidences: [ -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
review_reply_tos: [ "iclr_2022_5JdLZg346Lw", "4xmJ5Qtl0V8", "iclr_2022_5JdLZg346Lw", "hx095AaRZ1", "EoVHI2JF6Wx", "gLNp07KJJ_", "Z_vhQxH4QSC", "lcuFmpFBOL", "1U5omoaYdp2", "iclr_2022_5JdLZg346Lw", "iclr_2022_5JdLZg346Lw", "hx095AaRZ1", "Pwe3MtchbSO", "DF_pYrJRhbU", "02mcD7GRtnD", "SOfipqV0a4", "iclr_202...

paper_id: iclr_2022_AXWygMvuT6Q
paper_title: Retriever: Learning Content-Style Representation as a Token-Level Bipartite Graph
paper_abstract: This paper addresses the unsupervised learning of content-style decomposed representation. We first give a definition of style and then model the content-style representation as a token-level bipartite graph. An unsupervised framework, named Retriever, is proposed to learn such representations. First, a cross-attention...
paper_acceptance: Accept (Poster)
meta_review: This paper proposes a framework for learning disentangled representations of content and style in an unsupervised way, using a permutation invariant network. It adopts VQ network for content encoding, and Cross-Attention for Style and Linking Attention at decoder. It is shown to be domain agonistic, working well in ima...
label: train
review_ids: [ "o1JSIQJMMhh", "VZgwIiEW7R", "Txc4N8NEZS9", "saC3Xc5edFX", "GfQBbMJMw6r", "Q5tdroJ_4Ym" ]
review_writers: [ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ " The authors would like to thank the reviewer for the positive comments on our work and the constructive suggestions. \n\n1. Clarification of some notations (W1).\n\nWe understand your concern and would like to clarify some notions here for better understanding. \n-\tPrototypes, linking keys: \n\nBoth the prototyp...
review_ratings: [ -1, -1, -1, 8, 6, 6 ]
review_confidences: [ -1, -1, -1, 4, 3, 3 ]
review_reply_tos: [ "Q5tdroJ_4Ym", "GfQBbMJMw6r", "saC3Xc5edFX", "iclr_2022_AXWygMvuT6Q", "iclr_2022_AXWygMvuT6Q", "iclr_2022_AXWygMvuT6Q" ]

paper_id: iclr_2022_sNuFKTMktcY
paper_title: Active Hierarchical Exploration with Stable Subgoal Representation Learning
paper_abstract: Goal-conditioned hierarchical reinforcement learning (GCHRL) provides a promising approach to solving long-horizon tasks. Recently, its success has been extended to more general settings by concurrently learning hierarchical policies and subgoal representations. Although GCHRL possesses superior exploration ability by ...
paper_acceptance: Accept (Poster)
meta_review: The paper proposes a new goal-conditioned hierarchical RL method aimed at improving performance on sparse reward tasks. Compared to prior work the novelty lies in a new way of improving the stability of goal representation learning and in an improved exploration strategy for proposing goals while taking reachability in...
label: train
review_ids: [ "zRfcGhx1xzd", "15nBxhLTVq", "xsrdRPbFvMf", "QzCifjiL97A", "8g-PgsINm7D", "0ybD4OwWNUH", "eXgFTMKb3y3", "h8T8adZfMkI", "whRSduFn_2m", "9kPTcIW9m8P", "XNVMFOsfVOi", "g1d1w9o3q5o", "4bvBS43c8FQ", "Cm4oHS7anrM", "jUswc5BJi0", "zC6e34ybm0", "IEk6WFp9Ahu", "PBzO4_83_0" ]
review_writers: [ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
review_contents: [ " I have gone over the rebuttal and am satisfied with the response -- my score is 8 (accept)", " I've gone over the authors' rebuttal. I acknowledge them and I confirm this is my final score/recommendation for this paper.", "The paper studies Goal-conditioned Hierarchical RL (GCHRL) and proposes a new algorithm...
review_ratings: [ -1, -1, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
review_confidences: [ -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
review_reply_tos: [ "QzCifjiL97A", "PBzO4_83_0", "iclr_2022_sNuFKTMktcY", "iclr_2022_sNuFKTMktcY", "0ybD4OwWNUH", "whRSduFn_2m", "h8T8adZfMkI", "g1d1w9o3q5o", "9kPTcIW9m8P", "XNVMFOsfVOi", "xsrdRPbFvMf", "4bvBS43c8FQ", "PBzO4_83_0", "IEk6WFp9Ahu", "QzCifjiL97A", "XNVMFOsfVOi", "iclr_2022_sNuFKTMktcY",

paper_id: iclr_2022_Lwr8We4MIxn
paper_title: A Biologically Interpretable Graph Convolutional Network to Link Genetic Risk Pathways and Imaging Phenotypes of Disease
paper_abstract: We propose a novel end-to-end framework for whole-brain and whole-genome imaging-genetics. Our genetics network uses hierarchical graph convolution and pooling operations to embed subject-level data onto a low-dimensional latent space. The hierarchical network implicitly tracks the convergence of genetic risk across we...
paper_acceptance: Accept (Poster)
meta_review: In this paper, the authors present a method that combines genetic data (using a hierarchical, graph convolution approach) with imaging data to predict schizophrenia. The reviewers raised several concerns that the authors have addressed. Some of the concerns were relevant to writing, the authors have clarified these poi...
label: train
review_ids: [ "q9B-RXddb3_", "jLzCzkypnDj", "PT9Gm4rFuLk", "igoJ8k29ICs", "x5qd6MeFZ_u", "Fd_WaK_MBG6", "KDXu9TZCF82", "aUoM7nhUmY", "eyjNhGOiX1", "0uy9e_HO55", "Wh5ZqfPEWY-", "zJnon31Mj0a", "obD6ECu5RBw", "XWHJZE3H0f6", "syD26CYU1n", "jeVofwj7KwR" ]
review_writers: [ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
review_contents: [ "This paper proposes GUIDE for integrating imaging and genetics data for predicting phenotypes. GUIDE uses prior info from gene ontology to restrict connections between genes and biological processes under a graph convolution network (GCN) framework. The genetic and imaging representations are combined for phenotyp...
review_ratings: [ 8, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
review_confidences: [ 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4 ]
review_reply_tos: [ "iclr_2022_Lwr8We4MIxn", "iclr_2022_Lwr8We4MIxn", "iclr_2022_Lwr8We4MIxn", "x5qd6MeFZ_u", "aUoM7nhUmY", "eyjNhGOiX1", "jeVofwj7KwR", "jeVofwj7KwR", "obD6ECu5RBw", "jLzCzkypnDj", "q9B-RXddb3_", "q9B-RXddb3_", "q9B-RXddb3_", "syD26CYU1n", "iclr_2022_Lwr8We4MIxn", "iclr_2022_Lwr8We4MIxn"

paper_id: iclr_2022_541PxiEKN3F
paper_title: Acceleration of Federated Learning with Alleviated Forgetting in Local Training
paper_abstract: Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy by independently training local models on each client and then aggregating parameters on a central server, thereby producing an effective global model. Although a variety of FL algorithms have been proposed, the...
paper_acceptance: Accept (Poster)
meta_review: The paper considers the problem of distributed optimization in the Federated Learning setting in particular when the data in the clients is non-i.i.d. The paper points to the problem of catastrophic forgetting during the local update stages to be a cause for the bad training of models and proposes to fix it via introdu...
label: train
review_ids: [ "0NyX0hnzm7", "UWGincIIs0", "frmQK2shJPA", "T5TlH1u2ga", "pOEZC5ba6wv", "PoLnzqzLgBA", "MVGABA7g3cB", "xan--dBTEm", "r96d1AIlx_", "HP-mGcphvCP", "3IVYnZFxWuT", "t_nKpSmwzBe", "HaotwjHadQ5", "2eswa0kR2FN", "n9zlZokAT-h", "XP9e8IV7w0C", "Mon6qhZ3E2c", "_dXKZD-WPUl", "YUQF5hbGyma",
review_writers: [ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_re...
review_contents: [ " Thanks so much once again for your follow-up. Since the notations $loss^{(t)}$ and $loss^{(t-1)}$ are only used in Figure 1 and supplementary section B.1, and they are clearly defined in these places, we are not sure if it would help to repeat them in the main text. Note that we talk about the forgetting issue ex...
review_ratings: [ -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, 6, 5 ]
review_confidences: [ -1, -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, 3, 2 ]
review_reply_tos: [ "UWGincIIs0", "PoLnzqzLgBA", "Mon6qhZ3E2c", "HP-mGcphvCP", "HaotwjHadQ5", "MVGABA7g3cB", "xan--dBTEm", "3IVYnZFxWuT", "iclr_2022_541PxiEKN3F", "XP9e8IV7w0C", "2eswa0kR2FN", "iclr_2022_541PxiEKN3F", "n9zlZokAT-h", "SRbleR8nD9w", "t_nKpSmwzBe", "r96d1AIlx_", "YUQF5hbGyma", "2eswa0kR2...

paper_id: iclr_2022_CS4463zx6Hi
paper_title: Geometric Transformers for Protein Interface Contact Prediction
paper_abstract: Computational methods for predicting the interface contacts between proteins come highly sought after for drug discovery as they can significantly advance the accuracy of alternative approaches, such as protein-protein docking, protein function analysis tools, and other computational methods for protein bioinformatics....
paper_acceptance: Accept (Poster)
meta_review: This paper presents a novel neural network architecture to predict interacting residues among two interacting proteins, and evaluates its performance on benchmarks. While the reviews were initially mixed, there has been a productive discussion and significant improvements in the paper during the discussion, including i...
label: train
review_ids: [ "_NDJnDJ5amw", "yRsF9q0Dmd", "A_6BAe7juv", "sSdTZyvogJ-", "4HhXi-xDBP", "I5Js-P8hsju", "plBIrAuD5k7", "fB2zwdqfp5", "l2e177X9kGN", "8FFykBSMbmd", "K9x9kFPMxXT", "ut-Vof7pbG", "JoFo9esjH3", "wtL1OsqsN43", "GRW-m61Qr_P", "Al5yPqClR-3", "xMUrHUSVy3W", "fEFGofbhnb", "po-dakeprX", "...
review_writers: [ "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official...
review_contents: [ " Dear Reviewer rPiF,\n\nThank you very much for your kind remarks.\nWe thought it worth mentioning that your comment reminded us to update our original response to your initial review. Specifically, it previously displayed an older (unrevised) version of Equations 3, 4, and 5 from our paper. We have since updated ...
review_ratings: [ -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, 6, 6 ]
review_confidences: [ -1, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, -1, -1, 4, 5 ]
review_reply_tos: [ "A_6BAe7juv", "c2QqA9wHwMg", "yRsF9q0Dmd", "QoA9bV15us", "l2e177X9kGN", "iclr_2022_CS4463zx6Hi", "QoA9bV15us", "iclr_2022_CS4463zx6Hi", "8FFykBSMbmd", "GRW-m61Qr_P", "GRW-m61Qr_P", "GRW-m61Qr_P", "wtL1OsqsN43", "1yH4JO-CcaM", "1yH4JO-CcaM", "iclr_2022_CS4463zx6Hi", "cP3zDFvrsop", "...

paper_id: iclr_2022_Wm3EA5OlHsG
paper_title: Scene Transformer: A unified architecture for predicting future trajectories of multiple agents
paper_abstract: Predicting the motion of multiple agents is necessary for planning in dynamic environments. This task is challenging for autonomous driving since agents (e.g., vehicles and pedestrians) and their associated behaviors may be diverse and influence one another. Most prior work have focused on predicting independent future...
paper_acceptance: Accept (Poster)
meta_review: The paper shows interesting and discussion inspiring results on multi-agent trajectory prediction, as needed, for instance, in autonomous driving. Among the key technical ideas is a “conditional scene transformer” approach for flexible predictions for different agents. Results on two public benchmarks are impressive....
label: train
review_ids: [ "OZ_1daaZSn1", "M5POLc_Csik", "jER0SvFpmrb", "utJl6LMcWe_", "HR5G6tjzxy5", "ihy-TSFIkq1", "MT1VC8VUI4v", "vB4Z05cOaZ", "Z9XF2KKSGWm", "qOt5vM-sAeb", "mooZKVTLef0", "x2O5Uv5pnxx", "qnTkz_TbCj7", "vrMZ9ERsuy1" ]
review_writers: [ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
review_contents: [ " Thank you for your response. It clarifies my questions.", "The paper proposes a new Transformer-based trajectory forecasting model that can predict multiple agents in a scene. It can be used for goal-directed trajectory forecasting as well. Experiments on Argoverse and Waymo with good results. Related work is s...
review_ratings: [ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 8 ]
review_confidences: [ -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
review_reply_tos: [ "HR5G6tjzxy5", "iclr_2022_Wm3EA5OlHsG", "utJl6LMcWe_", "Z9XF2KKSGWm", "ihy-TSFIkq1", "vrMZ9ERsuy1", "qnTkz_TbCj7", "x2O5Uv5pnxx", "qOt5vM-sAeb", "mooZKVTLef0", "M5POLc_Csik", "iclr_2022_Wm3EA5OlHsG", "iclr_2022_Wm3EA5OlHsG", "iclr_2022_Wm3EA5OlHsG" ]

iclr_2022_uYLFoz1vlAC
Efficiently Modeling Long Sequences with Structured State Spaces
A central goal of sequence modeling is designing a single principled model that can address sequence data across a range of modalities and tasks, particularly on long-range dependencies. Although conventional models including RNNs, CNNs, and Transformers have specialized variants for capturing long dependencies, they ...
Accept (Oral)
All reviewers agreed this was a very strong submission: it was clearly written, was theoretically and experimentally interesting, and had excellent motivation. A clear accept. Authors: you've already indicated that you've updated the submission to respond to reviewer changes, if you could double check their comments fo...
val
[ "yNUpjb3Obnt", "Njj9Mravgr", "8BRiy-2aHov", "DAJxielPBG", "GEMcGjA05x6", "eem9Q7MzIjE", "6AzKAwvex4c", "ZeQHfI3FedK", "AmB-grVm5j0", "XBv0jDVPyU", "-jIuhxDHIKC" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ " Thank you for your answers to the questions raised. The authors have addressed the main questions/concerns I raised. Remaining questions are sufficient to be addressed in future work which this paper enables. The addition of limitations and next steps is also appreciated. ", "This paper presents a novel par...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "GEMcGjA05x6", "iclr_2022_uYLFoz1vlAC", "6AzKAwvex4c", "AmB-grVm5j0", "-jIuhxDHIKC", "Njj9Mravgr", "XBv0jDVPyU", "iclr_2022_uYLFoz1vlAC", "iclr_2022_uYLFoz1vlAC", "iclr_2022_uYLFoz1vlAC", "iclr_2022_uYLFoz1vlAC" ]
iclr_2022_Iog0djAdbHj
Better Supervisory Signals by Observing Learning Paths
Better-supervised models might have better performance. In this paper, we first clarify what makes for good supervision for a classification problem, and then explain two existing label refining methods, label smoothing and knowledge distillation, in terms of our proposed criterion. To further answer why and how better...
Accept (Poster)
The authors made substantial improvements to the originally submitted manuscript; however, reviewers initially remained reluctant to support the paper for acceptance based on the degree to which they were confident in the underlying arguments / position taken by the authors and the evidence provided to support their p...
train
[ "fC_FadzfMNi", "RSlxiX1fvYk", "Jyx6FVByuvI", "5XJoWsLmI9U", "BSNH63nAku", "543ExrAZRIE", "NYkWQpGf02", "1V0ynEnP6PX", "N6q5hT4dtOU", "THZ-8CeKLx", "6bRDVMhCXZb", "WV04LGlhTBy", "r61AVctblSU", "zOqTSPXMegR", "iQ1TH1P8GWp", "jU70Sop5Gx4", "8prtSsJKc7y", "ETLSHQ-ffnQ", "4nBozEauGKg"...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", ...
[ " We thank the reviewer for the insightful comments. We will update the discussion and experiments about the clean data in the next version.", "This paper proposes an explanation for the success of distillation. It first experiments with synthetic Gaussian data. On synthetic data, it shows that distillation works...
[ -1, 6, -1, -1, -1, -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ -1, 4, -1, -1, -1, -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "Jyx6FVByuvI", "iclr_2022_Iog0djAdbHj", "5XJoWsLmI9U", "BSNH63nAku", "xnXsfgi8shd", "1V0ynEnP6PX", "iclr_2022_Iog0djAdbHj", "ETLSHQ-ffnQ", "6bRDVMhCXZb", "iclr_2022_Iog0djAdbHj", "r61AVctblSU", "iclr_2022_Iog0djAdbHj", "zOqTSPXMegR", "-ckvAL6jV_G", "THZ-8CeKLx", "DQqFbmKP-Yh", "RSlxi...
iclr_2022_iLHOIDsPv1P
PAC-Bayes Information Bottleneck
Understanding the source of the superior generalization ability of NNs remains one of the most important problems in ML research. There have been a series of theoretical works trying to derive non-vacuous bounds for NNs. Recently, the compression of information stored in weights (IIW) is proved to play a key role in NN...
Accept (Spotlight)
This paper revisits the information bottleneck principle, but in terms of the compression inherent in the weights of a neural network, rather than the representation. This gives the resulting IB principle a PAC-Bayes flavor. The key contribution is a generalization bound based on optimizing the objective dictated by th...
train
[ "_1qLUnJS5Tq", "twGNNdknnHZ", "nTenJXsZEH", "bcKVe_uSOc", "AarGeSYwTPO", "ax7_w41Yn4C", "m9HWnNvyQq6", "5NO24YUAb8i", "o5YiTsuUxHm", "fwuovDiDea", "l42ULqKkZ1", "oJTIlL2tsZZ", "4lhRPRXJVVf", "lNhZYBsPjkm", "ilmFyAucarL", "0COwuVECf47", "K-lOGqto8Hu" ]
[ "public", "author", "author", "author", "author", "author", "author", "author", "public", "author", "public", "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi :)\n\nThank you (!) for engaging the conversation.\n\nI think ideally if these things you have described could make their way into the paper, it would help the readers to understand the ideas you're trying to connect in this work, and how. Xu & Raginsky (2017) is a great work and relevant to support the connec...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 10, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 3 ]
[ "twGNNdknnHZ", "o5YiTsuUxHm", "K-lOGqto8Hu", "K-lOGqto8Hu", "0COwuVECf47", "lNhZYBsPjkm", "ilmFyAucarL", "iclr_2022_iLHOIDsPv1P", "iclr_2022_iLHOIDsPv1P", "l42ULqKkZ1", "oJTIlL2tsZZ", "4lhRPRXJVVf", "iclr_2022_iLHOIDsPv1P", "iclr_2022_iLHOIDsPv1P", "iclr_2022_iLHOIDsPv1P", "iclr_2022_i...
iclr_2022_aBXzcPPOuX
Bundle Networks: Fiber Bundles, Local Trivializations, and a Generative Approach to Exploring Many-to-one Maps
Many-to-one maps are ubiquitous in machine learning, from the image recognition model that assigns a multitude of distinct images to the concept of “cat” to the time series forecasting model which assigns a range of distinct time-series to a single scalar regression value. While the primary use of such models is natura...
Accept (Poster)
The paper studies the problem of learning fiber distributions associated with a machine learning task, in which the goal is to predict Y, given X. One chooses a fiber space / distribution Z / D_Z, and learns a trivialization \varphi : (Y,Z) -> X. The proposed architecture first clusters the label space Y. Within each c...
train
[ "_jQ8b4vip-o", "k0CqIA5j9Et", "8HnYocnhzqa", "SfiOk1ZdAhG", "zaO9n0_0Ix", "O-PWJPQoQN", "AOuoYNxBqC1", "zMqY8AXCppj", "4S0v50rgxt", "5n7xCDf_utc", "udjPsICNh9", "Y-qjAkEmcSh", "6-uTfGjeb5", "lFHtphUC26t" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their added section and comments. After reading the other reviews I feel that perhaps I was a little hasty in my initial assessment. But I still think that a 6 would be too weak a score and will keep my recommendation fixed at 8.", "The paper introduces a new architecture for generative ...
[ -1, 6, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 3, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "4S0v50rgxt", "iclr_2022_aBXzcPPOuX", "zMqY8AXCppj", "O-PWJPQoQN", "iclr_2022_aBXzcPPOuX", "5n7xCDf_utc", "lFHtphUC26t", "k0CqIA5j9Et", "6-uTfGjeb5", "udjPsICNh9", "zaO9n0_0Ix", "iclr_2022_aBXzcPPOuX", "iclr_2022_aBXzcPPOuX", "iclr_2022_aBXzcPPOuX" ]
iclr_2022_vh-0sUt8HlG
MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer
Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision tr...
Accept (Poster)
This paper presents a light weight hybrid model using both convolutions and Transformer layers resulting in models with lower computational cost and good performance. Reviewers find the paper interesting and agree that the paper did a good job in presenting convincing experimental results. There were questions about ro...
val
[ "_dGaaHwivI", "c9mz5tw5Fkw", "mgQgrKU6To9", "4dm_X_B6xBl" ]
[ "public", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 1. Regarding recent vision transformers [1,2,3] obtain significantly boosted performance (7+mIOU or mAP) on down stream tasks (COCO detection and ADE segmentation) than CNNs. It would be better that MobileViT is compared with them by controlling similar flops. For example, their codes are released, and it's not ...
[ -1, 8, 6, 5 ]
[ -1, 4, 4, 4 ]
[ "iclr_2022_vh-0sUt8HlG", "iclr_2022_vh-0sUt8HlG", "iclr_2022_vh-0sUt8HlG", "iclr_2022_vh-0sUt8HlG" ]
iclr_2022_bERaNdoegnO
Policy improvement by planning with Gumbel
AlphaZero is a powerful reinforcement learning algorithm based on approximate policy iteration and tree search. However, AlphaZero can fail to improve its policy network, if not visiting all actions at the root of a search tree. To address this issue, we propose a policy improvement algorithm based on sampling actions ...
Accept (Spotlight)
The paper presents improvements to AlphaZero and MuZero for settings where one is restricted in the number of rollouts. The initial response from reviewers was generally favorable but the reviewers wanted more details and clarifications of multiple parts of the paper, and further intuition about the Gumbel distributio...
train
[ "OA0CUWEV9YS", "_q_natytDbY", "QEz9Tt5X997", "klc3-kxaaGH", "VEpGlB7d3VK", "VDFICgv4XFu", "K9Mx_7agvHa", "l_Qy_65RLC", "QwYaewBYgne", "a52mjv5w03Z", "XJc8akI1F7b", "P4ILOuW_dQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper proposes a number of principled algorithmic modifications to state-of-the-art planning algorithms (AlphaZero, MuZero) for improving performance in settings with many actions and a relatively small computation and / or sample budget. The main contributions are algorithmic and empirical. The key ideas incl...
[ 8, -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, 6 ]
[ 3, -1, 3, -1, 3, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_bERaNdoegnO", "l_Qy_65RLC", "iclr_2022_bERaNdoegnO", "VDFICgv4XFu", "iclr_2022_bERaNdoegnO", "XJc8akI1F7b", "P4ILOuW_dQG", "QEz9Tt5X997", "iclr_2022_bERaNdoegnO", "OA0CUWEV9YS", "VEpGlB7d3VK", "iclr_2022_bERaNdoegnO" ]
iclr_2022_t5s-hd1bqLk
Conditioning Sequence-to-sequence Networks with Learned Activations
Conditional neural networks play an important role in a number of sequence-to-sequence modeling tasks, including personalized sound enhancement (PSE), speaker dependent automatic speech recognition (ASR), and generative modeling such as text-to-speech synthesis. In conditional neural networks, the output of a model is ...
Accept (Poster)
The authors propose a novel method for conditioning deep neural network. They replace the activation function with a linear combination of activation functions (e.g., ReLu). The weights for the activation functions are dynamically computed from the input during inference and training. The approach is evaluated on stand...
train
[ "XMkjo8a921", "heisvc0QT6", "3TwvvIEqMlc", "lABuXmdslQT", "8s0EzjLcIZe", "qDXzPNMDUb7", "S5Ykz4dDj1L", "E7bdE0d2sq_", "vRpYx_1PG0_", "oWVdFbeYffK" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a way of conditioning information on neural networks. In literature a common way to condition a neural with an input would be to either concatenate the conditioning vector to the input vector, or inject it before several layers (modulation approach in Figure 1-b). In this paper they instead pro...
[ 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_t5s-hd1bqLk", "iclr_2022_t5s-hd1bqLk", "vRpYx_1PG0_", "XMkjo8a921", "oWVdFbeYffK", "oWVdFbeYffK", "XMkjo8a921", "vRpYx_1PG0_", "iclr_2022_t5s-hd1bqLk", "iclr_2022_t5s-hd1bqLk" ]
iclr_2022_oVE1z8NlNe
Divergence-aware Federated Self-Supervised Learning
Self-supervised learning (SSL) is capable of learning remarkable representations from centrally available data. Recent works further implement federated learning with SSL to learn from rapidly growing decentralized unlabeled images (e.g., from cameras and phones), often resulting from privacy constraints. Extensive atte...
Accept (Poster)
The paper focuses on self-supervised learning (SSL) in the federated learning setting (FedSSL). Research in this area is timely and of significance. The authors phrase their work as primarily being an empirical study providing insights into the building blocks of FedSSL. The evaluation in the paper is quite thorough an...
train
[ "sGjkTJjE9Ro", "KUdlPLmlc5s", "ozCZNnVvaed", "I1w7vVNWOaH", "FjQIJV2nMk_", "3JKpHqxADpK", "SBjJaCId8pH", "ib4-C0tekz-", "xG7bKiFPI0K", "1Jc_S7FSWJn", "Sd6dg49OxMZ", "TB5tqYaToL", "o0eaIRb9kZy", "YuWbAQQOhdX", "bG1volRn5b", "b2_x_3usL2O", "zfiqQjmU-ZK", "aFQq9U-Uqm0", "hfTf5949HS6...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " **Q1**: \"Fundamental components of SSL methods, including predictor, stop-gradient, momentum update, online and target encoder\" --- There are not from your contribution. \"predictor, stop-gradient, momentum update, online and target encoder\" are all from BOYL paper.\n\n**A1**: These components are the building...
[ -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 5 ]
[ -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 2 ]
[ "ozCZNnVvaed", "1Jc_S7FSWJn", "xG7bKiFPI0K", "iclr_2022_oVE1z8NlNe", "AvRgCptzuXn", "bG1volRn5b", "Sd6dg49OxMZ", "xG7bKiFPI0K", "YuWbAQQOhdX", "TB5tqYaToL", "cGieChoS36V", "o0eaIRb9kZy", "YuWbAQQOhdX", "hfTf5949HS6", "aFQq9U-Uqm0", "AvRgCptzuXn", "UH5sYmcGDk4", "-XEfYn43JBR", "ic...
iclr_2022_pgir5f7ekAL
Generative Principal Component Analysis
In this paper, we study the problem of principal component analysis with generative modeling assumptions, adopting a general model for the observed matrix that encompasses notable special cases, including spiked matrix recovery and phase retrieval. The key assumption is that the first principal eigenvector lies near th...
Accept (Poster)
This paper studies PCA under a generative model setup. The authors analyze the projected power method in a range of natural settings. Moreover, experimental evaluation and comparison to other methods is performed on MNIST. The paper studies an important problem. Despite some initial concerns, the reviewers overall agre...
train
[ "jinRjZtXTo", "3zhbmbfPvCA", "WJUX2vmFW8", "4dl4GmXyf_", "KDIDRYwQFA4", "t4O01qcvnS", "XQ5jtVo1UE8", "pEtqv2TBBwC" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "Principal component analysis (PCA) is one of the most commonly used techniques for dimension reduction in data and for explaining variability of the data in terms of its eigenvalues/vectors. Unfortunately in high-dimensions, PCA is mathematically inconsistent: this is well-known from random matrix literature. Henc...
[ 6, -1, -1, -1, -1, -1, 5, 8 ]
[ 4, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_pgir5f7ekAL", "4dl4GmXyf_", "pEtqv2TBBwC", "jinRjZtXTo", "XQ5jtVo1UE8", "iclr_2022_pgir5f7ekAL", "iclr_2022_pgir5f7ekAL", "iclr_2022_pgir5f7ekAL" ]
iclr_2022_ibqTBNfJmi
Frequency-aware SGD for Efficient Embedding Learning with Provable Benefits
Embedding learning has found widespread applications in recommendation systems and natural language modeling, among other domains. To learn quality embeddings efficiently, adaptive learning rate algorithms have demonstrated superior empirical performance over SGD, largely credited to their token-dependent learning ra...
Accept (Poster)
The paper provides a new learning technique for problems that require learning embeddings. In particular, the authors analyze a technique that takes into account the frequency of items in an embedding layer to modify the learning rate for each embedding. The paper provides a theoretical analysis of this approach and ...
train
[ "KXorQyQ-z2E", "EKLBUFU7GrE", "T8RHQCdM06N", "QVo4-3lboN_", "69dua9vqeA6", "XNbgR0BOQWA", "tUdkLvfx3A1", "gZYV1GGklpA", "YiIpwS8RyF", "j30c9oWO9X", "b6pLMPjTGcX", "4daxVoMN5Me", "OAlstV1JvhP" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate your encouraging comments, on acknowledging our efforts in providing supplementary experiments, and providing additional constructive suggestions on our paper presentation. Further changes in the context based on your suggestions will be included in our future draft, we will also include a mo...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "T8RHQCdM06N", "iclr_2022_ibqTBNfJmi", "QVo4-3lboN_", "XNbgR0BOQWA", "iclr_2022_ibqTBNfJmi", "YiIpwS8RyF", "b6pLMPjTGcX", "OAlstV1JvhP", "EKLBUFU7GrE", "4daxVoMN5Me", "iclr_2022_ibqTBNfJmi", "iclr_2022_ibqTBNfJmi", "iclr_2022_ibqTBNfJmi" ]
iclr_2022_v3aeIsY_vVX
Chunked Autoregressive GAN for Conditional Waveform Synthesis
Conditional waveform synthesis models learn a distribution of audio waveforms given conditioning such as text, mel-spectrograms, or MIDI. These systems employ deep generative models that model the waveform via either sequential (autoregressive) or parallel (non-autoregressive) sampling. Generative adversarial networks ...
Accept (Poster)
This work proposes a hybrid autoregressive and adversarial model for sound synthesis (including but not limited to speech), conditioned on various types of control signals. Although recent adversarial approaches have gained favor over previously popular autoregressive approaches in this domain, because of their ability...
test
[ "v1dVbtwkHm", "Ga5YeM41j01", "Z1rdDLpXQOu", "xI94kTvCk2y", "de_8QAScaU", "TfNdCJ02ZH1", "ShGCu97O1G", "UEfHmSjFBDC", "RxB-IaODB1m", "omFclfSs1u2", "hAnIMEHrDQ", "KdOQayGbmGg", "tBYcqV6tqtN" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I read the response by the authors and the other reviews.\n\nThanks for the response.\n\nI am satisfied with the changes to the paper.", " Thank you for increasing your score! We are glad we were able to resolve your questions.\n\nA large kernel HiFi-GAN is an interesting case to consider. One of the reasons we...
[ -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "hAnIMEHrDQ", "xI94kTvCk2y", "iclr_2022_v3aeIsY_vVX", "RxB-IaODB1m", "ShGCu97O1G", "UEfHmSjFBDC", "omFclfSs1u2", "iclr_2022_v3aeIsY_vVX", "Z1rdDLpXQOu", "tBYcqV6tqtN", "KdOQayGbmGg", "iclr_2022_v3aeIsY_vVX", "iclr_2022_v3aeIsY_vVX" ]
iclr_2022_3wNcr5nq56
The Uncanny Similarity of Recurrence and Depth
It is widely believed that deep neural networks contain layer specialization, wherein networks extract hierarchical features representing edges and patterns in shallow layers and complete objects in deeper layers. Unlike common feed-forward models that have distinct filters at each layer, recurrent networks reuse the s...
Accept (Poster)
This paper shows that in several different neural network architectures, recurrent networks that share parameters over iterations have comparable performance and similar features to feed-forward networks of the same "effective depth". Reviewers initially had some reservations about novelty and generalizability t...
train
[ "pE4wZBLmHz", "NoHvbL19cwD", "GWsJDgtyQ1v", "VmJTnGjkluj", "ibEFxHl9h1T", "axvOJ0BhTbq", "urGzrcCTBTe", "U_-bsaKbeEo", "MrTu_qk2E-h", "ACP-0VdlqPW", "a6AwoDXHRgr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "In this work, the authors compare and contrast models wherein layer depth is replaced with equivalent number of recurrent time steps. In particular, they explore feed-forward, CNN and residual deep network wherein the intermediate layers are replaced by an equivalent recurrent block. For each architecture, the aut...
[ 6, -1, 6, -1, 6, -1, -1, -1, -1, -1, 8 ]
[ 4, -1, 5, -1, 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_3wNcr5nq56", "U_-bsaKbeEo", "iclr_2022_3wNcr5nq56", "iclr_2022_3wNcr5nq56", "iclr_2022_3wNcr5nq56", "ACP-0VdlqPW", "a6AwoDXHRgr", "GWsJDgtyQ1v", "pE4wZBLmHz", "ibEFxHl9h1T", "iclr_2022_3wNcr5nq56" ]
iclr_2022_CzceR82CYc
Score-Based Generative Modeling with Critically-Damped Langevin Diffusion
Score-based generative models (SGMs) have demonstrated remarkable synthesis quality. SGMs rely on a diffusion process that gradually perturbs the data towards a tractable distribution, while the generative model learns to denoise. The complexity of this denoising task is, apart from the data distribution itself, unique...
Accept (Spotlight)
The paper develops a diffusion-process based generative model that perturbs the data using a critically damped Langevin diffusion. The diffusion is set up through an auxiliary velocity term like in Hamiltonian dynamics. The idea is that picking a process that diffuses faster will lead to better results. The paper then c...
train
[ "CukcVwz-JXm", "i_p47Jp0wCP", "kEHvTzQDE6y", "TNQA0dCafqp", "7Jgu_Un7TT0", "LAufUI4LCwU", "aW8wsRaCp6", "yEe3zB1D5ZV", "BI6QPlHER0", "FS-OmzfGwqU", "9Wa1wPkvi_U", "bHKuKHnTrxz", "lZim7VZM-MH", "fa8cgh27ZYv" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for the reply and for clarifying the suggested experiment. We have just updated the manuscript with the proposed experiment: In particular, Section F.1.1 now also includes an experiment that uses EM sampling for the CLD-based model with the analytically known score, as suggeste...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 10, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "kEHvTzQDE6y", "iclr_2022_CzceR82CYc", "7Jgu_Un7TT0", "iclr_2022_CzceR82CYc", "LAufUI4LCwU", "aW8wsRaCp6", "i_p47Jp0wCP", "BI6QPlHER0", "bHKuKHnTrxz", "lZim7VZM-MH", "fa8cgh27ZYv", "iclr_2022_CzceR82CYc", "iclr_2022_CzceR82CYc", "iclr_2022_CzceR82CYc" ]
iclr_2022_D6nH3719vZy
On Improving Adversarial Transferability of Vision Transformers
Vision transformers (ViTs) process input images as sequences of patches via self-attention; a radically different architecture than convolutional neural networks (CNNs). This makes it interesting to study the adversarial feature space of ViT models and their transferability. In particular, we observe that adversarial ...
Accept (Spotlight)
In this paper, the authors enhance the adversarial transferability of vision transformers by introducing two novel strategies specific to the architecture of ViT models: Self-Ensemble and Token Refinement method. Comprehensive experiments on various models (including CNN's and ViT's variants) and tasks (classification,...
train
[ "do-2UT-eUX", "AA99zhh2RrR", "0spBr3XAF57", "qiV4hibDuyJ", "QfUhRestkN", "7zUVziTCdX", "NhLvO4s_YMV", "JGmu85kT_LB", "7t8RNas7C-Q", "RPg7jIMm3j", "fI3tb3Zg3BK" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper looks at Vision Transforms (ViTs) models and transferability of adversarial examples, which is previously known to be challenging between ViTs to CNNs and vice versa. The paper leverages the discriminative information stored in the lower layers' tokens and proposes two methods that modify and fine-tune t...
[ 8, -1, 8, -1, 8, -1, -1, -1, -1, -1, 6 ]
[ 3, -1, 3, -1, 5, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_D6nH3719vZy", "NhLvO4s_YMV", "iclr_2022_D6nH3719vZy", "7t8RNas7C-Q", "iclr_2022_D6nH3719vZy", "0spBr3XAF57", "do-2UT-eUX", "fI3tb3Zg3BK", "QfUhRestkN", "iclr_2022_D6nH3719vZy", "iclr_2022_D6nH3719vZy" ]
iclr_2022_BmJV7kyAmg
Towards Understanding the Robustness Against Evasion Attack on Categorical Data
Characterizing and assessing the adversarial vulnerability of classification models with categorical input has been a practically important yet rarely explored research problem. Our work echoes the challenge by first unveiling the impact factors of adversarial vulnerability of classification models with categorical ...
Accept (Poster)
In this manuscript, the authors study the relatively unexplored problem of how to characterize and assess the adversarial vulnerability of classification models with categorical input. Even though certifying the robustness of such classification models is intrinsically an NP-hard combinatorial problem, the authors show that t...
train
[ "6pKsz1S6gAG", "Y2wd44R7KXj", "UM1_fhvpoml" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes new methods to gauge model robustness to perturbations in categorical data. The experiments show the effectiveness of such new estimators and corroborates the intuition over model robustness v.s. mutual information over features.\n\n=============================================\nI acknowledge th...
[ 8, 6, 6 ]
[ 4, 3, 4 ]
[ "iclr_2022_BmJV7kyAmg", "iclr_2022_BmJV7kyAmg", "iclr_2022_BmJV7kyAmg" ]
iclr_2022_81e1aeOt-sd
On-Policy Model Errors in Reinforcement Learning
Model-free reinforcement learning algorithms can compute policy gradients given sampled environment transitions, but require large amounts of data. In contrast, model-based methods can use the learned model to generate new data, but model errors and bias can render learning unstable or suboptimal. In this paper, we pre...
Accept (Poster)
This paper presents a study of on-policy data in the context of model-based reinforcement learning and proposes a way to ameliorate the resulting model errors. This is a timely and interesting contribution, and all reviewers agree on the quality of the manuscript. Please incorporate all the remaining feedback from the...
val
[ "4g2z7J-Bn9J", "-Iugs_kuw53", "MgLhG_nnw59", "WUJU64_JuP", "5iAHpvxSxO2", "ihPPm485_hs", "NkNPfB2sDO", "TKdS-6KF19Q", "XAU4IeTD-ZI", "rsl1twvIVNJ", "p9s7M5gU7aP", "RV2DBFpHK7w", "Kg6lYla4fe", "baBfexs3A7z", "X165z947b33", "ShGFzXEtnpJ", "hiY019eDyrg", "9tnsaTCmYjt", "Xr_7XJZVKhX"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ "The paper considers model-based reinforcement learning (MBRL) of the MBPO flavour, where model-free RL methods are accelerated using rollouts of a learned model. \nTHis paper proposes to alleviate (off-policy) model bias per rollout using the (on-policy) data and calls this approach on-policy correction (OPC). \nW...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5 ]
[ 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "iclr_2022_81e1aeOt-sd", "rsl1twvIVNJ", "X165z947b33", "NkNPfB2sDO", "ihPPm485_hs", "ShGFzXEtnpJ", "Kg6lYla4fe", "RV2DBFpHK7w", "ShGFzXEtnpJ", "p9s7M5gU7aP", "4g2z7J-Bn9J", "Xr_7XJZVKhX", "baBfexs3A7z", "9tnsaTCmYjt", "hiY019eDyrg", "iclr_2022_81e1aeOt-sd", "iclr_2022_81e1aeOt-sd", ...
iclr_2022_8eb12UQYxrG
The Role of Pretrained Representations for the OOD Generalization of RL Agents
Building sample-efficient agents that generalize out-of-distribution (OOD) in real-world settings remains a fundamental unsolved problem on the path towards achieving higher-level cognition. One particularly promising approach is to begin with low-dimensional, pretrained representations of our world, which should facil...
Accept (Poster)
It can be prohibitively expensive to train a reinforcement learner from scratch — particularly in cases where experience is expensive to obtain, such as with a physical robot. So, we might hope to speed up RL in a couple of ways: first, by pre-training a representation that makes subsequent RL need less data; and...
train
[ "FIM3dU4Q7I", "qWEkQPJ4L7_", "AR_bm1m5liy", "uUciYPKHFAf", "wQ8oot7giud", "Ck2HfaHgJf", "KDVmCTucCLh", "h1HA0woLjCr", "AppDE0iV-1L", "xgO3z_c5APl", "dYXf04vjhjH", "YUj7oZo8WX7", "oq_Ys-0jbuK", "OBk_B4w4ln2", "y1RDKNO1eAp" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " It would be great if the other reviewers could comment on reviewer vE69's updated review, in particular the point about whether there is a need to go farther out of distribution. I'd like to split into two questions:\n\n* The OOD settings in the paper are clearly different from the training distribution, but just...
[ -1, -1, -1, -1, -1, 5, 8, -1, -1, -1, -1, -1, -1, 3, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_8eb12UQYxrG", "wQ8oot7giud", "xgO3z_c5APl", "h1HA0woLjCr", "AppDE0iV-1L", "iclr_2022_8eb12UQYxrG", "iclr_2022_8eb12UQYxrG", "oq_Ys-0jbuK", "Ck2HfaHgJf", "dYXf04vjhjH", "y1RDKNO1eAp", "OBk_B4w4ln2", "KDVmCTucCLh", "iclr_2022_8eb12UQYxrG", "iclr_2022_8eb12UQYxrG" ]
iclr_2022_PilZY3omXV2
CoST: Contrastive Learning of Disentangled Seasonal-Trend Representations for Time Series Forecasting
Deep learning has been actively studied for time series forecasting, and the mainstream paradigm is based on the end-to-end training of neural network architectures, ranging from classical LSTM/RNNs to more recent TCNs and Transformers. Motivated by the recent success of representation learning in computer vision and n...
Accept (Poster)
The paper proposes to learn disentangled trends and seasonal representations of time series for forecasting tasks. It shows separating the representation learning and downstream forecasting task to be a more promising paradigm than the standard end-to-end supervised training approach for time-series forecasting. Duri...
test
[ "KmMYgNn0kN", "VpesgFgTT1d", "xVM4Pbhh48s", "bA6Id2qGm5l", "QA9MPvE8XuB", "ZyNBBQ0Yae", "We8_OEWz2IL", "deqVUmc70Rd", "CHLhhN1BO3L", "Y3yqsB4jCMB", "53tZIGpMBz", "uOkjzMYHzhy", "xAt1McnmsrS", "irqKa-ig3b4", "zv0Y_5T9k6M", "oUXbkmIZ8XJ", "CAD_RVJ66KQ", "EY0JepiJjtj", "6BC6fU3q6Hc"...
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_...
[ " The response addressed all my concern and I would be happy to see it get accepted. But due to the limitation as Q4 states, I will keep my score as it is and hope to see it improved in the future.", " We thank the reviewer for the valuable comments. As the discussion phase will end soon, we would like to check i...
[ -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "VpesgFgTT1d", "CAD_RVJ66KQ", "ZyNBBQ0Yae", "deqVUmc70Rd", "iclr_2022_PilZY3omXV2", "zv0Y_5T9k6M", "CHLhhN1BO3L", "iclr_2022_PilZY3omXV2", "EY0JepiJjtj", "53tZIGpMBz", "uOkjzMYHzhy", "xAt1McnmsrS", "irqKa-ig3b4", "mPqwWGPLNp-", "QA9MPvE8XuB", "j2h6AbLGTz", "oUXbkmIZ8XJ", "6BC6fU3q6...
iclr_2022_EskfH0bwNVn
Resolving Training Biases via Influence-based Data Relabeling
The performance of supervised learning methods easily suffers from the training bias issue caused by train-test distribution mismatch or label noise. Influence function is a technique that estimates the impacts of a training sample on the model’s predictions. Recent studies on \emph{data resampling} have employed infl...
Accept (Oral)
All reviewers are very positive about this paper. The reviewer with the lowest score did independent experiments that show that the authors' method works well, and has had an extensive discussion with the authors that justifies a higher score. The paper is potentially very valuable to practitioners, since it shows how ...
train
[ "wg_-_kNRnfc", "NKEpRVS6qxq", "wqoOg8286Xu", "aIzUwoCkM7Y", "qYJrPKYAiPj", "sKyc-6wuphw", "SMPCVkD8ylQ", "fahfEj3d2De", "Q8FZmN8D7y", "wlEpNyse3Ge", "f3QFGJFD7eJ", "lEvdJ2WzlK", "U5Z5VDLImxQ", "EusC-A9gz45", "w8-V64DVOqp", "0Qaq25XnkT", "kLVnZ1UCR_", "IZ_5jg4DJMt", "9MVmm2oA0v", ...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "...
[ " Dear reviewer,\n\nWe appreciate the response from the reviewer. It is good to see we are now in sync about the effectiveness of RDIA.\n\nFollowing your constructive suggestion, we would like to describe [Ref1] appropriately in the main body of the paper and include the additional comparison results in the Appendi...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5 ]
[ "wqoOg8286Xu", "iclr_2022_EskfH0bwNVn", "aIzUwoCkM7Y", "sKyc-6wuphw", "sKyc-6wuphw", "Q8FZmN8D7y", "Q8FZmN8D7y", "NKEpRVS6qxq", "lEvdJ2WzlK", "iclr_2022_EskfH0bwNVn", "IZ_5jg4DJMt", "w8-V64DVOqp", "0Qaq25XnkT", "0Qaq25XnkT", "0Qaq25XnkT", "zQQO-ahYToL", "NKEpRVS6qxq", "wlEpNyse3Ge"...
iclr_2022_RRGVCN8kjim
Sparse DETR: Efficient End-to-End Object Detection with Learnable Sparsity
DETR is the first end-to-end object detector using a transformer encoder-decoder architecture and demonstrates competitive performance but low computational efficiency. The subsequent work, Deformable DETR, enhances the efficiency of DETR by replacing dense attention with deformable attention, which achieves 10x faster...
Accept (Poster)
This paper proposes to modify DETR, a recent Transformer-based architecture for object detection. More precisely, they propose to sparsify input feature maps by learning an extra classifier to select which input features (few of them) will be used in the attention module. The supervision of this classifier is guided b...
train
[ "rnZwePlNMzG", "RGnLp6ecKS1", "HgkS0_lyy_D", "SHNZ-3g-bPM", "rKTYPEh-bhj", "aDj-u2UtFLf", "Au9dLEvSoxX", "txNPyrVSpbL", "qX94ASfFHpp", "oHUQN5ajajT" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to all reviewers for constructive suggestions that help make this work more complete.\nFollowing their suggestions, we have made the following major updates to the manuscript, including **+3 pages** in Appendix, to provide more justifications for the paper:\n\n- *Sentences that may have ambiguous meanings ...
[ -1, -1, -1, -1, -1, -1, 8, 5, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 5, 5 ]
[ "iclr_2022_RRGVCN8kjim", "txNPyrVSpbL", "oHUQN5ajajT", "qX94ASfFHpp", "txNPyrVSpbL", "Au9dLEvSoxX", "iclr_2022_RRGVCN8kjim", "iclr_2022_RRGVCN8kjim", "iclr_2022_RRGVCN8kjim", "iclr_2022_RRGVCN8kjim" ]
iclr_2022_57PipS27Km
Continuous-Time Meta-Learning with Forward Mode Differentiation
Drawing inspiration from gradient-based meta-learning methods with infinitely small gradient steps, we introduce Continuous-Time Meta-Learning (COMLN), a meta-learning algorithm where adaptation follows the dynamics of a gradient vector field. Specifically, representations of the inputs are meta-learned such that a tas...
Accept (Spotlight)
This paper addresses a continuous-time formulation of gradient-based meta-learning (COMLN) where the adaptation is the solution of a differential equation. In general, outer-loop optimization requires backpropagating over trajectories involving gradient updates in the inner-loop optimization. It is claimed that one of...
test
[ "PC1A8-OgZG3", "MmEw6zmXAiM", "R87YPA3xubV", "wX-4bmMVvLv", "hsbeOGtirCv", "Qsd_MopBrWP", "6knfClDA1el", "a3xcJ-d_BL", "zpa97lG1Ae", "8uU7K7rcHzy", "rZrvucMFriR", "YzDKnicDc7A", "wL8NdaE-VkI", "gpagJt_2LH4", "d_YisKjPOZH", "57QK-YeHsQA" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their response, and the additional comment on MSE.\nI am glad to see the paper headed towards acceptance.", " We are glad that all your concerns have been addressed, and we appreciate that you increased your score. We will make sure that this discussion will be included in ...
[ -1, -1, -1, 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "zpa97lG1Ae", "hsbeOGtirCv", "wX-4bmMVvLv", "iclr_2022_57PipS27Km", "6knfClDA1el", "iclr_2022_57PipS27Km", "a3xcJ-d_BL", "Qsd_MopBrWP", "57QK-YeHsQA", "rZrvucMFriR", "wX-4bmMVvLv", "wL8NdaE-VkI", "d_YisKjPOZH", "iclr_2022_57PipS27Km", "iclr_2022_57PipS27Km", "iclr_2022_57PipS27Km" ]
iclr_2022_vrW3tvDfOJQ
Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation
In model-free deep reinforcement learning (RL) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to the sample efficiency. As this noise is heteroscedastic, its effects can be mitigated using uncertainty-based weights in the optimization process. Previous methods rel...
Accept (Spotlight)
The reviewers unanimously appreciated the clarity of the work as well as the framing of the proposed method. Congratulations.
test
[ "b3-C-AfCQp", "WF93IeIupS", "QsimJ7puVS", "VXUCEI5M2Y", "Xytqj4nn3KW", "ilCeJs02LK2", "I17_uLVhBog", "qZBBl9swvq", "FMzB_SWhkN", "ecRfCvQ0H9f", "jYTqs6_CaJa", "ENT3KhlFmFr", "03-FyZCK-HW", "6Po9AxCalk" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "The paper presents inverse-variance RL, which combines an ensemble of probabilistic neural networks with batch inverse-variance weighting. The former provides a method for uncertainty estimation and the latter provides a method for incorporating that uncertainty, by multiplying the loss of each entry in each minib...
[ 8, -1, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 10 ]
[ 4, -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 5 ]
[ "iclr_2022_vrW3tvDfOJQ", "jYTqs6_CaJa", "iclr_2022_vrW3tvDfOJQ", "FMzB_SWhkN", "iclr_2022_vrW3tvDfOJQ", "ecRfCvQ0H9f", "ecRfCvQ0H9f", "6Po9AxCalk", "QsimJ7puVS", "Xytqj4nn3KW", "b3-C-AfCQp", "jYTqs6_CaJa", "iclr_2022_vrW3tvDfOJQ", "iclr_2022_vrW3tvDfOJQ" ]
iclr_2022_rUwm9wCjURV
In a Nutshell, the Human Asked for This: Latent Goals for Following Temporal Specifications
We address the problem of building agents whose goal is to learn to execute out-of distribution (OOD) multi-task instructions expressed in temporal logic (TL) by using deep reinforcement learning (DRL). Recent works provided evidence that the agent's neural architecture is a key feature when DRL agents are learning to ...
Accept (Poster)
The paper considers the problem of learning to carry out novel, multi-task instructions specified via temporal logic using deep reinforcement learning. A specific focus of the paper is improving generalization to test-time instructions that differ from those encountered during training. To facilitate this generalizatio...
train
[ "7UFfZwXnFTr", "9qfXa9c7lzs", "acqEx0pg6oU", "NO8MXhO4afV", "poATHw33U1N", "EPP6bmUA-Fk", "uEQhPvICdCA", "K_jNlQJeV-g", "mu-glYkKWsC", "N_ahZWGOfDI", "XRr2HxiqYdN", "uCHCF9vnsTY", "i2qjsVgjJKD", "Iuax6U438j", "ylzYDPdSG9m", "bcImNImG1OU", "oxifOWw0abn", "rKwyluiRDdP" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " > Re: Scope + Motivation.\n\nWe thank the reviewer for the additional time spent with our work. While the reviewer mentions very interesting and challenging problems from the robotics and locomotion literature, e.g. learning from human demonstrations, please note that our work is focused on the problem of agents ...
[ -1, -1, -1, -1, -1, 6, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "9qfXa9c7lzs", "acqEx0pg6oU", "rKwyluiRDdP", "K_jNlQJeV-g", "EPP6bmUA-Fk", "iclr_2022_rUwm9wCjURV", "iclr_2022_rUwm9wCjURV", "uCHCF9vnsTY", "i2qjsVgjJKD", "ylzYDPdSG9m", "oxifOWw0abn", "poATHw33U1N", "acqEx0pg6oU", "iclr_2022_rUwm9wCjURV", "bcImNImG1OU", "uEQhPvICdCA", "iclr_2022_rUw...
iclr_2022_jbrgwbv8nD
Constraining Linear-chain CRFs to Regular Languages
A major challenge in structured prediction is to represent the interdependencies within output structures. When outputs are structured as sequences, linear-chain conditional random fields (CRFs) are a widely used model class which can learn local dependencies in the output. However, the CRF's Markov assumption makes i...
Accept (Poster)
This paper does as its title suggests: it introduces an algorithm for constraining a CRF’s output space to correspond to a pre-specified regular language. The authors build upon a wealth of prior work aiming to enable CRFs to capture particular non-local dependencies and output constraints and present a coherent gener...
train
[ "p6_UBWIXTon", "0rzvgyQBbSv", "KhzC-eEEA8", "VJkSEjqbIYJ", "jgkQ1b63eju", "Mm-0cbr2fWp", "s6Tb8oeNVpi", "Z6qXdnCIv3G", "jdylFr9eX_", "pnRBxY2lmV" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper claims to propose a generalized version of the CRF, the regular-constrained CRF (RegCCRF). Compared with the traditional CRF, it can not only model local interdependencies but also incorporate non-local constraints. Specifically, by specifying the space of possible output structures as a regular la...
[ 5, -1, -1, -1, -1, -1, -1, 6, 6, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_jbrgwbv8nD", "KhzC-eEEA8", "pnRBxY2lmV", "Z6qXdnCIv3G", "jdylFr9eX_", "s6Tb8oeNVpi", "p6_UBWIXTon", "iclr_2022_jbrgwbv8nD", "iclr_2022_jbrgwbv8nD", "iclr_2022_jbrgwbv8nD" ]
iclr_2022_H0oaWl6THa
Hybrid Local SGD for Federated Learning with Heterogeneous Communications
Communication is a key bottleneck in federated learning where a large number of edge devices collaboratively learn a model under the orchestration of a central server without sharing their own training data. While local SGD has been proposed to reduce the number of FL rounds and become the algorithm of choice for FL, i...
Accept (Spotlight)
The paper contributes to the literature on federated learning by introducing a hybrid local SGD (HL-SGD) method. HL-SGD is motivated by the setups where edge devices are grouped into clusters with fast connections within the cluster, but slower connection between the devices and the server. HL-SGD uses hybrid updates: ...
train
[ "Ci5HzH30-9e", "RGbKNldpa0", "bQCxtRGvVQ", "nU1CfIOCVZZ", "CyO3EdVj_I", "trDbVgBASwj", "cjou7rvsUqg", "rJ7TkBg9rkC", "U2ygb9_Lbkw", "KF9vSibza8g", "dkHijf_Ih2", "BPbRJljtAwm", "OIWnrPUpDGX", "VD08qzSiPF", "SIKjvc9B-xd", "-H-U9X4FmRU", "uOK5DeuyWOr", "KgC53hbMukO" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "public", "author", "public", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces HL-SGD, a hybrid federated learning method designed to leverage a hybrid high-speed D2D and low-speed D2S network and speed up communication in federated learning applications. Both theoretical analysis and empirical results are provided to show the effectiveness of HL-SGD. The ...
[ 6, -1, 8, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, -1, 4, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_H0oaWl6THa", "nU1CfIOCVZZ", "iclr_2022_H0oaWl6THa", "OIWnrPUpDGX", "iclr_2022_H0oaWl6THa", "cjou7rvsUqg", "-H-U9X4FmRU", "iclr_2022_H0oaWl6THa", "KF9vSibza8g", "dkHijf_Ih2", "BPbRJljtAwm", "iclr_2022_H0oaWl6THa", "VD08qzSiPF", "bQCxtRGvVQ", "KgC53hbMukO", "uOK5DeuyWOr", "r...
iclr_2022_Xo0lbDt975
An Agnostic Approach to Federated Learning with Class Imbalance
Federated Learning (FL) has emerged as the tool of choice for training deep models over heterogeneous and decentralized datasets. As a reflection of the experiences from different clients, severe class imbalance issues are observed in real-world FL problems. Moreover, there exists a drastic mismatch between the imbala...
Accept (Poster)
This paper presents a method to handle class imbalance in federated learning, while accounting for data heterogeneity and privacy. The key idea is to solve a constrained optimization problem where the difference between the global and local objective values has to be less than some parameter $\epsilon$. The paper propo...
train
[ "3as-Cj3yMw", "L3DrAEUPc5d", "b9KIOQuHAV5", "xpxlCg4gXam", "sgThmZgmW92", "JFnwFWIUCY", "Z5Z37kEGYRa", "R2RQ4UtvLSp" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The authors design a method, CLIMB, to solve severe class imbalance issues in FL problems. In their method, they propose a constrained FL formulation (CFL) and adopt the method of Lagrange multipliers to solve this problem. They present some theoretical and experimental results to prove that their method is eff...
[ 6, 6, -1, -1, -1, -1, 6, 6 ]
[ 4, 3, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_Xo0lbDt975", "iclr_2022_Xo0lbDt975", "R2RQ4UtvLSp", "Z5Z37kEGYRa", "3as-Cj3yMw", "L3DrAEUPc5d", "iclr_2022_Xo0lbDt975", "iclr_2022_Xo0lbDt975" ]
iclr_2022_A3HHaEdqAJL
Task Relatedness-Based Generalization Bounds for Meta Learning
Supposing the $n$ training tasks and the new task are sampled from the same environment, traditional meta learning theory derives an error bound on the expected loss over the new task in terms of the empirical training loss, uniformly over the set of all hypothesis spaces. However, there is still little research on how...
Accept (Spotlight)
This paper provides generalization bounds for meta-learning based on a notion of task-relatedness. The result is natural and interesting--intuitively, when tasks are similar, then meta-learning algorithms should be able to utilize all data points across all tasks. The theoretical contribution is novel, and the results ...
val
[ "ljof7F2l60", "ebvkhyTFf16", "RaIrML4zehU", "pF1dutuvP5w" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents novel generalization bounds for meta-learning based on a notion of task-relatedness that allows one to compare two tasks, notably by allowing a mapping only in subregions where the similarity can be measured. The theoretical results include a covering number bound, a covering number meta-learn...
[ 8, 8, 8, 8 ]
[ 3, 1, 2, 3 ]
[ "iclr_2022_A3HHaEdqAJL", "iclr_2022_A3HHaEdqAJL", "iclr_2022_A3HHaEdqAJL", "iclr_2022_A3HHaEdqAJL" ]
iclr_2022_ccWaPGl9Hq
Towards Deployment-Efficient Reinforcement Learning: Lower Bound and Optimality
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL). Despite the community's increasing interest, there lacks a formal theoretical formulation for the problem. In this paper, we propose such a formulation for deployment-efficient RL (DE-RL) from an ''optimizat...
Accept (Spotlight)
The authors present a precise definition of deployment-efficient RL, where each new update of the policy may be costly, and theoretically analyze this setting for finite-horizon linear MDPs. The authors include an information-theoretic lower bound on the number of deployments required. The reviewers found this an important s...
train
[ "-ErBdb7exUi", "FyAm0Y85LTi", "HjaN51aHQ8Y", "ZT4lUHEDrE", "FGZMEEUK7pX", "xlTRiNdlgYC", "I31CXLgmnzv", "yH5bl4apMG", "CVTzYnjQb63", "X9KkbAbkjX2", "kTWnsHkTk7" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Indeed it would be nicer to have $N=O(1/\\epsilon^2)$ even with a small constant $c_K$. The technical difficulty that prevents us from having this result is in the Ellipsoid Potential Lemma 4.2; see its proof on page 26. In particular, in Equation above Eq.(7), we have $(...)^p \\ge 1 + KN/d$, where $p$ is later ...
[ -1, -1, -1, -1, -1, -1, -1, 8, 8, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "FyAm0Y85LTi", "FGZMEEUK7pX", "kTWnsHkTk7", "iclr_2022_ccWaPGl9Hq", "X9KkbAbkjX2", "CVTzYnjQb63", "yH5bl4apMG", "iclr_2022_ccWaPGl9Hq", "iclr_2022_ccWaPGl9Hq", "iclr_2022_ccWaPGl9Hq", "iclr_2022_ccWaPGl9Hq" ]
iclr_2022_mQxt8l7JL04
Regularized Autoencoders for Isometric Representation Learning
The recent success of autoencoders for representation learning can be traced in large part to the addition of a regularization term. Such regularized autoencoders ``constrain" the representation so as to prevent overfitting to the data while producing a parsimonious generative model. A regularized autoencoder should in...
Accept (Poster)
This paper proposes an extension to learning a representation: it motivates, proposes and evaluates a new regularizer term that promotes smoothness via enforcing the representation to be geometry-preserving (isometry, conformal mapping of degree k). Comparisons with a standard VAE and FMVAE (Chen et al. 2020) are shown...
val
[ "R0aLX_LKS0P", "TxIN4nhz4Q7", "ybmjmKgJ0Uf", "WS8UMo0Btt", "X7sWIX8lSN0", "m8opl65iIyI", "j-7A3-qspB8", "9HNOaIFcqHE", "BQvrc54yPiw", "hm86NzQ3JlF", "d5tyndtu5GL", "Mzwj5zG0j8C", "uFS_IUmVdNz", "BshNTpsXVX0", "LBzmyak48pA", "v60ourA3tf", "QnfphSwgDG", "grldadT43DD", "IG0yMkOX5AG"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Thank you again for recognizing the value of our research. Your questions and comments have helped greatly in the revision of the paper!", " Thank you for acknowledging our contribution and raising the score! Your comments have helped a lot to improve our paper. We sincerely appreciate this reviewing process!",...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3, 4 ]
[ "ybmjmKgJ0Uf", "WS8UMo0Btt", "v60ourA3tf", "hm86NzQ3JlF", "m8opl65iIyI", "grldadT43DD", "9HNOaIFcqHE", "LBzmyak48pA", "T1ALZFbeYFG", "5scvzWXHZD", "Mqex2fDcRxr", "Mqex2fDcRxr", "Mqex2fDcRxr", "0uv9jZUw4ko", "0uv9jZUw4ko", "mNj8JNH9Z9s", "mNj8JNH9Z9s", "T1ALZFbeYFG", "5scvzWXHZD",...
iclr_2022_pfNyExj7z2
Vector-quantized Image Modeling with Improved VQGAN
Pretraining language models with next-token prediction on massive text corpora has delivered phenomenal zero-shot, few-shot, transfer learning and multi-tasking capabilities on both generative and discriminative language tasks. Motivated by this success, we explore a Vector-quantized Image Modeling (VIM) approach that ...
Accept (Poster)
This paper adopts ViT in place of the CNN in the VQ-GAN framework and achieves SOTA FID and IS scores. The empirical results are impressive and could benefit practical applications. The technical novelty is limited, but tricks such as l2-normalization of the codes are interesting.
train
[ "8CsIvqlrfLO", "bOpw6IYzTVh", "sWTc9G6ccRv", "p5wmjueFnfo", "VFVpvB3Phvh", "79Dht9UqoXW", "kWUMqEWqWLU", "QAqLB2vKTiK", "mVzIbu6QBDg", "A1goIPCBsne", "AC6aN2bf3VM", "FCDzy1BQMyX", "biSJQbjYMzs", "BwGbrc7u23x" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "official_reviewer", "official_reviewer" ]
[ " ## Part 1: Main Concerns\n\nQ: Clear motivations and technical novelty.\n\nA: We first thank the reviewer for your insightful comments and questions, as well as encouraging comments tending to accept. We totally understand this concern and want to make it more clear as below (*mostly the same as our reply to Revi...
[ -1, -1, -1, -1, 6, -1, 6, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, 4, -1, 4, -1, -1, -1, -1, -1, 5, 4 ]
[ "biSJQbjYMzs", "kWUMqEWqWLU", "p5wmjueFnfo", "A1goIPCBsne", "iclr_2022_pfNyExj7z2", "mVzIbu6QBDg", "iclr_2022_pfNyExj7z2", "bOpw6IYzTVh", "BwGbrc7u23x", "VFVpvB3Phvh", "FCDzy1BQMyX", "iclr_2022_pfNyExj7z2", "iclr_2022_pfNyExj7z2", "iclr_2022_pfNyExj7z2" ]
iclr_2022_Nh7CtbyoqV5
Normalization of Language Embeddings for Cross-Lingual Alignment
Learning a good transfer function to map the word vectors from two languages into a shared cross-lingual word vector space plays a crucial role in cross-lingual NLP. It is useful in translation tasks and important in allowing complex models built on a high-resource language like English to be directly applied on an ali...
Accept (Poster)
The authors propose a normalization method for cross-lingual text representations. The goal is to normalize the monolingual embeddings based on spectral normalization. The study shows that the produced text representations keep their meaning and improve performance on downstream tasks. There is disagreement among the re...
train
[ "By5wuZgstqW", "dWyz8C6g4c", "ThQRTygPZhb", "oVsRXCebnr7", "EZr9TuKw5_0", "4DQJjJx-x5Q", "IVpeHBevpM_", "DztansNWqbo", "2zhB7Wudd4m", "PuTxBexnO-J", "eK4DqaHXP2h", "GlBizxqjVMZ", "-8VH_Orw-JY", "ZFqpz5s9S-P", "WsaSKGemDMQ", "4-7jEqii_n", "bVPthZ_jhGv", "SkC8kAcjw5Y", "HtT9BJbSiV"...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",...
[ "The paper proposes a new spectral normalization technique that improves the cross-lingual mapping of monolingual embeddings by rigid, orthogonal transformations. This is demonstrated by consistent gains on bilingual lexical induction and two other downstream cross-lingual tasks.\n Strength:\n+ The goal is clearly...
[ 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 6, 8 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "iclr_2022_Nh7CtbyoqV5", "8uEGE1IywP0", "IVpeHBevpM_", "PuTxBexnO-J", "By5wuZgstqW", "rb4E2QfjU3w", "rb4E2QfjU3w", "xrdqJOTf6HL", "xrdqJOTf6HL", "09y6liWbw9r", "09y6liWbw9r", "09y6liWbw9r", "09y6liWbw9r", "09y6liWbw9r", "09y6liWbw9r", "ERkk_F50eyp", "ERkk_F50eyp", "ERkk_F50eyp", ...
iclr_2022_ei3SY1_zYsE
Fortuitous Forgetting in Connectionist Networks
Forgetting is often seen as an unwanted characteristic in both human and machine learning. However, we propose that forgetting can in fact be favorable to learning. We introduce forget-and-relearn as a powerful paradigm for shaping the learning trajectories of artificial neural networks. In this process, the forgetting...
Accept (Poster)
The paper introduces a forget-and-relearn framework for iterative learning algorithms. It provides several new insights showing that forgetting can be favorable to learning and validates these insights on image classification and language tasks. The idea is novel and inspiring. Although there are some debates on the ex...
train
[ "pHdrHhH8u1", "kFYTOf8PwU", "7xCnmkbwz4", "TkaAcCUQw45", "gXP3tDLDi2z", "OFWQKmc3l-7", "05Dypf8qRC2", "iUdh0sJeaI6", "vnJOKUzRucT", "-_yeByzLeL", "Xtw3PckU20", "3nyFKfwyWuG", "xAuT65vCWvd", "LhrXalVQzOvf", "_Tmo5_2NSX", "ntltodyb3JD", "Fx5piVVGqD", "7H4X4O7EW7O" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After reading the author's response (both to my review and others) and the revisions, I'm raising my score to 6.\n\nFor what it's worth, I also apologize for the late response.\n\n", "The paper attempts to unify several prior observations on how partial forgetting during neural network training can affect the f...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 10, 6 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "-_yeByzLeL", "iclr_2022_ei3SY1_zYsE", "iclr_2022_ei3SY1_zYsE", "iclr_2022_ei3SY1_zYsE", "ntltodyb3JD", "iclr_2022_ei3SY1_zYsE", "kFYTOf8PwU", "Fx5piVVGqD", "TkaAcCUQw45", "kFYTOf8PwU", "7H4X4O7EW7O", "7H4X4O7EW7O", "TkaAcCUQw45", "TkaAcCUQw45", "ntltodyb3JD", "Fx5piVVGqD", "iclr_202...
iclr_2022_ltM1RMZntpu
Weighted Training for Cross-Task Learning
In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and target tasks. We show that TAWT is easy to implement, is computationally efficient, requires little hyperparameter tuni...
Accept (Oral)
The paper proposes an approach to learn the task-specific weights in pretraining or mutli-task learning. It provides theoretical guarantees to the algorithm, as well as strong empirical results on several NLP problems. All the reviewers agreed that the work is interesting and the paper is well written. During the discu...
train
[ "a70DyEfba-G", "jCnF08Rq1Np", "S1V4lfJq3yC", "QZyxxlr0jYn", "CvFchBmxC_Z", "_skTszPNpyz", "1JfTzZ_Zf2e", "S6Xp_BWbpSL" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response.", " Thank you for your valuable feedback!\n\n**Reply to “Dynamic weights analysis”:** Thanks for your suggestion! We will highlight it in the main text in the revised version. \n\n**Reply to “Normalized joint training”:** Thanks for your suggestion! We will make it more clear in the re...
[ -1, -1, -1, -1, 8, 6, 8, 8 ]
[ -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "QZyxxlr0jYn", "S6Xp_BWbpSL", "_skTszPNpyz", "CvFchBmxC_Z", "iclr_2022_ltM1RMZntpu", "iclr_2022_ltM1RMZntpu", "iclr_2022_ltM1RMZntpu", "iclr_2022_ltM1RMZntpu" ]
iclr_2022_RShaMexjc-x
Semi-relaxed Gromov-Wasserstein divergence and applications on graphs
Comparing structured objects such as graphs is a fundamental operation involved in many learning tasks. To this end, the Gromov-Wasserstein (GW) distance, based on Optimal Transport (OT), has proven to be successful in handling the specific nature of the associated objects. More specifically, through the nodes connecti...
Accept (Poster)
This paper takes advantage of a well known fact in the OT literature: that relaxing either of the marginals of OT problems results in nearest neighbor assignments (as e.g. in k-means) or soft-assignments when using an entropic regularizer. The authors take advantage of this simple property (used e.g. in the first itera...
train
[ "4iGgCMmz06J", "x_qkRPqpRhK", "QFq1n3j3g-M", "U9n-E3863TX", "S7HSZjsQ77h", "JRZ26twtO87", "A-xg1szqCIs", "rDBjcFN2ExO", "ZQCQUAUKB4", "XDxFHLRrWnt", "QXBtLCmCOcE", "fgTrKdbtjHo" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the responses. I think the applications are interesting but lacking theory to back up the results rather than some intuitions. I put this paper on the borderline. ", " We thank the reviewers for its comments. Please find below our detailed answer:\n\n### Typos and minor co...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 4 ]
[ "S7HSZjsQ77h", "XDxFHLRrWnt", "ZQCQUAUKB4", "fgTrKdbtjHo", "QXBtLCmCOcE", "QXBtLCmCOcE", "QXBtLCmCOcE", "iclr_2022_RShaMexjc-x", "iclr_2022_RShaMexjc-x", "iclr_2022_RShaMexjc-x", "iclr_2022_RShaMexjc-x", "iclr_2022_RShaMexjc-x" ]
iclr_2022_7grkzyj89A_
Generalization Through the Lens of Leave-One-Out Error
Despite the tremendous empirical success of deep learning models to solve various learning tasks, our theoretical understanding of their generalization ability is very limited. Classical generalization bounds based on tools such as the VC dimension or Rademacher complexity, are so far unsuitable for deep models and it ...
Accept (Poster)
This paper proposes to use LOO to characterize the generalization error of neural networks via the connection between NNs and kernel learning. The reviewers find the new results interesting. The meta-reviewer agrees and thus recommends acceptance.
train
[ "msldiaF3p4D", "xot6BsxKZLw", "PcJPBHpzZIf", "gc4VVh-M38G", "oQd1cZ1_eIh", "zzQ7LiAPqxU", "R3Lt4sHL-no", "uBwLiuZpyUe", "vmnXKNn37k6" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper investigates leave-one-out (LOO) error as a generalization measure for wide, deep neural networks using the correspondence to the NTK. The authors show that LOO error exhibits behaviors seen in DNNs such as random label fitting, double descent and transfer learning. \n\nSpecific contributions claimed by the aut...
[ 6, 6, -1, -1, -1, -1, -1, -1, 6 ]
[ 4, 3, -1, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_7grkzyj89A_", "iclr_2022_7grkzyj89A_", "uBwLiuZpyUe", "xot6BsxKZLw", "gc4VVh-M38G", "R3Lt4sHL-no", "msldiaF3p4D", "vmnXKNn37k6", "iclr_2022_7grkzyj89A_" ]
iclr_2022_R79ZGjHhv6p
Toward Faithful Case-based Reasoning through Learning Prototypes in a Nearest Neighbor-friendly Space.
Recent advances in machine learning have brought opportunities for the ever-increasing use of AI in the real world. This has created concerns about the black-box nature of many of the most recent machine learning approaches. In this work, we propose an interpretable neural network that leverages metric and prototype le...
Accept (Poster)
The authors propose a neural network model to preserve the sub-class similarity. The key of the model is to add a prototype layer to a multi-scale deep nearest neighbor network. The prototype layer stores the representative prototypes of some fine-grained sub-classes. The use of the prototype layer preserves intepretab...
train
[ "5FzinoUK3RH", "qy5nsIUqTDv", "B80YW9m6Cf_", "NJqBfs9-Srg", "9lzHLwOgyx", "0VGIiXlt_o", "xpt40ybZ1H5", "akeS7DQhpE", "bGzYDCy_g1C", "vgVvRJfF8tr", "R2xK8Qdvs-f", "UUZh38jCPy", "f5hE7hxeEu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ " Thanks for your responses, fixing the typo, explaining your kd-tree experiments and your experimental results without r1/r2 in the loss function, and the rationale behind the number of prototypes. Your responses look good to me.", "This paper presents a method to learn neural nets that preserves the subclass\ns...
[ -1, 6, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 3, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, 4 ]
[ "vgVvRJfF8tr", "iclr_2022_R79ZGjHhv6p", "R2xK8Qdvs-f", "iclr_2022_R79ZGjHhv6p", "xpt40ybZ1H5", "iclr_2022_R79ZGjHhv6p", "akeS7DQhpE", "bGzYDCy_g1C", "0VGIiXlt_o", "f5hE7hxeEu", "qy5nsIUqTDv", "iclr_2022_R79ZGjHhv6p", "iclr_2022_R79ZGjHhv6p" ]
iclr_2022_WAid50QschI
Sparse Communication via Mixed Distributions
Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-en...
Accept (Oral)
This paper proposes mixed distributions over convex polytopes, and provides theory for mixed distributions that is relevant to the machine learning community. All of the reviewers were positive, and agree that this is a solid contribution. I agree, and I believe that this paper stands a chance of being a foundational p...
train
[ "RDMBi-Fvfjh", "warE_0QhIXg", "ZA359BgIhGR", "krFYpkE1Ko", "gsCKJJMAGlX", "3LRcrDxWAnu", "InRF0JlGANm", "Y4HGUhlhYRM", "ikK3ibntNvJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents multidimensional extensions for mixed random variables originating from discrete-continuous hybrids based on truncation and rectification, which have been proposed for univariate distributions. The proposed extension replaces truncation by sparse projections to the simplex. The authors also pro...
[ 8, 8, 6, -1, -1, -1, -1, -1, 8 ]
[ 3, 5, 2, -1, -1, -1, -1, -1, 3 ]
[ "iclr_2022_WAid50QschI", "iclr_2022_WAid50QschI", "iclr_2022_WAid50QschI", "Y4HGUhlhYRM", "warE_0QhIXg", "ikK3ibntNvJ", "RDMBi-Fvfjh", "ZA359BgIhGR", "iclr_2022_WAid50QschI" ]
iclr_2022_ivQruZvXxtz
Sequential Reptile: Inter-Task Gradient Alignment for Multilingual Learning
Multilingual models jointly pretrained on multiple languages have achieved remarkable performance on various multilingual downstream tasks. Moreover, models finetuned on a single monolingual downstream task have been shown to generalize to unseen languages. In this paper, we first show that it is crucial for those tasks to ...
Accept (Poster)
This paper presents a gradient alignment approach to alleviate negative transfer and catastrophic forgetting for multitask and cross-lingual learning. Experiments on many domains and datasets demonstrate the efficacy of the proposed approach. All reviewers agree that the simplicity of the proposed method is a strength...
train
[ "sPdmfboPnkv", "DyUF7ISYMk6", "ZbMsnEtkxV", "5rXFfto22TU", "iih8RG_AGfB", "WfawYOuegHX", "PpABOjlQqE-", "OyGffzKbFG", "TENzfaeQHYQ", "aRQbnaCbXF1", "Wp1qtaOKdWv", "Cudo5BF46dy", "kze3MkyA6NJ", "FGpqHA76XCp", "KkSu0NdMiQs", "uT3pqKkEz2n", "95MAydiiTYZ", "yFKk69WKuZc", "r7cg8EuMOHf...
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ...
[ " Thank you for the response. We would like to kindly answer your comments and questions.\n\n**[Q1]**. In the GLUE benchmark, it seems the proposed algorithm only has marginal improvements against the standard MTL setting. It has meaningful improvements than the baseline in 4 benchmarks, while is on-par or worse on...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 4 ]
[ "DyUF7ISYMk6", "Cudo5BF46dy", "73qUbbCmCrA", "0ztepnZgbTX", "r7cg8EuMOHf", "gJbZJ5QCR47", "r7cg8EuMOHf", "iclr_2022_ivQruZvXxtz", "gJbZJ5QCR47", "gJbZJ5QCR47", "iclr_2022_ivQruZvXxtz", "r7cg8EuMOHf", "73qUbbCmCrA", "0ztepnZgbTX", "73qUbbCmCrA", "0ztepnZgbTX", "73qUbbCmCrA", "0ztepn...
iclr_2022_9ZPegFuFTFv
miniF2F: a cross-system benchmark for formal Olympiad-level mathematics
We present $\textsf{miniF2F}$, a dataset of formal Olympiad-level mathematics problems statements intended to provide a unified cross-system benchmark for neural theorem proving. The $\textsf{miniF2F}$ benchmark currently targets Metamath, Lean, Isabelle (partially) and HOL Light (partially) and consists of 488 problem...
Accept (Poster)
The paper presents miniF2F, a dataset of 488 highschool and college level math problems. The problems are fully formalized and include proofs in the Metamath, Lean and Isabelle theorem provers (as the reviewers pointed out, the support for Isabelle is limited, and that should be made clearer in the abstract). This mult...
train
[ "TFi1mMXE2Yz", "5jAoblzu0oV", "heMrGcysCr", "yVzRg9BvIr", "aNF1lJZjWkv", "ZDvBSjs0FH", "L_7kEfOANMu", "phUDMTyBlO5", "Ly2nXRH3Jip" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper presents miniF2F, a test suite of Olympiad-level problems of theorem proving that is implemented in Metamath, Lean and Isabelle. MiniF2F contains 488 individual theorem statements that are formalized from Olympiad math contests. GPT-f models trained on Metamath and Lean are evaluated on this test suite....
[ 8, -1, -1, -1, -1, -1, 5, 6, 8 ]
[ 5, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "iclr_2022_9ZPegFuFTFv", "Ly2nXRH3Jip", "yVzRg9BvIr", "L_7kEfOANMu", "TFi1mMXE2Yz", "phUDMTyBlO5", "iclr_2022_9ZPegFuFTFv", "iclr_2022_9ZPegFuFTFv", "iclr_2022_9ZPegFuFTFv" ]
iclr_2022_DSQHjibtgKR
Online Facility Location with Predictions
We provide nearly optimal algorithms for online facility location (OFL) with predictions. In OFL, $n$ demand points arrive in order and the algorithm must irrevocably assign each demand point to an open facility upon its arrival. The objective is to minimize the total connection costs from demand points to assigned fac...
Accept (Poster)
This paper considers the recent line of work on algorithms with predictions. They give new results on the online facility location problem. Overall, the reviewers felt the topic was of interest to the community. There were some concerns about the error metric used and the overall framework. However, the majority of r...
train
[ "RnMuIZ6DUBk", "RBefOraCJjD", "-89Blz5r3m", "cVxqkOMoJdi", "vwNkklea8gg", "zyKZEvE9nbR", "cmA_8eErN49", "hCAkfNsRRvB", "ERj8rDPQ7y3", "ikEfhfyZ7bp", "WWjJdI0cAqk", "wtTwD61BAih", "F2EZz9N0jK", "scYrJu6Z-_k" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper considers the online facility location problem with predictions. In this problem, there is a sequence of points which arrive online and the algorithm must either open a facility to serve each point, paying the facility opening cost plus the distance to the opened facility, or it can assign the point to...
[ 8, 6, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "iclr_2022_DSQHjibtgKR", "iclr_2022_DSQHjibtgKR", "cVxqkOMoJdi", "scYrJu6Z-_k", "F2EZz9N0jK", "wtTwD61BAih", "WWjJdI0cAqk", "RBefOraCJjD", "RnMuIZ6DUBk", "iclr_2022_DSQHjibtgKR", "iclr_2022_DSQHjibtgKR", "iclr_2022_DSQHjibtgKR", "iclr_2022_DSQHjibtgKR", "iclr_2022_DSQHjibtgKR" ]
iclr_2022__5js_8uTrx1
Towards Evaluating the Robustness of Neural Networks Learned by Transduction
There has been emerging interest in using transductive learning for adversarial robustness (Goldwasser et al., NeurIPS 2020; Wu et al., ICML 2020; Wang et al., ArXiv 2021). Compared to traditional defenses, these defense mechanisms "dynamically learn" the model based on test-time input; and theoretically, attacking the...
Accept (Poster)
The paper formalizes the adversarial attack problem for transductive defenses, where the model is sequentially updated with a batch of (adversarial) test inputs. The paper comes up with a quite generic attack scheme and their instantiation of this scheme shows that RMC and DENT are not robust respectively not more robu...
train
[ "Kr3-gkIXnm", "XCfsS-cAxa1", "UECHuXlbHQh", "9mLpvx-IJ2", "GgCa7-XTmLW", "NxZrpSDg-7K", "RACQCq35jx3", "tbmzu8uifRT", "oEb4H8Pjrm", "uCy5MRo9BQB" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes adaptive attacks against transductive robust learning methods. These methods aim to improve the robustness of models to adversarial examples by updating the model using unlabeled test data. However, this paper notes that previous evaluation of transductive robust learning has not considered att...
[ 6, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "iclr_2022__5js_8uTrx1", "iclr_2022__5js_8uTrx1", "RACQCq35jx3", "uCy5MRo9BQB", "oEb4H8Pjrm", "Kr3-gkIXnm", "tbmzu8uifRT", "iclr_2022__5js_8uTrx1", "iclr_2022__5js_8uTrx1", "iclr_2022__5js_8uTrx1" ]
iclr_2022_hl9ePdHO4_s
Do We Need Anisotropic Graph Neural Networks?
Common wisdom in the graph neural network (GNN) community dictates that anisotropic models---in which messages sent between nodes are a function of both the source and target node---are required to achieve state-of-the-art performance. Benchmarks to date have demonstrated that these models perform better than comparabl...
Accept (Poster)
The manuscript develops a new and simple graph neural network architecture. The proposal makes use of only O(V) (number of vertices) rather than O(E) (number of edges), meaning that it may be useful for scaling to larger problems. The didactic figures are especially clear, and as is shown in Fig 1 the proposed architectu...
train
[ "Y505v2SKNX2", "UKqOou_wEaO", "LdrmjvhXwqI", "Hq-4L0TUFPS", "rn6zvJuLSj0", "ghH5p7aJnC_", "YvI9rdVPbC-", "UUWbk8VqUEH", "vJemh6hgjy6", "Nz76cHjmrP7" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your responses, I am mostly convinced by what you report.\n\n\"It’s true that commercially available accelerators do not offer great support for sparse matrices yet. However, we would argue that sparsity support is a very popular area for research, and we will start to see these developments arrive ...
[ -1, -1, -1, -1, -1, -1, 3, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "UKqOou_wEaO", "UUWbk8VqUEH", "iclr_2022_hl9ePdHO4_s", "Nz76cHjmrP7", "vJemh6hgjy6", "YvI9rdVPbC-", "iclr_2022_hl9ePdHO4_s", "iclr_2022_hl9ePdHO4_s", "iclr_2022_hl9ePdHO4_s", "iclr_2022_hl9ePdHO4_s" ]
iclr_2022_H94a1_Pyr-6
Auto-scaling Vision Transformers without Training
This work targets automated designing and scaling of Vision Transformers (ViTs). The motivation comes from two pain spots: 1) the lack of efficient and principled methods for designing and scaling ViTs; 2) the tremendous computational cost of training ViT that is much heavier than its convolution counterpart. To tackle...
Accept (Poster)
The paper introduces As-ViT, an interesting framework for searching and scaling ViTs without training. Overall, the paper received positive reviews. On the other hand, R1 rated the paper as marginally below the threshold, raising concerns about search on small datasets and issues regarding the comparison in terms of FL...
train
[ "e8qsEwm8Mhn", "uSrvDOG91Cl", "RjQ9yXSdEJt", "mUpEohBHypc", "pJ9Cwoce6zv", "5AA-rWJAWdP", "5J7nM7XxPW", "tS7024kiIFf", "5yfk-oVVsw5" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers and AC panel,\n\nThank you again for your valuable reviews that have helped improve and revise our submission. We are happy that the contributions of our work have been recognized by reviewers eYy9 and rFuj. We also realize that the end of the rebuttal period is only one day away, and we are yet to...
[ -1, -1, -1, -1, -1, -1, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "iclr_2022_H94a1_Pyr-6", "tS7024kiIFf", "tS7024kiIFf", "5yfk-oVVsw5", "5J7nM7XxPW", "tS7024kiIFf", "iclr_2022_H94a1_Pyr-6", "iclr_2022_H94a1_Pyr-6", "iclr_2022_H94a1_Pyr-6" ]
iclr_2022_D78Go4hVcxO
How Do Vision Transformers Work?
The success of multi-head self-attentions (MSAs) for computer vision is now indisputable. However, little is known about how MSAs work. We present fundamental explanations to help better understand the nature of MSAs. In particular, we demonstrate the following properties of MSAs and Vision Transformers (ViTs): (1) MSA...
Accept (Spotlight)
The paper presents an empirical analysis of Vision Transformers - and in particular multi-headed self-attention - and ConvNets, with a focus on optimization-related properties (loss landscape, Hessian eigenvalues). The paper shows that both classes of models have their strengths and weaknesses and proposes a hybrid mod...
train
[ "9oqKZcwT1vq", "lp0Tl9T8D_", "W9fQuLrpD2i", "DAj9_WxosyW", "tlmeUqmF5U_", "IGWADGFWREs", "IuuQkDKDman", "wBY2fBpyoyd" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your insightful feedback. We address all of your concerns below. If you find our responses adequate, we would appreciate it if you consider increasing your score.\n\n---\n\n**Ⅲ-1. The visualized loss landscape in the paper is different from Fig 1 of [1].**\n\nThe difference between the loss landscap...
[ -1, -1, -1, -1, 8, 8, 5, 8 ]
[ -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "IuuQkDKDman", "wBY2fBpyoyd", "IGWADGFWREs", "tlmeUqmF5U_", "iclr_2022_D78Go4hVcxO", "iclr_2022_D78Go4hVcxO", "iclr_2022_D78Go4hVcxO", "iclr_2022_D78Go4hVcxO" ]
iclr_2022_Dzpe9C1mpiv
A Unified Wasserstein Distributional Robustness Framework for Adversarial Training
It is well-known that deep neural networks (DNNs) are susceptible to adversarial attacks, exposing a severe fragility of deep learning systems. As a result, the adversarial training (AT) method, by incorporating adversarial examples during training, represents a natural and effective approach to strengthen the robustness...
Accept (Poster)
The paper extends the previously established connection between adversarial training (AT) and Wasserstein distributional robustness (WDR) to other adversarial defense methods such as PGD-AT, TRADES and MART, and connects them to WDR. While this connection itself is not surprising given earlier works connecting AT and W...
train
[ "n0Uq8LHiJu", "-2Rfr7xvlfT", "mqZp9LUIepJ", "qfkCSh4xJnc", "48nx93Hrg1t", "iFi2pHhL0ze", "sBbQLRk0zY7", "23g41htbqJL", "d0PKWuDgvj8", "LrJOj44zoD", "C2vxwJjp7wV", "FDkGEBFr5E3", "SxuaVCruLM", "t-kfuVZ-28D", "mVc2t2pHmV", "PHAzhHSlhIQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the additional explanations and the additional results. \n\nI maintain that the discussion of the previous work by Sinha et al. 2018 is not sufficient in some parts of the paper. E.g., the authors still write \n> Different from WRM (Sinha et al., 2018) , our proposed framework is developed based on ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 3, 3 ]
[ "qfkCSh4xJnc", "n0Uq8LHiJu", "PHAzhHSlhIQ", "48nx93Hrg1t", "mVc2t2pHmV", "sBbQLRk0zY7", "t-kfuVZ-28D", "d0PKWuDgvj8", "SxuaVCruLM", "FDkGEBFr5E3", "iclr_2022_Dzpe9C1mpiv", "iclr_2022_Dzpe9C1mpiv", "iclr_2022_Dzpe9C1mpiv", "iclr_2022_Dzpe9C1mpiv", "iclr_2022_Dzpe9C1mpiv", "iclr_2022_Dzp...
iclr_2022_KBQP4A_J1K
The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization
Despite progress across a broad range of applications, Transformers have limited success in systematic generalization. The situation is especially frustrating in the case of algorithmic tasks, where they often fail to find intuitive solutions that route relevant information to the right node/operation at the right time...
Accept (Poster)
This work proposes a novel Transformer Control Flow model and achieves near-perfect accuracy on length generalization, simple arithmetic tasks, and computational depth generalization. All reviewers give positive scores. AE agrees that this work is very interesting and has great potential. It would be exciting if the a...
train
[ "HsG9tnWS424", "lQFp13tKreq", "sbdXsHEzm3X", "Ce4TOEtFjnU", "mUpvvUH7uRj", "Kwnvl6Q9qdv", "y16cylcAvks", "L01HWh_v74q", "yL0yEMJYJct", "2BO-85axb1H", "NQq67WRBsBc", "Gom-TpDDhh", "8PAcvFxXA2_" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you very much for the updated score.", "This paper addresses an issue of transformers that sometimes they fail to find solutions that are easily expressible by attention patterns. The issue is justified to be the same as the problem of learning useful control flow. The authors propose two modifications, n...
[ -1, 8, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "lQFp13tKreq", "iclr_2022_KBQP4A_J1K", "mUpvvUH7uRj", "Kwnvl6Q9qdv", "iclr_2022_KBQP4A_J1K", "iclr_2022_KBQP4A_J1K", "L01HWh_v74q", "yL0yEMJYJct", "mUpvvUH7uRj", "8PAcvFxXA2_", "Gom-TpDDhh", "lQFp13tKreq", "iclr_2022_KBQP4A_J1K" ]
iclr_2022_SYuJXrXq8tw
Sparsity Winning Twice: Better Robust Generalization from More Efficient Training
Recent studies demonstrate that deep networks, even robustified by the state-of-the-art adversarial training (AT), still suffer from large robust generalization gaps, in addition to much more expensive training costs than standard training. In this paper, we investigate this intriguing problem from a new perspective...
Accept (Poster)
This paper focuses on leveraging static and dynamic sparsity in efficient robust training. The proposed methods can significantly mitigate the robust generalization gap while retaining competitive performance (standard/robust accuracy) with substantially reduced computation budgets. The philosophy behind sounds quite i...
train
[ "xXn6vTnTj2", "L4Fyu-F8nGV", "p0vMafCkvfy", "D5TUWT3Y7WW", "0LF-M7BWBid", "662K5sW3CFZ", "1PV03pjvI5", "8AgS9CdkW_H", "63bvTgpjojV", "MlArMi0RNu1", "jlhGeb9dGaC", "ecwjJ6xi1qv", "zlVlWcZABBg", "C-kvq69kKi8", "p6DIxDRuz1", "_icokRj6eYE", "kfCynvlDHY_", "f3x942tqhn_", "iUTrlQWMYBC"...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "public", "official_reviewer", "officia...
[ " Dear Reviewer **75p4**,\n\nWe really appreciate reviewer **75p4** for increasing our score and supporting the acceptance of our paper. We are glad to see our response has addressed reviewer **75p4**'s concerns.\n\nWe are again very thankful for your time and all constructive feedback!\n\nBest wishes,\n\nAuthors",...
[ -1, 8, -1, -1, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "p0vMafCkvfy", "iclr_2022_SYuJXrXq8tw", "_icokRj6eYE", "ZlyFfFz4yxq", "ZlyFfFz4yxq", "HCQ3tZ2GTQ", "ZlyFfFz4yxq", "ZlyFfFz4yxq", "iclr_2022_SYuJXrXq8tw", "P253FOUCuJU", "P253FOUCuJU", "L4Fyu-F8nGV", "L4Fyu-F8nGV", "ZlyFfFz4yxq", "L4Fyu-F8nGV", "L4Fyu-F8nGV", "63bvTgpjojV", "ZlyFfFz...
iclr_2022_k9bx1EfHI_-
Self-Supervised Graph Neural Networks for Improved Electroencephalographic Seizure Analysis
Automated seizure detection and classification from electroencephalography (EEG) can greatly improve seizure diagnosis and treatment. However, several modeling challenges remain unaddressed in prior automated seizure detection and classification studies: (1) representing non-Euclidean data structure in EEGs, (2) accura...
Accept (Poster)
This work tackles an important clinical application. It is experimentally solid and investigates novel deep learning methodologies in a convincing way. For these reasons, this work is endorsed for publication at ICLR 2022.
train
[ "LSBovGdOS9m", "ScqlsPaQghd", "XOJHOTbjzS9", "Z8-zhipo1M", "Q7nXeOJjWkW", "KaNhtEpsUif", "K0wYhJw6DW", "T5Q3X3lWchW", "spuvy3PFjX0", "qiO9QC6vgln", "7fFZJiK170", "yPczb4mNIX1" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the review!", "The paper proposes a method to combine modeling based on graphs mapping electrode geometry and self-supervised pre-training for EEG-based seizure detection and classification. It also proposes an interpretation method using occlusion maps to the demonstrate the model's ability to lo...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "XOJHOTbjzS9", "iclr_2022_k9bx1EfHI_-", "spuvy3PFjX0", "7fFZJiK170", "7fFZJiK170", "yPczb4mNIX1", "yPczb4mNIX1", "yPczb4mNIX1", "ScqlsPaQghd", "iclr_2022_k9bx1EfHI_-", "iclr_2022_k9bx1EfHI_-", "iclr_2022_k9bx1EfHI_-" ]
iclr_2022_8c50f-DoWAu
Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme
Voice conversion is a common speech synthesis task which can be solved in different ways depending on a particular real-world scenario. The most challenging one, often referred to as one-shot many-to-many voice conversion, consists in copying the target voice from only one reference utterance in the most general case when bo...
Accept (Oral)
The paper is exceptionally well summarized by Reviewer QC5G, which is difficult to improve upon. I will save the readers the effort of reading more text (without adding more substance). The reviewers unanimously rated this paper highly. The discussion has been robust, enlightening and also has improved the revised pap...
train
[ "fL-0wzgRCBM", "bZnuEZc0ucJ", "PF2LEcoqdnS", "D0Cfgt9n4O", "SW6bgiyqT1", "xu1S_OUfKUE", "afeao2w3FH2", "SkJmpcH5hF7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper proposes a method to perform voice conversion through recent methods in Diffusion Based Probability Modeling (DPM) through Scochastic Differential Equations (SDE), building upon recent work Grad-TTS and Glow-TTS. The architecture is an encoder-decoder setup, trained separately, described in upcoming par...
[ 8, -1, -1, -1, -1, -1, 8, 10 ]
[ 4, -1, -1, -1, -1, -1, 4, 3 ]
[ "iclr_2022_8c50f-DoWAu", "afeao2w3FH2", "fL-0wzgRCBM", "fL-0wzgRCBM", "afeao2w3FH2", "SkJmpcH5hF7", "iclr_2022_8c50f-DoWAu", "iclr_2022_8c50f-DoWAu" ]
iclr_2022_hqkhcFHOeKD
Learning Towards The Largest Margins
One of the main challenges for feature representation in deep learning-based classification is the design of appropriate loss functions that exhibit strong discriminative power. The classical softmax loss does not explicitly encourage discriminative learning of features. A popular direction of research is to incorporat...
Accept (Poster)
This work presents a principled objective function for large margin learning. Specifically, it introduces class margin and sample margin, both of which it aims to promote. It also derives a generalized margin softmax loss, which it uses to draw general conclusions about existing margin-based losses. The effectiveness of the pr...
train
[ "WoW-JGVQeo", "jZhGDL2HHxR", "-PCKwHWu-Q6", "sVU0qG_fTKX", "Ak2hWLPeOeU", "2C1UGqBCZXE", "u_HjUPukJAU", "LXcTyX5Rgwu", "BtZiYI3WegH", "ZQcT2_TIgu-", "p0aeniOllJB" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " This paper makes a good response and clarifies the previous comments.", "This paper analyzes large-margin based loss functions. The authors formulate two types of margins, class- and sample-margins, and then analyze lower bounds of various margin-based losses in a unified framework to show that those losses are...
[ -1, 6, 8, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, 4, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "sVU0qG_fTKX", "iclr_2022_hqkhcFHOeKD", "iclr_2022_hqkhcFHOeKD", "p0aeniOllJB", "jZhGDL2HHxR", "-PCKwHWu-Q6", "jZhGDL2HHxR", "jZhGDL2HHxR", "ZQcT2_TIgu-", "iclr_2022_hqkhcFHOeKD", "iclr_2022_hqkhcFHOeKD" ]
iclr_2022_vwj6aUeocyf
Long Expressive Memory for Sequence Modeling
We propose a novel method called Long Expressive Memory (LEM) for learning long-term sequential dependencies. LEM is gradient-based, it can efficiently process sequential tasks with very long-term dependencies, and it is sufficiently expressive to be able to learn complicated input-output maps. To derive LEM, we consid...
Accept (Spotlight)
The paper proposes a new recurrent architecture based on discretization of ODEs which allow for learning multi-scale representations and help with the vanishing gradient problem. The reviewers all agree that this architecture is novel and that the paper provides substantial theoretical and empirical evidence. A strong accept.
train
[ "q03MNrypm8u", "yf4x4iabKdW", "RV0N_GTYsjj", "dCEBtnzAzi", "ut7CRsms1y0", "2LDU7ZWWIOk", "Alkk8oUk5y_", "bQfTbn_TdDQ", "zBTkOpYr5gu", "-Z5iRAkzm6", "VYnwozJJKw", "rYSv6K043sb", "PIcGGYn4ZyK", "wm8E66Y4Y9W", "EajRsw2FAUU", "kv8lAl8IkQU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper introduces a model class of neural networks whose architecture is defined by a circuit that computes a discretized step of a pair of multi-scale ODEs. The key novelty is on the multi-scale aspect of this system. In terms of representation capability, this class can represent LSTMs (and vice-versa) and c...
[ 6, 6, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ 3, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "iclr_2022_vwj6aUeocyf", "iclr_2022_vwj6aUeocyf", "-Z5iRAkzm6", "iclr_2022_vwj6aUeocyf", "2LDU7ZWWIOk", "Alkk8oUk5y_", "PIcGGYn4ZyK", "zBTkOpYr5gu", "q03MNrypm8u", "VYnwozJJKw", "rYSv6K043sb", "yf4x4iabKdW", "dCEBtnzAzi", "kv8lAl8IkQU", "iclr_2022_vwj6aUeocyf", "iclr_2022_vwj6aUeocyf" ...
iclr_2022_7QfLW-XZTl
Energy-Inspired Molecular Conformation Optimization
This paper studies an important problem in computational chemistry: predicting a molecule's spatial atom arrangements, or a molecular conformation. We propose a neural energy minimization formulation that casts the prediction problem into an unrolled optimization process, where a neural network is parametrized to learn...
Accept (Poster)
All reviewers except one agreed that this paper should be accepted because of the strong author response during the rebuttal phase. Specifically, the reviewers appreciated the new ablation study showing that improvements are not due to minor architectural changes, the new experiment on the number of time steps required ...
train
[ "cTfqcx44mf-", "Dx39aJDBgnv", "DAA2PRm_qYD", "UcIwZX81uzB", "RVbxGCjCS8", "QgaApnfQksY", "yHsli-q3Ve", "Bjc-ETQ5Rnd", "sV8WME8M87s", "W82GzscdKnb", "WIWUMDd-wH7", "Nvq7NtizUbm", "tN5ws_mHlpi", "W2wylvvd6ca", "MuqicKU9kK", "N8_4U6LvO9K", "FFGud5Ac5OQ", "pg-tAaEcQEt", "Dnw_neY7lcM"...
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_r...
[ " Hope we have clarified the misunderstanding here and please let us know if you have further questions.", " Thank you for raising the score, and we appreciate your engagement throughout the rebuttal period. They are very helpful!", "The authors present an equivariant graph network approach for optimizing and g...
[ -1, -1, 6, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3 ]
[ -1, -1, 4, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "W82GzscdKnb", "UcIwZX81uzB", "iclr_2022_7QfLW-XZTl", "DAA2PRm_qYD", "14ZJvcxksRF", "iclr_2022_7QfLW-XZTl", "Crw2nbPVga9", "vwMPeW6nlVP", "OuRW6DT7Dg4", "WIWUMDd-wH7", "Nvq7NtizUbm", "tN5ws_mHlpi", "T7MAgrwSyqJ", "nBmgQG3GdFw", "iclr_2022_7QfLW-XZTl", "OuRW6DT7Dg4", "T7MAgrwSyqJ", ...
iclr_2022_VFBjuF8HEp
Learning Fast Samplers for Diffusion Models by Differentiating Through Sample Quality
Diffusion models have emerged as an expressive family of generative models rivaling GANs in sample quality and autoregressive models in likelihood scores. Standard diffusion models typically require hundreds of forward passes through the model to generate a single high-fidelity sample. We introduce Differentiable Diffu...
Accept (Poster)
The paper tackles a very interesting problem in the context of diffusion-based generative models and provides empirical improvements. Pre-rebuttal, reviewers' main concerns lay in the motivation and clarification of the method, while after rebuttal, all reviewers were satisfied with the response and gave positive scores. The aut...
test
[ "AhpKZ_04zok", "qQVi-V6p_dY", "1LjPA5EU0Je", "R-bZ3lPtRNJ", "qDoME97JrSq", "NWSqOgyF9nl", "e3Y3hQKA_ap", "bJZ5C-KfUw9", "9sSoQt2aZUt", "kdhZIvK9lp", "ZFT5GNOCIRN", "0t2BFEvbP1h" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples, but they are not efficient at inference. This paper proposes Differentiable Diffusion Sampler Search (DDSS) to generate high-quality samples while using fewer inference steps than DDPM. The proposed approach uses reparametrization ...
[ 6, 6, 8, -1, 6, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 3, -1, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2022_VFBjuF8HEp", "iclr_2022_VFBjuF8HEp", "iclr_2022_VFBjuF8HEp", "kdhZIvK9lp", "iclr_2022_VFBjuF8HEp", "0t2BFEvbP1h", "bJZ5C-KfUw9", "iclr_2022_VFBjuF8HEp", "qQVi-V6p_dY", "AhpKZ_04zok", "1LjPA5EU0Je", "qDoME97JrSq" ]
iclr_2022_SLz5sZjacp
Evaluating Disentanglement of Structured Representations
We introduce the first metric for evaluating disentanglement at individual hierarchy levels of a structured latent representation. Applied to object-centric generative models, this offers a systematic, unified approach to evaluating (i) object separation between latent slots (ii) disentanglement of object properties in...
Accept (Poster)
This paper presents a new metric for disentanglement of learned representations, extending a prominent framework (DCI) to support object-centric structured representations. The reviewers agree on the importance of the question and find the metric a valuable contribution for addressing this problem. In the discussion, ...
train
[ "edHfkpnq8Vb", "zaA1Bh75xSu", "8uPe98YReIN", "JO5Ib9bqw7V", "jhwUF3tw1Ee", "wiYXGjyNzpb", "VveCBjTVEuM", "ZEg2ZgQh89G", "G9ci1-m_46e", "314M2cXg5w9", "xAN_S7WG8hl", "5hppQ99zg4v", "qhf4torW1eO", "eBATQI0-EjQ", "zBWbHlVcQr3", "pmUvpTxWLvu", "lxzVqgPcske", "5rMEuYA0wwV" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thanks for increasing your score. We will adapt this claim in future versions", " Thank you for your clarification, I agree that it makes sense to keep your EM probing algorithm as a technical contribution. \n\nWhile I think a direct comparison to Sinkhorn-based matching would be valuable, it is not the highest...
[ -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "8uPe98YReIN", "ZEg2ZgQh89G", "wiYXGjyNzpb", "iclr_2022_SLz5sZjacp", "VveCBjTVEuM", "G9ci1-m_46e", "314M2cXg5w9", "5hppQ99zg4v", "xAN_S7WG8hl", "eBATQI0-EjQ", "JO5Ib9bqw7V", "zBWbHlVcQr3", "iclr_2022_SLz5sZjacp", "JO5Ib9bqw7V", "5rMEuYA0wwV", "lxzVqgPcske", "iclr_2022_SLz5sZjacp", ...
iclr_2022_NdOoQnYPj_
BAM: Bayes with Adaptive Memory
Online learning via Bayes' theorem allows new data to be continuously integrated into an agent's current beliefs. However, a naive application of Bayesian methods in non-stationary environments leads to slow adaptation and results in state estimates that may converge confidently to the wrong parameter value. A common s...
Accept (Poster)
The article introduces a Bayesian approach for online learning in non-stationary environments. The approach, which bears similarities with weighted likelihood estimation methods, associates a binary weight with each past observation, indicating whether this observation should be included when computing the posterior. The w...
train
[ "mwmzj4iFeeB", "Z5kJfA_C3Dk", "sqR1Rn_G9Tz", "a3Lr-dj7wtT", "ftmrQYJ8MpA", "bna7xPYN9L3", "47a1cXDLYYr", "2_klNjPLy8P", "j4Xk9dVtAvn", "XiJnXFTXdhmr", "IJkmf8bT9Ons", "LwFxre6pLV8", "0XEgCA3zGO1", "kkBdUwwVv8F", "4s825UcQCGK" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the further clarification.\n\n1. Well, for me, the proposed framework, although well-executed, and conceptually attractive, is not that really surprising. I'm not saying that I have actually seen similar previous works before. My point is, I would have expected more in order for a conceptual framework ...
[ -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 8 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "sqR1Rn_G9Tz", "iclr_2022_NdOoQnYPj_", "ftmrQYJ8MpA", "bna7xPYN9L3", "2_klNjPLy8P", "47a1cXDLYYr", "j4Xk9dVtAvn", "Z5kJfA_C3Dk", "0XEgCA3zGO1", "4s825UcQCGK", "kkBdUwwVv8F", "iclr_2022_NdOoQnYPj_", "iclr_2022_NdOoQnYPj_", "iclr_2022_NdOoQnYPj_", "iclr_2022_NdOoQnYPj_" ]
iclr_2022_CAjxVodl_v
Generalized Decision Transformer for Offline Hindsight Information Matching
How to extract as much learning signal as possible from each trajectory has been a key problem in reinforcement learning (RL), where sample inefficiency has posed serious challenges for practical applications. Recent works have shown that using expressive policy function approximators and conditioning on future trajectory inf...
Accept (Spotlight)
The paper describes a framework that unifies several previous lines under hindsight information matching. Within that framework, the paper also describes variants of the decision transformer (DT) called categorical DT and unsupervised DT. The rebuttal was quite effective and the reviewers confirmed that their concern...
test
[ "C80M2mQYDN", "j9jlLhaOrel", "3kERdQbAkkW", "uOXCJFR9fFy", "hfLaNd6YHv", "qppxF0xQLv", "0yZKII9ubaw", "4bP0NzoZm3E", "hH-K1JBSeun", "uNmw9AZkmYe", "ggBaszpLdpA", "UhlZ5Kxvh-a", "E738JnyIj4z", "NUTGOBEs5EF", "W6KHuKFn9v8", "s2zp99RKMwb", "xJ9s8JuKKLo", "vV_TSSIUb4a" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ " Thanks for the clarifications. I have edited my original review and score. (I am just leaving this note in case the review edit does not trigger an email notification.)", "This paper presents variants of the Decision Transformer that can condition on desired state or feature distributions instead of scalars.\nA...
[ -1, 6, -1, -1, 8, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, -1, -1, 4, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "3kERdQbAkkW", "iclr_2022_CAjxVodl_v", "uOXCJFR9fFy", "qppxF0xQLv", "iclr_2022_CAjxVodl_v", "0yZKII9ubaw", "W6KHuKFn9v8", "hH-K1JBSeun", "ggBaszpLdpA", "iclr_2022_CAjxVodl_v", "xJ9s8JuKKLo", "E738JnyIj4z", "iclr_2022_CAjxVodl_v", "j9jlLhaOrel", "NUTGOBEs5EF", "xJ9s8JuKKLo", "uNmw9AZk...
iclr_2022_fXHl76nO2AZ
Gradient Importance Learning for Incomplete Observations
Though recent works have developed methods that can generate estimates (or imputations) of the missing entries in a dataset to facilitate downstream analysis, most depend on assumptions that may not align with real-world applications and could suffer from poor performance in subsequent tasks such as classification. Thi...
Accept (Poster)
The paper proposed an imputation-free method to handle missing data by learning an input encoding matrix using RL, with the prediction error as the reward/penalty signal. Reviewers appreciate the interesting setup where RL is used to deal with missing data, and the method being imputation-free. Three out of four reviewers (...
val
[ "jMKp4DiOmqY", "PptLLLazU0X", "bwFy0icFt32", "TTCfjMqo10f", "ahVI29gOuId", "hfG87SKDVhC", "rFG7OWBYT4", "qpgR7pdktLh", "qjBnMs02tu4", "ST9l_vTX_2g", "2hg7l_JS3Mbh", "em8PT0tgIx", "PggtJS0pQ7by", "Jh8WXPRIuzs", "WvvcHzrxGl", "jGd5rfgjIB2", "mQHaeZUBnKO" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate the timely responses from the reviewer, as well as the efforts reading through our replies and following up with insightful comments. As suggested by the reviewer, the results and discussions we made in the response earlier are now included in to Appendix G of the paper.", "The paper propo...
[ -1, 6, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, 2, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "bwFy0icFt32", "iclr_2022_fXHl76nO2AZ", "ahVI29gOuId", "qpgR7pdktLh", "hfG87SKDVhC", "em8PT0tgIx", "Jh8WXPRIuzs", "qjBnMs02tu4", "iclr_2022_fXHl76nO2AZ", "PptLLLazU0X", "qjBnMs02tu4", "PptLLLazU0X", "qjBnMs02tu4", "mQHaeZUBnKO", "jGd5rfgjIB2", "iclr_2022_fXHl76nO2AZ", "iclr_2022_fXHl...
iclr_2022_06Wy2BtxXrz
Learning Scenario Representation for Solving Two-stage Stochastic Integer Programs
Many practical combinatorial optimization problems under uncertainty can be modeled as stochastic integer programs (SIPs), which are extremely challenging to solve due to the high complexity. To solve two-stage SIPs efficiently, we propose a conditional variational autoencoder (CVAE) based method to learn scenario repr...
Accept (Poster)
This paper presents a conditional variational autoencoder (CVAE) approach to solving instances of stochastic integer programs (SIPs) using graph convolutional networks. Experiments show that their method achieves high-quality solutions with high performance. It holds merit as an interesting novel application of CVAEs t...
val
[ "vm0L8esCY3g", "aU8A0Vaj1D", "tmIybrvI5l", "Wpy_apo_eGf", "Tie03KeBa3H", "gbENUrmDJCb", "MixU0HI8HME", "LTT5t2IGFiV", "Jy0R33USq5L", "Kx1R77xW7s", "BwV-xWcRp16", "9Q1OY9aY0yq", "750NhspJiRG", "xt6zh_afAa", "2Dy3oW-yTye", "DG_aDQfSpwH", "t-1qVrDTBz", "oaUNA6nik3", "hseVyDxV2SF", ...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", ...
[ " We greatly thank the reviewer for increasing the score!", "This paper studies the problem of generating representative scenarios for two-stage stochastic integer programs in which the parameters could be either static (referred to as context) or stochastic (forming a space of scenarios). The proposed method...
[ -1, 6, -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ -1, 3, -1, 3, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "aU8A0Vaj1D", "iclr_2022_06Wy2BtxXrz", "Wpy_apo_eGf", "iclr_2022_06Wy2BtxXrz", "gbENUrmDJCb", "iclr_2022_06Wy2BtxXrz", "LTT5t2IGFiV", "Kx1R77xW7s", "9Q1OY9aY0yq", "750NhspJiRG", "xt6zh_afAa", "oaUNA6nik3", "t-1qVrDTBz", "DG_aDQfSpwH", "oaUNA6nik3", "aU8A0Vaj1D", "aU8A0Vaj1D", "aU8A...
iclr_2022_6vkzF28Hur8
Training Transition Policies via Distribution Matching for Complex Tasks
Humans decompose novel complex tasks into simpler ones to exploit previously learned skills. Analogously, hierarchical reinforcement learning seeks to leverage lower-level policies for simple tasks to solve complex ones. However, because each lower-level policy induces a different distribution of states, transitioning ...
Accept (Poster)
Description of paper content: The paper proposes a strategy to train a “transition policy” that can connect two pre-trained policies. The transition policy tries to reach state-action pairs that are within the occupancy distribution of the second policy using Inverse RL. The technique was evaluated on robot manipulati...
train
[ "FS28I-7p39E", "x-4Hn7kAjpw", "cSjDaYy12H", "Ei-cN7O5lhW", "ud6nx3FObGK", "Z27A42JZeYu", "LU8P5Xw2e1" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your answers to my questions. Based on my own appreciation of it and on the other reviewers' feedback, I hope your paper will get accepted.", " Thank you for your review.\n\n(R1) You characterise your technique ... Can you please clarify?\\\n=> Our point is that the inverse RL techniques, while th...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 3, 2, 2 ]
[ "x-4Hn7kAjpw", "LU8P5Xw2e1", "Z27A42JZeYu", "ud6nx3FObGK", "iclr_2022_6vkzF28Hur8", "iclr_2022_6vkzF28Hur8", "iclr_2022_6vkzF28Hur8" ]
iclr_2022_3tbDrs77LJ5
Large Learning Rate Tames Homogeneity: Convergence and Balancing Effect
Recent empirical advances show that training deep models with a large learning rate often improves generalization performance. However, theoretical justifications for the benefits of a large learning rate are highly limited, due to challenges in analysis. In this paper, we consider using Gradient Descent (GD) with a large l...
Accept (Poster)
The paper studies gradient descent for matrix factorization with a learning rate that is large relative to a certain notion of the scale of the problem. In particular, they show that the use of large learning rates leads to balancing between the two factors in the factorization. The discussion between the authors ...
train
[ "Fn2hgkLKd3E", "beL3Cgw52xM", "2Gi4MS3vBV4", "3CmrT266tLh", "7hNGZm3y_1-", "WUIrJicgOVp", "nHj3luAI1Gg", "Z_cniMT6ui", "btSaDN1IL-", "JBZSmkzxZ1-", "rAWhtb7zgTF", "paSRc-BP0JC", "Vb_DRQWKJ54", "QqXz7K70GR", "jR_QkyEZx16", "lcE9gMYk3c_", "YKehPUgSbr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ " Thank you for the clarification. I have increased my score.", "This paper studies the properties of gradient descent with large learning rate in the matrix factorization problem. The goal is to understand when gradient descent converges to a global minimum where the two factors are roughly balanced in norm, whi...
[ -1, 8, -1, 6, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 8 ]
[ -1, 3, -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4 ]
[ "WUIrJicgOVp", "iclr_2022_3tbDrs77LJ5", "7hNGZm3y_1-", "iclr_2022_3tbDrs77LJ5", "nHj3luAI1Gg", "Z_cniMT6ui", "rAWhtb7zgTF", "JBZSmkzxZ1-", "iclr_2022_3tbDrs77LJ5", "beL3Cgw52xM", "3CmrT266tLh", "Vb_DRQWKJ54", "QqXz7K70GR", "btSaDN1IL-", "YKehPUgSbr", "iclr_2022_3tbDrs77LJ5", "iclr_20...
iclr_2022__CfpJazzXT2
F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization
Neural network quantization is a promising compression technique to reduce memory footprint and save energy consumption, potentially leading to real-time inference. However, there is a performance gap between quantized and full-precision models. To reduce it, existing quantization approaches require high-precision INT3...
Accept (Oral)
This paper proposes an approach for 8-bit fixed point training of NNs, based on a careful analysis of quantization error in fixed-point methods. They present convincing and thorough empirical results in addition to a detailed analysis providing insights about their method. Reviews for this paper were quite split. One r...
train
[ "-LyMvXAx3xh", "p8Ji-rJPoJ", "4coD0kxlt2O", "j1OzrgN6qwP", "U8KodgW74y", "m6QY7F9MWET", "l3ZzzuR8sZ", "qcI9Ce8oxRc", "_j7IhaiJAp", "c79rwEWwL4B", "Y0zlH26NmIH", "YgnvNEtck1_", "5PLrqykzi-", "WWT0HLWC3in", "52WwCTCzidu", "afd4gHw8t-x", "la8INPwFUwW", "ZfPn48NoJH", "wj53ZmqRqh2", ...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "...
[ " Dear Reviewer xQht,\n\nWe hope the additional information about XNNPACK provided in our latest response could help answer why XNNPACK is a suitable library for testing the integer quantization models. \n*XNNPACK is designed for speeding up not only floating-point models, but also integer quantized models.* \n...
[ -1, -1, -1, -1, -1, -1, -1, 10, -1, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4 ]
[ "4coD0kxlt2O", "U8KodgW74y", "j1OzrgN6qwP", "l3ZzzuR8sZ", "c79rwEWwL4B", "Y0zlH26NmIH", "afd4gHw8t-x", "iclr_2022__CfpJazzXT2", "vGt_KMAxxo", "la8INPwFUwW", "YgnvNEtck1_", "0pqOSH93LZ3", "WWT0HLWC3in", "x1hfSxwATN0", "iclr_2022__CfpJazzXT2", "la8INPwFUwW", "ZfPn48NoJH", "OuQS6LQLfi...
iclr_2022_OM_lYiHXiCL
AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value Analysis
Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks. A backdoor can be embedded in the target DNNs by injecting a backdoor trigger into the training examples, which can cause the target DNNs to misclassify an input attached with the backdoor trigger. Recent backdoor detection methods o...
Accept (Poster)
This work proposed to detect backdoors in a black-box manner, where only the model output is accessible. Most reviewers think it is a valuable task, and this work provides a novel perspective of using adversarial perturbation to diagnose the backdoor. Some theoretical analyses for linear models and kernel models are ...
train
[ "h3qQryEipoS", "WY4F13X_ZFE", "I5OzrREGjXj", "iT_m2Gljttk", "l6UUu5BHUvU" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer" ]
[ "This paper proposed an adversarial extreme value analysis (AEVA) framework to detect backdoors in black-box neural networks.\nSpecifically, they first obtained a new (upper bound) backdoor detection formulation by using convex relaxation. With linear model assumption and mean squared error loss, they showed that t...
[ 6, 8, 8, -1, 6 ]
[ 4, 4, 4, -1, 4 ]
[ "iclr_2022_OM_lYiHXiCL", "iclr_2022_OM_lYiHXiCL", "iclr_2022_OM_lYiHXiCL", "iclr_2022_OM_lYiHXiCL", "iclr_2022_OM_lYiHXiCL" ]
iclr_2022_IvepFxYRDG
Sample Efficient Stochastic Policy Extragradient Algorithm for Zero-Sum Markov Game
The two-player zero-sum Markov game is a fundamental problem in reinforcement learning and game theory. Although many algorithms have been proposed for solving zero-sum Markov games in the existing literature, many of them either require full knowledge of the environment or are not sample-efficient. In this paper, we dev...
Accept (Poster)
This work presents a new sample-based policy extragradient algorithm for finding an approximate Nash equilibrium in tabular two-player zero-sum Markov games with improved sample complexity guarantees. While originally the reviewers had concerns regarding the novelty and technical difficulty of the paper, these were su...
train
[ "a-ejM448OxA", "i9OxZJVhaz", "aBu4iV7_SMa", "nPwLW9q3pec", "ICUmtJmH40y", "FiQU0uz3k1V", "3PV-6r8CL7Y", "YBMKgqk79pW", "2CCP6jb1VQd", "sEYq3mQ6yQt", "dabzZzfpEx9", "PM-HnhS4tmd", "wPysAgAM2xg" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response. I have read the discussions and have decided to keep my score as it is.", "This paper theoretically studies gradient-based algorithms for two-player zero-sum Markov Games (MGs), an important problem in multi-agent reinforcement learning. The main contribution of ...
[ -1, 6, 6, 6, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "sEYq3mQ6yQt", "iclr_2022_IvepFxYRDG", "iclr_2022_IvepFxYRDG", "iclr_2022_IvepFxYRDG", "FiQU0uz3k1V", "dabzZzfpEx9", "i9OxZJVhaz", "wPysAgAM2xg", "nPwLW9q3pec", "PM-HnhS4tmd", "aBu4iV7_SMa", "iclr_2022_IvepFxYRDG", "iclr_2022_IvepFxYRDG" ]
iclr_2022_qwULHx9zld
Random matrices in service of ML footprint: ternary random features with no performance loss
In this article, we investigate the spectral behavior of random features kernel matrices of the type ${\bf K} = \mathbb{E}_{{\bf w}} \left[\sigma\left({\bf w}^{\sf T}{\bf x}_i\right)\sigma\left({\bf w}^{\sf T}{\bf x}_j\right)\right]_{i,j=1}^n$, with nonlinear function $\sigma(\cdot)$, data ${\bf x}_1, \ldots, {\bf x}_n...
Accept (Poster)
The reviewers were overall quite happy after the rebuttal phase, in which the authors considerably improved the presentation quality and addressed reviewer concerns, and they recommended acceptance. The reviewers agreed that while the theory was short and relied on various possibly restrictive assumptions and maybe was larg...
val
[ "vZ-0bxfqp5x", "uNRyOpgsEpv", "OrFbW3nOd9", "JtEbDYflsZJ", "tAp0T7IMHIO", "Q_y7fPad6pe", "MJik067s2u", "1Zdu95wbgZx", "c-90PpwJUmT", "qrTqq2xIOyd" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for clarification on the Nystrom method. Viewing floating point as quantization of real numbers, we can then run the Nystrom method with 16-bit floats and consider it as Quantized Nystrom 16 bits. We will add that in the final version of the paper.", "The authors present a resource-efficient a...
[ -1, 8, -1, 6, -1, -1, -1, -1, 8, 6 ]
[ -1, 2, -1, 2, -1, -1, -1, -1, 3, 3 ]
[ "OrFbW3nOd9", "iclr_2022_qwULHx9zld", "MJik067s2u", "iclr_2022_qwULHx9zld", "JtEbDYflsZJ", "qrTqq2xIOyd", "c-90PpwJUmT", "uNRyOpgsEpv", "iclr_2022_qwULHx9zld", "iclr_2022_qwULHx9zld" ]
iclr_2022_T0GpzBQ1Fg6
Step-unrolled Denoising Autoencoders for Text Generation
In this paper we propose a new generative model of text, Step-unrolled Denoising Autoencoder (SUNDAE), that does not rely on autoregressive models. Similarly to denoising diffusion techniques, SUNDAE is repeatedly applied on a sequence of tokens, starting from random inputs and improving them each time until convergenc...
Accept (Poster)
The paper introduces a simple technique to improve non-autoregressive generation by training the model to reconstruct model-perturbed inputs in addition to inputs perturbed by a fixed noise source. Despite interest in the paper, we were worried about a number of aspects missing from section 3. During the rebuttal pha...
train
[ "GH2PlDSpOOO", "Ukvo3GeM-lw", "TV06z_Jex32", "AMphNtgdRg7", "WkxsAkcWC3V", "WTGTLRaA16o", "uAHc24-Fnt", "XPztnIeMh1h", "Z8mQ-4b1OMP", "BdAanJi8Zx5", "oQ-db_w1Qhs", "B_Qr5Wth00e" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for making all the changes to their draft. Most of my questions/concerns have been addressed and I am increasing my score to 8. \n\n> distillation makes the training pipeline complicated and resource-consuming in practice because it requires sequentially decoding a pre-trained AR...
[ -1, 8, 6, -1, -1, -1, -1, -1, -1, -1, 5, 6 ]
[ -1, 3, 4, -1, -1, -1, -1, -1, -1, -1, 4, 3 ]
[ "BdAanJi8Zx5", "iclr_2022_T0GpzBQ1Fg6", "iclr_2022_T0GpzBQ1Fg6", "WTGTLRaA16o", "iclr_2022_T0GpzBQ1Fg6", "uAHc24-Fnt", "B_Qr5Wth00e", "oQ-db_w1Qhs", "TV06z_Jex32", "Ukvo3GeM-lw", "iclr_2022_T0GpzBQ1Fg6", "iclr_2022_T0GpzBQ1Fg6" ]
iclr_2022_uVXEKeqJbNa
Stiffness-aware neural network for learning Hamiltonian systems
We propose stiffness-aware neural network (SANN), a new method for learning Hamiltonian dynamical systems from data. SANN identifies and splits the training data into stiff and nonstiff portions based on a stiffness-aware index, a simple, yet effective metric we introduce to quantify the stiffness of the dynamical syst...
Accept (Poster)
This paper introduces the Stiffness-aware neural network (SANN) for improving numerical stability in Hamiltonian neural networks. To this end, the authors introduce the stiffness-aware index (SAI) to classify time intervals into stiff and non-stiff portions, and propose to adapt the integration scheme accordingly. The...
train
[ "4wfL3lM1-5", "t58voUXg6ys", "1GP8FgzdUd", "NlpmGBijzq7", "7VBHgaArw9H", "CJ9MJsVlTZ", "P-Kz10lDtEJ", "zx7fxY95Tf7", "RZpfzeNHKQF", "S_WN9ytlrdF", "0Z7ld3rAEiA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This study proposes the stiffness-aware index (SAI) for an ordinary differential equation and the training strategy that uses samples with large SAIs more frequently. A neural network has an implicit bias toward learning smooth functions. Only a limited portion of data obtained from a stiff system exhibits a rapid ...
[ 6, 8, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ 4, 4, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_uVXEKeqJbNa", "iclr_2022_uVXEKeqJbNa", "NlpmGBijzq7", "7VBHgaArw9H", "CJ9MJsVlTZ", "4wfL3lM1-5", "0Z7ld3rAEiA", "S_WN9ytlrdF", "t58voUXg6ys", "iclr_2022_uVXEKeqJbNa", "iclr_2022_uVXEKeqJbNa" ]
iclr_2022_6sh3pIzKS-
Chemical-Reaction-Aware Molecule Representation Learning
Molecule representation learning (MRL) methods aim to embed molecules into a real vector space. However, existing SMILES-based (Simplified Molecular-Input Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure informati...
Accept (Poster)
This paper uses chemical reaction data as a means to help train molecule embeddings, by requiring embeddings to satisfy known reaction equations. The idea is nice and clear, and the paper includes strong empirical evaluation. All four reviewers agreed the paper could be accepted, with two of them raising their scores a...
train
[ "d2oTPdyrP-s", "3bbIUvZqFzJ", "Ny2atNaifO4", "CtYe4Ykcr7N", "f2PrYm2Vup", "OOOIFAdO9Q2", "IfopKk7wAOn", "NSvec6iZS8o", "z1oNEqKJJV6", "IT5mhVbagle", "UVUWILuE-g-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you for the author's clarification on the definition of reaction center, it is very clear now. Thank you for the response to other questions too. After reading the author's response and other reviewers' comments I decided to keep my original score. ", "The paper proposes a molecule representation learning...
[ -1, 8, -1, -1, 8, -1, -1, -1, -1, 6, 6 ]
[ -1, 4, -1, -1, 5, -1, -1, -1, -1, 3, 4 ]
[ "IT5mhVbagle", "iclr_2022_6sh3pIzKS-", "z1oNEqKJJV6", "NSvec6iZS8o", "iclr_2022_6sh3pIzKS-", "UVUWILuE-g-", "IT5mhVbagle", "f2PrYm2Vup", "3bbIUvZqFzJ", "iclr_2022_6sh3pIzKS-", "iclr_2022_6sh3pIzKS-" ]
iclr_2022_RriDjddCLN
Language-driven Semantic Segmentation
We present LSeg, a novel model for language-driven semantic image segmentation. LSeg uses a text encoder to compute embeddings of descriptive input labels (e.g., ``grass'' or ``building'') together with a transformer-based image encoder that computes dense per-pixel embeddings of the input image. The image encoder is t...
Accept (Poster)
The paper presents an approach to semantic segmentation based on text embedding of class labels. This enables zero-shot semantic segmentation with class labels that were not seen during training. I appreciate the new ablation against a ResNet-101 backbone. I don't find the similarity with CLIP substantial, and I recomm...
train
[ "K5W-HTqLRy5", "QXh6IPIHiU-", "I1oQ4SB66_", "cxUI95AG4GU", "_HfsjUADQq8", "X9NVJUCwlV6", "8Rve84Cs1iF", "Ad924kfemZ", "-Q3MIRmZhtD", "5oTXIEP4OhE", "1qv0PAS4URv", "JH43t6vFkeb", "epv3FGgBY_r", "wlfMg7wnXAN", "tfd-F2TiAOG" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ " Dear Reviewer u2M8,\n\nThanks for your time and patience. When you get a chance to look at our updated response, we hope we have addressed your comments satisfactorily. And we would like to let you know that we are willing and happy to answer your additional comments if you have any.\n", " Dear Reviewer Cogy,\n...
[ -1, -1, -1, -1, 5, -1, -1, -1, -1, -1, -1, -1, 8, -1, 6 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, 3, -1, 3 ]
[ "cxUI95AG4GU", "I1oQ4SB66_", "X9NVJUCwlV6", "_HfsjUADQq8", "iclr_2022_RriDjddCLN", "8Rve84Cs1iF", "tfd-F2TiAOG", "_HfsjUADQq8", "iclr_2022_RriDjddCLN", "_HfsjUADQq8", "tfd-F2TiAOG", "_HfsjUADQq8", "iclr_2022_RriDjddCLN", "epv3FGgBY_r", "iclr_2022_RriDjddCLN" ]
iclr_2022_nkaba3ND7B5
Autonomous Reinforcement Learning: Formalism and Benchmarking
Reinforcement learning (RL) provides a naturalistic framing for learning through trial and error, which is appealing both because of its simplicity and effectiveness and because of its resemblance to how humans and animals acquire skills through experience. However, real-world embodied learning, such as that performed ...
Accept (Poster)
This paper formalizes the setting where an autonomous RL agent operates with zero or very few resets, and provides a novel benchmark for this setting with diverse environments ranging from simple manipulation to complex manipulation/locomotion. The paper then uses this benchmark to analyze current methods and provide i...
test
[ "8ctihDaLkMx", "QXosEgF6017", "8qjbkh_lXeRH", "7RfjZtWZGyB", "au2ToZE9kud", "y1gOLFtjFC-", "-puk46y_Rno", "R8nWcBwNn94", "VSQhlduKff4", "A1pdAB5w-4i", "jcjoAYOX9C" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " My concern about the actual framing of the problem against the existing definition of continual learning vs autonomous RL is still there.\n\nIn fact the authors say that episodic lifetimes \"implicitly necessitates human intervention\". It does not seem the case in general - in theory this might just be automatic.\n\nT...
[ -1, -1, -1, -1, -1, -1, -1, 8, 5, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "au2ToZE9kud", "y1gOLFtjFC-", "7RfjZtWZGyB", "jcjoAYOX9C", "VSQhlduKff4", "A1pdAB5w-4i", "R8nWcBwNn94", "iclr_2022_nkaba3ND7B5", "iclr_2022_nkaba3ND7B5", "iclr_2022_nkaba3ND7B5", "iclr_2022_nkaba3ND7B5" ]
iclr_2022_BnQhMqDfcKG
Probabilistic Implicit Scene Completion
We propose a probabilistic shape completion method extended to the continuous geometry of large-scale 3D scenes. Real-world scans of 3D scenes suffer from a considerable amount of missing data cluttered with unsegmented objects. The problem of shape completion is inherently ill-posed, and high-quality result requires s...
Accept (Spotlight)
This paper introduced a probabilistic extension to a pipeline for 3D scene geometry reconstruction from large-scale point clouds. All reviewers recognized the significance of the proposed approach and praised the simplicity of deriving a probabilistic version of Generative Cellular Automata that performs well in a num...
train
[ "S2kCAd_saQc", "-0Ae41yE6a", "y02uMIEUZwM", "NIu9iOazV-", "EaZ2Vv20HGr", "0CMiMMdDEyB", "bgUv44e8frB", "CxFBCzz3GFA", "5BudqK9Q_i3", "6SrZlpQxGUe", "2mAe5Osascy", "0dwlTRzWBP_", "J18x--B-Xi", "x4yCzPqtsQW", "eqhM0GrEgtb", "boDs3FxxIrX", "M6_2eYO1waR", "eLG6E3Uzrkx", "bP9bBHxqbL",...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", ...
[ " Thanks for the explanations, they answered my questions. After reading the authors answers and the other reviews I would like to keep my positive rating of this submission.", " I'm satisfied with the response. Besides, after reading the comments from other reviewers, and also the responses, I'm feeling quite po...
[ -1, -1, 8, 8, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8 ]
[ -1, -1, 4, 3, -1, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "boDs3FxxIrX", "bgUv44e8frB", "iclr_2022_BnQhMqDfcKG", "iclr_2022_BnQhMqDfcKG", "eLG6E3Uzrkx", "iclr_2022_BnQhMqDfcKG", "y02uMIEUZwM", "5BudqK9Q_i3", "0dwlTRzWBP_", "J18x--B-Xi", "iclr_2022_BnQhMqDfcKG", "6SrZlpQxGUe", "x4yCzPqtsQW", "NIu9iOazV-", "o7lklEmHaaq", "eqhM0GrEgtb", "bP9bB...
iclr_2022_N8MaByOzUfb
New Insights on Reducing Abrupt Representation Change in Online Continual Learning
In the online continual learning paradigm, agents must learn from a changing distribution while respecting memory and compute constraints. Experience Replay (ER), where a small subset of past data is stored and replayed alongside new data, has emerged as a simple and effective learning strategy. In this work, we focus ...
Accept (Poster)
The manuscript develops new insights into how catastrophic forgetting takes place in the context of continual learning. The authors develop a new method based on this insight and demonstrate that it performs better than or as well as previously developed baselines, as well as showing that it is more widely applicable t...
train
[ "tNEOzbtjcO", "X-7R9ClHCLo", "hnZI2xvvTmq", "ZmhUl2rMS3M", "65V9UTyZMyD", "ELdkxhHy7Xp", "Y4eloQOGiji", "qUWRQnXJS1T", "qhp5Q_y5cl", "g16SEvocwc", "o6pjdyXSnto", "O9AVYY9Cimq", "xy5osT92xRI", "wclzbbN8AjZ", "upEq3RwbAXM", "P24mIHdaB1Q", "FXy_T6olkEs" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response and engagement. \n\nFirst, there is **significant value in explaining and understanding why something works,** which is **exactly** what we do : we show that disruptive gradients at task boundaries are responsible for catastrophic forgetting in this setting, a **novel finding** not pre...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 5 ]
[ "X-7R9ClHCLo", "g16SEvocwc", "iclr_2022_N8MaByOzUfb", "qhp5Q_y5cl", "FXy_T6olkEs", "iclr_2022_N8MaByOzUfb", "wclzbbN8AjZ", "upEq3RwbAXM", "P24mIHdaB1Q", "65V9UTyZMyD", "65V9UTyZMyD", "65V9UTyZMyD", "65V9UTyZMyD", "iclr_2022_N8MaByOzUfb", "iclr_2022_N8MaByOzUfb", "iclr_2022_N8MaByOzUfb"...
iclr_2022_HTVch9AMPa
Delaunay Component Analysis for Evaluation of Data Representations
Advanced representation learning techniques require reliable and general evaluation methods. Recently, several algorithms based on the common idea of geometric and topological analysis of a manifold approximated from the learned data representations have been proposed. In this work, we introduce Delaunay Component Anal...
Accept (Poster)
The paper proposes "Delaunay Component Analysis", a novel manifold learning technique. Reviewers raised several concerns regarding novelty, computational complexity of the method, and presentation. The authors provided a thorough rebuttal and engaged in discussion with the reviewers that addressed the concerns in a sat...
train
[ "1ZdQggOzHMz", "suLXo4CKYNV", "-0o5MLLuuLF", "u_zu_hyAre", "KjlorHeiQUE", "-MthPoJhsvCF", "f-Dpm2_-mSZ", "o15TLHoHEAI", "E5mhBGATlAA", "3BlVXMCboKk", "zSnTy80ishV", "uxVR07bDRW5", "-p9I9ZUGOLV", "jWT4zcU2g5E", "prfFktxq87W", "Th6iDAC7FRj", "H0GFKMM6Le0", "nW3m-S0zk8A", "tO0mAWmuF...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "officia...
[ " We thank the reviewer for the feedback. We updated the submission where we removed the orange color marking the changes.", " We thank the reviewer for the feedback. We updated the submission where we removed the orange color marking the changes.", " I think the submission now is stronger than its original sta...
[ -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3 ]
[ "u_zu_hyAre", "-0o5MLLuuLF", "C7AAYcgbiVH", "-p9I9ZUGOLV", "iclr_2022_HTVch9AMPa", "-p9I9ZUGOLV", "zSnTy80ishV", "uxVR07bDRW5", "1u7reP3a6Yw", "C7AAYcgbiVH", "wD9rRgKiTx2", "C7AAYcgbiVH", "KjlorHeiQUE", "KjlorHeiQUE", "KjlorHeiQUE", "KjlorHeiQUE", "KjlorHeiQUE", "C7AAYcgbiVH", "C...
iclr_2022_Ht85_jyihxp
Efficient and Differentiable Conformal Prediction with General Function Classes
Quantifying the data uncertainty in learning tasks is often done by learning a prediction interval or prediction set of the label given the input. Two commonly desired properties for learned prediction sets are \emph{valid coverage} and \emph{good efficiency} (such as low length or low cardinality). Conformal predict...
Accept (Poster)
This paper describes a few practically relevant extensions of the conformal prediction framework, which has recently become popular in the ML community for providing (marginally valid) prediction sets without making distributional assumptions. The conceptual contributions are not major, given existing work --- without r...
train
[ "LrrGag_qMHC", "CNOtS2AI8-p", "2aWhTJLxRPY", "ZX7NrEYKwCM", "wEksKb_Zj8a", "7_heGhyFcm4", "xHlAO9qY2IN", "GLiOp5L_x2C", "J8DXdyA6_26", "7SxjGItnkkG", "DfvL3UIeIhv", "smge_-Pm1e", "y54CMMFSuN" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Thank you to the authors for their time and effort in replying in detail to the review, and for the updates to the original manuscript! I believe that many of my original concerns have been addressed. My recommendation for acceptance still stands, though like Reviewer Pu7a, my score remains the same (as I don't s...
[ -1, 6, -1, 6, -1, -1, -1, -1, -1, -1, -1, 6, 8 ]
[ -1, 4, -1, 4, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "xHlAO9qY2IN", "iclr_2022_Ht85_jyihxp", "7SxjGItnkkG", "iclr_2022_Ht85_jyihxp", "iclr_2022_Ht85_jyihxp", "y54CMMFSuN", "smge_-Pm1e", "ZX7NrEYKwCM", "GLiOp5L_x2C", "DfvL3UIeIhv", "CNOtS2AI8-p", "iclr_2022_Ht85_jyihxp", "iclr_2022_Ht85_jyihxp" ]
iclr_2022_qnQN4yr6FJz
Variational Inference for Discriminative Learning with Generative Modeling of Feature Incompletion
We are concerned with the problem of distributional prediction with incomplete features: The goal is to estimate the distribution of target variables given feature vectors with some of the elements missing. A typical approach to this problem is to perform missing-value imputation and regression, simultaneously or seque...
Accept (Oral)
While generative models can be used to impute data, this work proposes a novel discriminative learning approach to optimize the data imputation phase by deriving a discriminative version of the traditional variational lower bound (ELBO). The resulting bound can be estimated without bias with Monte Carlo estimation lea...
train
[ "QmbxnBE0Ja5", "8sT5j5FgIq7", "KESvtoG2X7Z", "eW7lo3zvZH", "hywSkr24yW_", "Pqg5kAR1Et2n", "GvzcHl-aUwj", "T9RWwRMEn7ZY", "lwnRX4lZh3", "Beg2TKjfty0", "hq2bSm0sYK2", "wMV5F4agxNA" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have decided to leave the MNAR experiment as future work\nconsidering the necessity and the time constraint.\n\nNote that MCAR is already suitable for demonstrating \nan advantage of vDIG since it can be captured only with the generative approach and the DIG approach,\nnot with the discriminative approach.\nOn...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "GvzcHl-aUwj", "eW7lo3zvZH", "lwnRX4lZh3", "Pqg5kAR1Et2n", "T9RWwRMEn7ZY", "hq2bSm0sYK2", "wMV5F4agxNA", "Beg2TKjfty0", "iclr_2022_qnQN4yr6FJz", "iclr_2022_qnQN4yr6FJz", "iclr_2022_qnQN4yr6FJz", "iclr_2022_qnQN4yr6FJz" ]
iclr_2022_7fFO4cMBx_9
Variational Neural Cellular Automata
In nature, the process of cellular growth and differentiation has led to an amazing diversity of organisms --- algae, starfish, giant sequoia, tardigrades, and orcas are all created by the same generative process. Inspired by the incredible diversity of this biological generative process, we propose a generative model...
Accept (Poster)
Meta Review for Variational Neural Cellular Automata: This paper proposes a generative model, a VAE whose decoder is implemented via neural cellular automata (NCA). The authors show that this model performs well for reconstruction, but they also show that the architecture has some robustness properties against damage d...
val
[ "jHa7G1fEBe8", "GdqEppPaNKJ", "LtNrWCl7Lu", "DOkh1xz3LP2", "XCoqle2rd_u", "7FA6VN3-xCg", "gq8DIwLbdY", "U9Vf9j6yVl", "QFJicywXjCG", "khwCBIBtwRi", "1dYLpi1TzpB", "aOf-e1GBP0H", "7xUY_mKm7Ea", "mQmXOVcJYXU", "KN_tl_1RGAd", "04xuNIBCoDI" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer. Have you had a chance to look at our rebuttal and updated paper? We're eagerly awaiting your response.", " Dear Reviewer. Have you had a chance to look at our rebuttal and updated paper? We're eagerly awaiting your response.", " Thank you for your clarifications. Since neither better samples, n...
[ -1, -1, -1, -1, 5, -1, 8, -1, -1, -1, -1, -1, -1, -1, 5, 5 ]
[ -1, -1, -1, -1, 4, -1, 5, -1, -1, -1, -1, -1, -1, -1, 4, 4 ]
[ "khwCBIBtwRi", "mQmXOVcJYXU", "mQmXOVcJYXU", "7FA6VN3-xCg", "iclr_2022_7fFO4cMBx_9", "1dYLpi1TzpB", "iclr_2022_7fFO4cMBx_9", "aOf-e1GBP0H", "iclr_2022_7fFO4cMBx_9", "04xuNIBCoDI", "XCoqle2rd_u", "7xUY_mKm7Ea", "gq8DIwLbdY", "KN_tl_1RGAd", "iclr_2022_7fFO4cMBx_9", "iclr_2022_7fFO4cMBx_9...
iclr_2022_gJLEXy3ySpu
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Top-$k$ predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web searches. $\ell_0$-norm adversarial perturbation characterizes an attack that arbitrarily modifies some features of an input such that a classifier makes an incorrect prediction for the pertu...
Accept (Poster)
Thank you for your submission to ICLR. The reviewers ultimately have mixed opinions on this paper, but reading in a bit more depth I don't feel that the critical comments raised by the sole negative reviewer really raise valid points. Specifically, the fact that this reviewer directly asks e.g. for comparisons to Lev...
test
[ "CXFL1fzyx-Z", "h7gdeWZkUNz", "QT_vj5FPEQI", "WTVfXcdYwYX", "mygRHD1yBWs", "BC9jBU58tJL", "2Cvoc523ZPt", "fBbJX96YgBv" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper provides an almost tight l0-norm certified robustness guarantee for top-k predictions against adversarial perturbations, which extends certified radius of the top-1 prediction from Levine & Feizi (2019) to that of the top-k predictions, and the l2-norm certified radius from Jia et al. (2020) to the l0-n...
[ 5, -1, -1, -1, -1, 6, 6, 6 ]
[ 3, -1, -1, -1, -1, 2, 3, 4 ]
[ "iclr_2022_gJLEXy3ySpu", "BC9jBU58tJL", "CXFL1fzyx-Z", "2Cvoc523ZPt", "fBbJX96YgBv", "iclr_2022_gJLEXy3ySpu", "iclr_2022_gJLEXy3ySpu", "iclr_2022_gJLEXy3ySpu" ]
iclr_2022_vds4SNooOe
Superclass-Conditional Gaussian Mixture Model For Learning Fine-Grained Embeddings
Learning fine-grained embeddings is essential for extending the generalizability of models pre-trained on "coarse" labels (e.g., animals). It is crucial to fields for which fine-grained labeling (e.g., breeds of animals) is expensive, but fine-grained prediction is desirable, such as medicine. The dilemma necessitates ...
Accept (Spotlight)
This work presents an approach to learning good representations for few-shot learning when supervision is provided at the super-class level and is otherwise missing at the sub-class level. After some discussion with the authors, all reviewers are supportive of this work being accepted. Two reviewers were even supporti...
train
[ "RBm6XnTnXmQ", "tbMzKlgx40U", "QUmQLkiPwr", "quZUEbiILx6", "WD1OwGGEY7", "i2oayk5-Vl", "-UEKEe9TKq4", "vA10l55UIdr", "KFq6A5XVXO", "OCpX7oNWimK", "PTK1xjLSDQ1y", "4TVh9fz6jv", "Pg5vdZfBHfH", "rdePXl5Xi7H", "ym4GZZh3wN1I", "zHqasKBmC4qY", "DbA39Rj9fwB", "BzFE-c0o_RIX", "ys0IJFSLvm...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author"...
[ " Thank you for reviewing our response, and we are happy that our response helps with the questions. We also appreciate your recognition of our work! The following are our responses to your suggestions.\n\n(1) Q7: Adding text in the manuscript clarifying the explanation the authors provide here about exactly what i...
[ -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6 ]
[ -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2 ]
[ "QUmQLkiPwr", "iclr_2022_vds4SNooOe", "hQcnva9y1ri", "tbMzKlgx40U", "tbMzKlgx40U", "tbMzKlgx40U", "vA10l55UIdr", "G7bsMAFifBw", "tbMzKlgx40U", "tbMzKlgx40U", "tbMzKlgx40U", "Pg5vdZfBHfH", "4JWYJ4tRM0", "tbMzKlgx40U", "tbMzKlgx40U", "tbMzKlgx40U", "tbMzKlgx40U", "tbMzKlgx40U", "wr...
iclr_2022_oapKSVM2bcj
Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation
Tensor computations underlie modern scientific computing and deep learning. A number of tensor frameworks have emerged, varying in execution model, hardware support, memory management, model definition, etc. However, tensor operations in all frameworks follow the same paradigm. Recent neural network architectures demonstrate...
Accept (Oral)
All reviewers agree that this paper is a useful and valuable contribution to ML engineering. - insightful analysis .. highly user friendly operator design - "useful and I can see it having large adoption in the community of scientific computing" ... " - "Personally I tend to buy these advantages of einops" ... "How...
val
[ "cO_1wrl0Z4", "nMhgbEVtmSg", "3xW6VJXvsY", "8MoweRGg0oc", "3wJKrbZinVU", "-R5jvlwWrl", "9i-DJQ_fcN_", "da0AUoD1Z5-", "lvp6cvCCno", "n9LmQ3TwTIR", "RBl_wqaeXZR", "7o7TpIlF1qZ", "vZo-nJ8Ln7h", "VlDHW7rSJ2U", "V8R-ue0QNU" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers and Area Chairs,\n\nThanks again for your constructive and specific feedback which helps us to improve clarity and presentation of the paper.\n\nWe have incorporated fruitful suggestions on improving the paper's text and included additional information requested by reviewers. The main changes made ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "iclr_2022_oapKSVM2bcj", "3xW6VJXvsY", "3wJKrbZinVU", "iclr_2022_oapKSVM2bcj", "vZo-nJ8Ln7h", "vZo-nJ8Ln7h", "vZo-nJ8Ln7h", "vZo-nJ8Ln7h", "V8R-ue0QNU", "VlDHW7rSJ2U", "7o7TpIlF1qZ", "iclr_2022_oapKSVM2bcj", "iclr_2022_oapKSVM2bcj", "iclr_2022_oapKSVM2bcj", "iclr_2022_oapKSVM2bcj" ]
iclr_2022_q4tZR1Y-UIs
It Takes Four to Tango: Multiagent Self Play for Automatic Curriculum Generation
We are interested in training general-purpose reinforcement learning agents that can solve a wide variety of goals. Training such agents efficiently requires automatic generation of a goal curriculum. This is challenging as it requires (a) exploring goals of increasing difficulty, while ensuring that the agent (b) is e...
Accept (Poster)
While one reviewer remained concerned about the possibility of convergence to bad equilibria and felt that the proposed method appears to be four minor changes from prior work (PAIRED), the authors demonstrate empirically that the proposed changes make a significant difference in their evaluation. Other reviewers were ...
train
[ "oSliU699LLB", "rmDBfwC6JB", "yTEuH7TvGTX", "-Qdn2ghrimI", "obTi97UaLcr", "80QPXiL2pl", "6r1vbcTMK9H", "VvcDaV526Z", "Fka802G6Fx", "_yZPQ7j9wSI", "vDo_MIuiKGn", "wuvqxtpQJq9", "fgjIt2q4WS2", "epufp2Wmd_q", "5meJl4jbrO8", "yzG81dSq9Ne", "69qSwrz2vay", "pmqSKG7XRa", "EWgCTdHZvG1", ...
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", ...
[ " We thank the reviewers again for their thorough feedback and reviews. The following is a summary of changes made to address reviewer concerns and suggestions, which are highlighted in pink text in the PDF:\n\nMain Text:\n- We added a citation to *Automatic Curriculum Learning for Deep RL: A short survey.* (Portel...
[ -1, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 8, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, -1, -1, -1, -1, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4 ]
[ "iclr_2022_q4tZR1Y-UIs", "wPfRBo0gDmk", "OL63kGfAB85", "vDo_MIuiKGn", "80QPXiL2pl", "Fka802G6Fx", "iclr_2022_q4tZR1Y-UIs", "wPfRBo0gDmk", "OA-5_7XZxo", "OL63kGfAB85", "fgjIt2q4WS2", "iclr_2022_q4tZR1Y-UIs", "epufp2Wmd_q", "pmqSKG7XRa", "yzG81dSq9Ne", "69qSwrz2vay", "pmqSKG7XRa", "E...
iclr_2022_ULfq0qR25dY
Maximum n-times Coverage for Vaccine Design
We introduce the maximum $n$-times coverage problem that selects $k$ overlays to maximize the summed coverage of weighted elements, where each element must be covered at least $n$ times. We also define the min-cost $n$-times coverage problem where the objective is to select the minimum set of overlays such that the sum...
Accept (Poster)
The paper introduces the maximum n-times coverage, a new NP-hard (and non-submodular) optimization problem. It is shown that the problem can naturally arise in ML-based vaccine design, and two heuristics are given to solve the problem. The results are used to produce a pan-strain COVID vaccine. The reviewers and I th...
train
[ "Zer4K1Trdb7", "W4faTUGJLef", "4ZYFn4ucZLw", "6Q4UgLScSHM", "jW6VZepM0w", "vAcMpgh2z6U", "pps5KTxRRZ", "IlzSDH2Bh2k", "CH9_TX5PUQN", "tjqo0WG4Pe", "tRPouczD-pt" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "The paper introduces a variant/generalization of multi set multi cover problem, where the aim is to maximize the weight of the elements covered at least n times by up to k overlays (subsets of a given input familiy of sets over the elements' universe). The authors show that the objective function is not submodular...
[ 6, -1, -1, -1, 6, -1, -1, -1, -1, 8, 6 ]
[ 3, -1, -1, -1, 4, -1, -1, -1, -1, 3, 3 ]
[ "iclr_2022_ULfq0qR25dY", "6Q4UgLScSHM", "Zer4K1Trdb7", "vAcMpgh2z6U", "iclr_2022_ULfq0qR25dY", "IlzSDH2Bh2k", "tRPouczD-pt", "jW6VZepM0w", "jW6VZepM0w", "iclr_2022_ULfq0qR25dY", "iclr_2022_ULfq0qR25dY" ]
iclr_2022_hniLRD_XCA
DeSKO: Stability-Assured Robust Control with a Deep Stochastic Koopman Operator
The Koopman operator theory linearly describes nonlinear dynamical systems in a high-dimensional functional space and allows linear control methods to be applied to highly nonlinear systems. However, the Koopman operator does not account for any uncertainty in dynamical systems, causing it to perform poorly in real-world...
Accept (Poster)
The paper was seen positively by all reviewers. The strengths of the paper are: - Intuitive and interesting combination of Koopman Operators and Optimal Control for Reinforcement Learning - Convincing experiments on challenging benchmark tasks - All of the issues of the reviewers (advantages to SAC, gaps in the theory a...
train
[ "v9Wz5W5LG6x", "-eBg7Cdo4-M", "x1MM0I2dLA-", "NlGe3g5G2cc", "MYp-gHUZCfd", "G2QiRW2lAGP", "9YYHoV705t", "C6vto9INRvn", "U9zkL1LKCm3", "B67D8l268be" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi,\n\nThank you. The changes made to the manuscript address the concerns I raised in my review. I would be happy to see this work published in its revised form. I prefer to keep my original score.", " We greatly appreciate the reviewer’s affirmative comments on the technical contributions of this work and deta...
[ -1, -1, -1, -1, -1, -1, 6, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 2, 4, 3, 5 ]
[ "x1MM0I2dLA-", "B67D8l268be", "-eBg7Cdo4-M", "U9zkL1LKCm3", "C6vto9INRvn", "9YYHoV705t", "iclr_2022_hniLRD_XCA", "iclr_2022_hniLRD_XCA", "iclr_2022_hniLRD_XCA", "iclr_2022_hniLRD_XCA" ]
iclr_2022_fPhKeld3Okz
Gradient Step Denoiser for convergent Plug-and-Play
Plug-and-Play methods constitute a class of iterative algorithms for imaging problems where regularization is performed by an off-the-shelf denoiser. Although Plug-and-Play methods can lead to tremendous visual performance for various image problems, the few existing convergence guarantees are based on unrealistic (or ...
Accept (Poster)
The paper proposes a plug-and-play method for solving imaging problems. Plug-and-play methods use a denoiser to solve linear inverse problems. The proposed method uses convex optimization tools from the analysis of proximal gradient methods to provide convergence guarantees. The algorithm is applied...
train
[ "oRvkGZD-M1r", "xU1EqLs88ed", "4t1MDUqj0O", "RLaRmo1gOk", "tGI8eX_Tf4B", "vx0z9IshY_l", "1UFTpY3BcZT", "PKCcdQNvcD3", "gWe8RJmMYMB", "G1crShFvQF", "jcWFw6JTAc7", "aps8Uld7cGy", "C_HJmFD7gV" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer" ]
[ "This paper makes an extension of the plug-and-play framework by formulating an explicit regularizer $g(x)=||x - N_\\sigma(x)||_2^2$ whose gradient $\\nabla g$ corresponds to the noise residual $x-D_\\sigma(x)$. By replacing the proximal of regularizer with this gradient step denoiser, the authors proposed the GS-P...
[ 6, -1, -1, -1, -1, -1, 6, -1, -1, -1, -1, 8, 6 ]
[ 4, -1, -1, -1, -1, -1, 5, -1, -1, -1, -1, 4, 4 ]
[ "iclr_2022_fPhKeld3Okz", "C_HJmFD7gV", "C_HJmFD7gV", "oRvkGZD-M1r", "aps8Uld7cGy", "1UFTpY3BcZT", "iclr_2022_fPhKeld3Okz", "1UFTpY3BcZT", "oRvkGZD-M1r", "1UFTpY3BcZT", "iclr_2022_fPhKeld3Okz", "iclr_2022_fPhKeld3Okz", "iclr_2022_fPhKeld3Okz" ]