| paper_id (string, 19–21 chars) | paper_title (string, 8–170 chars) | paper_abstract (string, 8–5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29–10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
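The twelve columns above can be mirrored as a record type. This is a hypothetical sketch — the class and method names are illustrative, not part of the dataset itself; the only grounded assumption is the one visible in the rows below: the six per-review lists in each record are parallel (same length).

```python
# Hypothetical record type mirroring the column header above.
# Names (ReviewRecord, is_aligned) are illustrative, not from the dataset.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewRecord:
    paper_id: str
    paper_title: str
    paper_abstract: str
    paper_acceptance: str
    meta_review: str
    label: str                      # one of 3 classes (e.g. "train", "val", "test")
    review_ids: List[str] = field(default_factory=list)
    review_writers: List[str] = field(default_factory=list)
    review_contents: List[str] = field(default_factory=list)
    review_ratings: List[int] = field(default_factory=list)
    review_confidences: List[int] = field(default_factory=list)
    review_reply_tos: List[str] = field(default_factory=list)

    def is_aligned(self) -> bool:
        """All per-review lists must have the same length (one entry per post)."""
        n = len(self.review_ids)
        return all(len(lst) == n for lst in (
            self.review_writers, self.review_contents, self.review_ratings,
            self.review_confidences, self.review_reply_tos))
```

A loader for this dataset could run `is_aligned()` on each row as a sanity check before further processing.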
iclr_2022_WE4qe9xlnQw | A Program to Build E(N)-Equivariant Steerable CNNs | Equivariance is becoming an increasingly popular design choice to build data efficient neural networks by exploiting prior knowledge about the symmetries of the problem at hand. Euclidean steerable CNNs are one of the most common classes of equivariant networks. While the constraints these architectures need to satisfy... | Accept (Poster) | The paper proposes an approach to constructing steerable equivariant CNNs over arbitrary subgroups of E(3), by generalizing the Wigner-Eckart theorem for steerable kernels in Lang & Weiler (2020). The intuitive idea is to use a steerable basis for a large group like O(3) to build a basis for a subgroup of interest like... | train | [
"zwfn0FWLrLy",
"dwWwqW7CjDk",
"KHooBggZpd8",
"Hs0WwRhJZL",
"HybJa2W_Ny7",
"3xyGkXklnN",
"nh4Ekiw1qyQ",
"ekK0HclHlD",
"gVoPJ5t5qkl",
"hbLz235hfXs"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors present a general algorithm for constructing steerable equivariant neural networks. The main idea is that by constructing a steerable basis for a large symmetry group such as O(3), using the ideas in the paper, we can readily construct equivariant networks which are equivariants to smaller subgroups, s... | [
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
2,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2022_WE4qe9xlnQw",
"nh4Ekiw1qyQ",
"iclr_2022_WE4qe9xlnQw",
"ekK0HclHlD",
"hbLz235hfXs",
"gVoPJ5t5qkl",
"zwfn0FWLrLy",
"KHooBggZpd8",
"iclr_2022_WE4qe9xlnQw",
"iclr_2022_WE4qe9xlnQw"
] |
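In the record above, the rating and confidence lists use `-1` at positions whose writer is `author` (or where a reviewer post carries no score) — i.e. `-1` is a sentinel for "no score", not a real rating. A small hedged helper (the function name is illustrative) for averaging only genuine reviewer scores:

```python
def mean_score(scores):
    """Average the genuine scores in a ratings/confidences list, skipping the
    -1 sentinel used for author comments and unscored posts.
    Returns None if the list contains no real scores."""
    real = [s for s in scores if s != -1]
    return sum(real) / len(real) if real else None

# Ratings from the record above (E(N)-Equivariant Steerable CNNs):
ratings = [6, -1, 8, -1, -1, -1, -1, -1, 6, 6]
print(mean_score(ratings))  # 6.5
```

Filtering the sentinel first matters: naively averaging the raw list would pull the mean far below the actual reviewer consensus.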
iclr_2022_Q42f0dfjECO | Differentially Private Fine-tuning of Language Models | We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, which achieve the state-of-the-art privacy versus utility tradeoffs on many standard NLP tasks. We propose a meta-framework for this problem, inspired by the recent success of highly parame... | Accept (Poster) | Discussions and additional baseline experiments added during the author response period were enough to motivate multiple reviewers to change their recommendation to an accept during the author response. Multiple reviewers felt that the technical novelty of the work was limited, but the rebuttal cleared up their concern... | train | [
"GsfbgMSusgw",
"Z1WjbiL9Whn",
"2QTzqf9HgA",
"CqhLFaDJ0Ui",
"Gu62qx2aEGb",
"cwOPrk25RUk",
"MErArkWtxp3",
"4eLSFsnR7SS",
"FGPqvWHO-dT",
"JqxAWYklDHT"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Hi reviewer oFqs, just wanted to point out that we have added more baselines as requested. Are these what you were hoping to see, or are there other baselines that you think would help improve the paper? Thanks!",
"This paper demonstrates the feasibility of fine-tuning large language models (pretrained on publi... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
5,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"Gu62qx2aEGb",
"iclr_2022_Q42f0dfjECO",
"iclr_2022_Q42f0dfjECO",
"iclr_2022_Q42f0dfjECO",
"JqxAWYklDHT",
"Z1WjbiL9Whn",
"FGPqvWHO-dT",
"2QTzqf9HgA",
"iclr_2022_Q42f0dfjECO",
"iclr_2022_Q42f0dfjECO"
] |
iclr_2022_8in_5gN9I0 | Triangle and Four Cycle Counting with Predictions in Graph Streams | We propose data-driven one-pass streaming algorithms for estimating the number of triangles and four cycles, two fundamental problems in graph analytics that are widely studied in the graph data stream literature. Recently, Hsu et al. (2019) and Jiang et al. (2020) applied machine learning techniques in other data stre... | Accept (Poster) | The paper proposed a novel one-pass efficient streaming algorithm for estimating the number of triangles and four cycles. The concerns raised by reviewers were nicely addressed in the rebuttal and all the reviewers agree that the paper is above bar for publication. | train | [
"BqbjXifrviy",
"cuUBNOVuENg",
"ouUE0Ly69yQ",
"r2rZaHSmY4V"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer"
] | [
"The paper proposes a one pass streaming algorithms for estimating the number of triangles in adjacency list and arbitrary order models and 4-cycle in arbitrary edge arrival order. The authors propose algorithms for these streaming models under the assumption of a \"heavy\" edge oracle/ML model. The paper support t... | [
6,
6,
-1,
8
] | [
4,
3,
-1,
4
] | [
"iclr_2022_8in_5gN9I0",
"iclr_2022_8in_5gN9I0",
"iclr_2022_8in_5gN9I0",
"iclr_2022_8in_5gN9I0"
] |
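The `review_reply_tos` column encodes the discussion thread: a post whose parent equals the row's `paper_id` is a top-level review (as in the four-post record above, where every entry replies to `iclr_2022_8in_5gN9I0`), while a post whose parent is another post's id is a reply. A hedged sketch (function name is illustrative) for separating the two:

```python
def top_level_reviews(paper_id, review_ids, review_reply_tos):
    """Return the ids of posts that reply directly to the paper itself,
    i.e. top-level reviews rather than replies within a thread."""
    return [rid for rid, parent in zip(review_ids, review_reply_tos)
            if parent == paper_id]

# All four posts in the record above reply directly to the paper:
ids = ["BqbjXifrviy", "cuUBNOVuENg", "ouUE0Ly69yQ", "r2rZaHSmY4V"]
parents = ["iclr_2022_8in_5gN9I0"] * 4
print(top_level_reviews("iclr_2022_8in_5gN9I0", ids, parents))
```

The same zip of `review_ids` and `review_reply_tos` can be used to rebuild the full reply tree (e.g. mapping each parent id to its children) when per-thread analysis is needed.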
iclr_2022_o-1v9hdSult | Bridging the Gap: Providing Post-Hoc Symbolic Explanations for Sequential Decision-Making Problems with Inscrutable Representations | As increasingly complex AI systems are introduced into our daily lives, it becomes important for such systems to be capable of explaining the rationale for their decisions and allowing users to contest these decisions. A significant hurdle to allowing for such explanatory dialogue could be the {\em vocabulary mismatch}... | Accept (Poster) | The paper gives a method for generating contrastive explanations, in terms of user-specified concepts, for an agent in a sequential decision making setting.
The reviewers found the paper to be a strong contribution to explainable AI and RL. There were some concerns about the writing, but the revisions have addressed ... | train | [
"THWwbFqcRl5",
"OCkZjhv3xWz",
"O1cOONFWslQ",
"2Jhl4eSu3c",
"xR-zZF9ht-O",
"Xj6QBXcNweF",
"BVDy6wJHPjV",
"p6ZhGHBN4KP",
"JvBioGfDch",
"TMcb-zm7TrT",
"fgMUVE592R8"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a symbolic model implemented alongside a planning AI, which can respond to human queries about why one plan is taken by the AI, and not another. The system takes as input a user concept vocabulary, where each concept is implemented by binary classifier indicating whether the proposition is pr... | [
5,
-1,
-1,
10,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2022_o-1v9hdSult",
"JvBioGfDch",
"2Jhl4eSu3c",
"iclr_2022_o-1v9hdSult",
"iclr_2022_o-1v9hdSult",
"fgMUVE592R8",
"TMcb-zm7TrT",
"2Jhl4eSu3c",
"THWwbFqcRl5",
"iclr_2022_o-1v9hdSult",
"iclr_2022_o-1v9hdSult"
] |
iclr_2022_r5qumLiYwf9 | MaGNET: Uniform Sampling from Deep Generative Network Manifolds Without Retraining | Deep Generative Networks (DGNs) are extensively employed in Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and their variants to approximate the data manifold, and data distribution on that manifold. However, training samples are often obtained based on preferences, costs, or convenience produ... | Accept (Poster) | The paper proposes a simple method for uniform sampling from generative manifold using change of variables formula. The method works by first sampling a much larger number of samples (N) from uniform distribution in the latent space and then does sampling by replacement (using probability proportional to change in volu... | test | [
"PXLLGU4hCdL",
"MhjpJnmLgV",
"NQaIozH2RXc",
"panu_aJVE9n",
"UZ7PNJDXLE",
"ARMBV70njO2",
"W3Uj6UaGVgp",
"BvIpRXcoe6",
"321izR4qZhm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" **Q4. Computationally, the authors demonstrate in Appendix D that sampling with N past 250k does not affect the Precision-Recall metric, but I could not find what N is in the experiments shown. And since each image sample requires computing the Jacobian of the DGN w.r.t. its input, I wonder what is the approximat... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
8,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"321izR4qZhm",
"W3Uj6UaGVgp",
"iclr_2022_r5qumLiYwf9",
"BvIpRXcoe6",
"321izR4qZhm",
"NQaIozH2RXc",
"NQaIozH2RXc",
"iclr_2022_r5qumLiYwf9",
"iclr_2022_r5qumLiYwf9"
] |
iclr_2022_USC0-nvGPK | Information Gain Propagation: a New Way to Graph Active Learning with Soft Labels | Graph Neural Networks (GNNs) have achieved great success in various tasks, but their performance highly relies on a large number of labeled nodes, which typically requires considerable human effort. GNN-based Active Learning (AL) methods are proposed to improve the labeling efficiency by selecting the most valuable nod... | Accept (Poster) | This paper proposes a new approach to graph-based active learning, using the query whether the predictions made by the current model are correct or not.
Although the theoretical underpinnings of the proposed approach are a bit weak, the problem formulation that is newly proposed in this paper makes sense from a practic... | val | [
"mo9hDWuabPs",
"4DgbpHSaW88",
"KXJ-JGmRSw",
"HGpBlyz1Ad",
"dJMmds1g-lp",
"d8M-wjYxYDY",
"rfhflaPngiq",
"jqqhp4XCIAm",
"er5z6OLWzjO",
"CQ9GxhANANz",
"HbmWgEIsYC",
"a4JI27XIuti",
"S1dMeHaXK0g",
"cIlxyBVsghF",
"k5t0L4BzlzX",
"4-CGJsubvc9",
"9pCUE2sm8cA",
"4OeE9hu-oG"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the continuous comments.\n\n### 1. Technical errors\nWe have corrected the notation errors pointed out by the reviewer in Equation (4) and Equation (8) as follows: \nEquation(4)\n\n\\begin{equation}\n\\small\n \\begin{aligned}\n a_t\\^{\\prime}=\n \\begin{cases}\n \\mathbb{1}... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"er5z6OLWzjO",
"HGpBlyz1Ad",
"dJMmds1g-lp",
"S1dMeHaXK0g",
"S1dMeHaXK0g",
"jqqhp4XCIAm",
"cIlxyBVsghF",
"CQ9GxhANANz",
"HbmWgEIsYC",
"HbmWgEIsYC",
"k5t0L4BzlzX",
"4OeE9hu-oG",
"9pCUE2sm8cA",
"4-CGJsubvc9",
"iclr_2022_USC0-nvGPK",
"iclr_2022_USC0-nvGPK",
"iclr_2022_USC0-nvGPK",
"icl... |
iclr_2022_XOh5x-vxsrV | Cross-Trajectory Representation Learning for Zero-Shot Generalization in RL | A highly desirable property of a reinforcement learning (RL) agent -- and a major difficulty for deep RL approaches -- is the ability to generalize policies learned on a few tasks over a high-dimensional observation space to similar tasks not seen during training. Many promising approaches to this challenge consider RL... | Accept (Poster) | Meta Review of Cross-Trajectory Representation Learning for Zero-Shot Generalization in RL
This work investigates a zero-shot generalization method for RL based on an online-clustering adapting it to RL. The intuition of this approach (called Cross Trajectory Representation Learning, CTRL) is that the self-supervised ... | train | [
"AU7DjNSs3Mj",
"IA_IeMFywx2",
"QGEACj60aEY",
"gCY0du1TtMJ",
"pnXsP6gVwk",
"PrH0InWY9YY",
"1xLJtqwL5vQ",
"4AmvbjfO9KC",
"XFXQhYKpopD",
"1b1d7hxeUY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes an extension of the MYOW (Azabou et al 2021) self-supervised learning technique, combining it with SwAV (Caron et al 2021)’s method to perform online clustering, and adapt it for RL to assess generalisation performance on Procgen.\nIt compares against several recent baselines (DBC, PSE, CURL, P... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2022_XOh5x-vxsrV",
"1xLJtqwL5vQ",
"pnXsP6gVwk",
"1b1d7hxeUY",
"XFXQhYKpopD",
"4AmvbjfO9KC",
"AU7DjNSs3Mj",
"iclr_2022_XOh5x-vxsrV",
"iclr_2022_XOh5x-vxsrV",
"iclr_2022_XOh5x-vxsrV"
] |
iclr_2022_7KdAoOsI81C | Transfer RL across Observation Feature Spaces via Model-Based Regularization | In many reinforcement learning (RL) applications, the observation space is specified by human developers and restricted by physical realizations, and may thus be subject to dramatic changes over time (e.g. increased number of observable features). However, when the observation space changes, the previous policy will li... | Accept (Poster) | This work suggests using models of the environment as regularizers for performing explicit transfer in RL. Here are some of the highlights from the reviews and subsequent discussions:
* Novel problem
* Unclear to some of the reviewers why the problem setting is in fact important.
* Well-written
* Interesting t... | train | [
"LgHzvZ3gTol",
"uT0R71LnVDW",
"sG9ZPAx4cib",
"4gr6pk6ggBA",
"KUGjbvSVDke",
"zav6yuH9nyj",
"KQhbCX2E60V",
"YPhCwFTz9Uz",
"IlPC5jvNjCv",
"Md2FVDVUZw0",
"DRU9PDgBbWB",
"NZ3O0QoA5bx",
"kGGyNSV4pUf",
"LmSKD_k9h0_",
"HQEOfZeDM8A",
"OSrNjEVup4S",
"t-9_DCQrDmq",
"AeQDA3Ow25h",
"FwYyLZ7aU... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
"The paper addresses the problem of adapting a policy to a novel representation of the environment. Different from most previous work the paper focuses on the problem of a change in the representation of the observation provided by the environment, i.e., assuming that the underlying dynamics P and R remains mostly ... | [
5,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_7KdAoOsI81C",
"Md2FVDVUZw0",
"KUGjbvSVDke",
"iclr_2022_7KdAoOsI81C",
"KQhbCX2E60V",
"LgHzvZ3gTol",
"4gr6pk6ggBA",
"IlPC5jvNjCv",
"yeNO37Ykq6R",
"iclr_2022_7KdAoOsI81C",
"iclr_2022_7KdAoOsI81C",
"Md2FVDVUZw0",
"Md2FVDVUZw0",
"Md2FVDVUZw0",
"Md2FVDVUZw0",
"Md2FVDVUZw0",
"LgH... |
iclr_2022_OUz_9TiTv9j | A Zest of LIME: Towards Architecture-Independent Model Distances | Definitions of the distance between two machine learning models either characterize the similarity of the models' predictions or of their weights. While similarity of weights is attractive because it implies similarity of predictions in the limit, it suffers from being inapplicable to comparing models with different ar... | Accept (Poster) | This paper presents a method, called Zest, to measure the similarity between two supervised machine learning models based on their model explanations computed by the LIME feature attribution method. The technical novelty and significant are high, and results are strong. Reviewers had clarifying questions regarding ex... | val | [
"Sss5tNIeeav",
"iSDI4wuEmbI",
"ZsiMKcmelOC",
"jMATLYZhvx5",
"1B-26wJvHk",
"M-CBFTjvk7k",
"BbCyP5NdDSF",
"sBHndfDQcmF",
"F2YLB5gEg1",
"tBcGZIyFHsz",
"BT6ZDy9nXql",
"OoBHowy8964",
"5R4gykX5MG8",
"J-gFdJ7osZv"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed response from the authors.",
" Dear Reviewer MKjr,\n\nThank you for your time reading our responses. We were wondering if we were able to adequately address your concerns.\n\n\nBest,\n\nPaper3112 authors",
"The paper presents a method (Zest) for measuring the distance or similarity bet... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"1B-26wJvHk",
"OoBHowy8964",
"iclr_2022_OUz_9TiTv9j",
"J-gFdJ7osZv",
"J-gFdJ7osZv",
"5R4gykX5MG8",
"ZsiMKcmelOC",
"OoBHowy8964",
"ZsiMKcmelOC",
"OoBHowy8964",
"OoBHowy8964",
"iclr_2022_OUz_9TiTv9j",
"iclr_2022_OUz_9TiTv9j",
"iclr_2022_OUz_9TiTv9j"
] |
iclr_2022_siCt4xZn5Ve | What Happens after SGD Reaches Zero Loss? --A Mathematical Framework | Understanding the implicit bias of Stochastic Gradient Descent (SGD) is one of the key challenges in deep learning, especially for overparametrized models, where the local minimizers of the loss function $L$ can form a manifold. Intuitively, with a sufficiently small learning rate $\eta$, SGD tracks Gradient Descent (G... | Accept (Spotlight) | All the reviewers agree that this paper made a solid contribution of understanding the algorithmic regularization of SGD noise (in particular the label noise for regression) after reaching zero loss. The framework is novel and has the potential to extend to other settings. | train | [
"OHCox5kRYaO",
"kYytoJyrZAn",
"rFGMLuegfYh",
"xk7yLcZT-mA",
"CRG4wPD_eIY",
"HtY-NGnhGe",
"9xTYzUNnwgz",
"FNx6ngg5ZH",
"4-9TivqOEHP",
"qdld717veIH",
"X8vcq6-pVoU",
"2bNVaQOxeHc",
"1NCrtYUxWl-",
"bRzkHa5VK65",
"JsIcTsINEn",
"9vslQbto1vs",
"inwwYyQMiPc",
"Vq2zZKVbO3X",
"55ZoDgtbVS",... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" \n**How does the current approach extend to cross-entropy loss when the minimizer doesn't exist?** \n\nYou're correct that when the global minimum cannot be attained, our theorem will be vacuous, as there is no such manifold of minimizers. However, we kindly note that **this is not an inherent issue of cross-entr... | [
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
10
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"xk7yLcZT-mA",
"2bNVaQOxeHc",
"CRG4wPD_eIY",
"X8vcq6-pVoU",
"inwwYyQMiPc",
"iclr_2022_siCt4xZn5Ve",
"X8vcq6-pVoU",
"X8vcq6-pVoU",
"X8vcq6-pVoU",
"X8vcq6-pVoU",
"iclr_2022_siCt4xZn5Ve",
"1NCrtYUxWl-",
"cGaXrXgrp3D",
"Vq2zZKVbO3X",
"iclr_2022_siCt4xZn5Ve",
"55ZoDgtbVS",
"HtY-NGnhGe",
... |
iclr_2022_metRpM4Zrcb | Continual Learning with Filter Atom Swapping | Continual learning has been widely studied in recent years to resolve the catastrophic forgetting of deep neural networks. In this paper, we first enforce a low-rank filter subspace by decomposing convolutional filters within each network layer over a small set of filter atoms. Then, we perform continual learning with ... | Accept (Spotlight) | The authors propose a memory-based continual learning method that decomposes the models' parameters and that shares a large number of the decomposed parameters across tasks. In other words, only a small number of parameters are task-specific and the memory usage of storing models from previous tasks is hence a fraction... | train | [
"90W9wKmNyAN",
"4C8M7MGWmsa",
"8Z5qyG7mLa",
"GGXSoT9GfPC",
"8DH-Auud0Jo",
"IUizFn5T1bW7",
"5bX3fPOvO_",
"7GS3LCu7BC1",
"5IshJu3OVt",
"qiU2YcywCF-"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper introduces a model for continual learning based on the decomposition of linear filters into low-rank components, called atoms. Specifically, the authors decompose convolutional filters shaped (c,c',k,k) into two components: i) alpha, shaped (c,c',m) and D, shaped (m,k,k). The former is learned on the fi... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"iclr_2022_metRpM4Zrcb",
"GGXSoT9GfPC",
"GGXSoT9GfPC",
"90W9wKmNyAN",
"qiU2YcywCF-",
"5IshJu3OVt",
"7GS3LCu7BC1",
"iclr_2022_metRpM4Zrcb",
"iclr_2022_metRpM4Zrcb",
"iclr_2022_metRpM4Zrcb"
] |
iclr_2022_qhAeZjs7dCL | Generative Models as a Data Source for Multiview Representation Learning | Generative models are now capable of producing highly realistic images that look nearly indistinguishable from the data on which they are trained. This raises the question: if we have good enough generative models, do we still need datasets? We investigate this question in the setting of learning general-purpose visual... | Accept (Poster) | All of the reviewers appreciate the clarity of exposition and the importance of the problem studied. That said, I agree with Reviewer P9Ys that the results are somewhat underwhelming. The baselines appear weak and are likely not well tuned on the Stanford car dataset. Key question that remains unanswered in my opinion ... | train | [
"JIFeve-4dc7",
"g31BGq3ry2c",
"0UmXMuUiggz",
"mVuG9RFaUoI",
"2beOikMZ6fv",
"YqDSeygnftM",
"y52pia_c3Sm",
"Ze8hHvpMEE-",
"zq7MpGKkbxR",
"cm-rI-t3odEa",
"P_FzS9uGDbn_",
"BVeFgNYj5d",
"_5ivvFO5Vxb2",
"q1DV-UuKBkqF",
"VZknh46Q7F9",
"6Q2hevMzBLq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The work investigates **generative models as a data source for self-supervised representation learning**. In particular, the authors propose to form contrastive pairs in the latent space of the generative model and combine it with standard contrastive learning where pairs are formed in image space. \nWhile perform... | [
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_qhAeZjs7dCL",
"Ze8hHvpMEE-",
"iclr_2022_qhAeZjs7dCL",
"YqDSeygnftM",
"0UmXMuUiggz",
"y52pia_c3Sm",
"P_FzS9uGDbn_",
"JIFeve-4dc7",
"q1DV-UuKBkqF",
"iclr_2022_qhAeZjs7dCL",
"0UmXMuUiggz",
"6Q2hevMzBLq",
"VZknh46Q7F9",
"iclr_2022_qhAeZjs7dCL",
"iclr_2022_qhAeZjs7dCL",
"iclr_202... |
iclr_2022_OzyXtIZAzFv | Task-Induced Representation Learning | In this work, we evaluate the effectiveness of representation learning approaches for decision making in visually complex environments. Representation learning is essential for effective reinforcement learning (RL) from high-dimensional in- puts. Unsupervised representation learning approaches based on reconstruction, ... | Accept (Poster) | The paper is an interesting take on representation learning, using (prior) tasks to determine which information is important. The problem setting is somewhat difficult to pin down, so that that finding the correct comparisons is not obvious and opinions differ on many details of the setup. However, this is not a fault ... | test | [
"W9_MklmegiU",
"wt5am3-tbkJ",
"hUhKrh4pcdF",
"IZmolZu3j_N",
"TvnlqxBPzzo",
"Yx4KtCphIhX",
"WpDivcZVUL9",
"9GNGqt8Iiu",
"6qgfT6TPgDF",
"6_NUwhDi-n7",
"phjU_Ty-Pd0",
"X89JJ7q0PlG",
"rFZLEA7X1sW",
"d27kY8VVH0P",
"RXtZO6sbkeg",
"HIEYM8OVkv",
"QVLU679K54q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a representation learning approach for multi-task reinforcement learning. The idea is to pre-train a model using multi-task data either with rewards or behavior cloning. Then fine-tune it on a new task hopefully more efficiently. They contrast it to unsupervised representation learning.\n\nFine-... | [
6,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_OzyXtIZAzFv",
"RXtZO6sbkeg",
"iclr_2022_OzyXtIZAzFv",
"iclr_2022_OzyXtIZAzFv",
"9GNGqt8Iiu",
"6_NUwhDi-n7",
"6qgfT6TPgDF",
"6qgfT6TPgDF",
"d27kY8VVH0P",
"phjU_Ty-Pd0",
"X89JJ7q0PlG",
"QVLU679K54q",
"IZmolZu3j_N",
"IZmolZu3j_N",
"W9_MklmegiU",
"hUhKrh4pcdF",
"iclr_2022_OzyX... |
iclr_2022_dpXL6lz4mOQ | LEARNING GUARANTEES FOR GRAPH CONVOLUTIONAL NETWORKS ON THE STOCHASTIC BLOCK MODEL | An abundance of neural network models and algorithms for diverse tasks on graphs have been developed in the past five years. However, very few provable guarantees have been available for the performance of graph neural network models. This state of affairs is in contrast with the steady progress on the theoretical unde... | Accept (Poster) | This paper shows that (under some parameter range) graph convolutional networks learns communities in the stochastic block model. The result is clean, the proof techniques rely on partitioning neurons of three types and seems applicable to more general settings. The reviewers agree that the main theorems are interestin... | train | [
"wU7WtG6QZwt",
"uhx1vsN9Sv_",
"s_8dlTohHX2",
"RhbOpI2LFwn",
"f_Li2PzODz",
"n4WJWh6UiH",
"vltI-fnWj1P",
"UFJ-NSiDDF9",
"vDkeYWjTb0x"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a theoretical analysis of two-layer graph convolutional networks (GCN). The main goal is to study the behaviors of GCNs when the inputs are random graphs generated by stochastic block models. The stochastic block models are constructed by three components: the number of vertices, intra-connectio... | [
5,
-1,
-1,
-1,
-1,
-1,
8,
5,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2022_dpXL6lz4mOQ",
"vDkeYWjTb0x",
"UFJ-NSiDDF9",
"vltI-fnWj1P",
"wU7WtG6QZwt",
"iclr_2022_dpXL6lz4mOQ",
"iclr_2022_dpXL6lz4mOQ",
"iclr_2022_dpXL6lz4mOQ",
"iclr_2022_dpXL6lz4mOQ"
] |
iclr_2022_CJzi3dRlJE- | Connectome-constrained Latent Variable Model of Whole-Brain Neural Activity | The availability of both anatomical connectivity and brain-wide neural activity measurements in C. elegans make the worm a promising system for learning detailed, mechanistic models of an entire nervous system in a data-driven way. However, one faces several challenges when constructing such a model. We often do not ha... | Accept (Poster) | The authors build an encoding model of whole-brain brain activity by integrating incomplete functional data with anatomical/connectomics data. This work is significant from a computational neuroscience perspective because it constitutes a proof of concept regarding how whole brain calcium imaging data can be used to c... | train | [
"vKJ4YADd02",
"D1-IHvqdEtV",
"wiqtXanoPMH",
"ScC5CRzcl5O",
"-2QjWVE0GY",
"5C9f9N0Dy-N",
"snkph53XyvR",
"NTcYJKnQg-x",
"NAXXjQuvAQl",
"fAz1z0eOaW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose a biologically constrained latent linear dynamical model of the C. elegans nervous system. They use connectomic information (including chemical vs electrical synapses) to constrain connections between units during inference. They fit the model to calcium imaging data from whole C. elegans using... | [
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6,
3
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_CJzi3dRlJE-",
"5C9f9N0Dy-N",
"iclr_2022_CJzi3dRlJE-",
"fAz1z0eOaW",
"wiqtXanoPMH",
"vKJ4YADd02",
"-2QjWVE0GY",
"NAXXjQuvAQl",
"iclr_2022_CJzi3dRlJE-",
"iclr_2022_CJzi3dRlJE-"
] |
iclr_2022_YpSxqy_RE84 | How Low Can We Go: Trading Memory for Error in Low-Precision Training | Low-precision arithmetic trains deep learning models using less energy, less memory and less time. However, we pay a price for the savings: lower precision may yield larger round-off error and hence larger prediction error. As applications proliferate, users must choose which precision to use to train a new model, and ... | Accept (Poster) | The paper considers a learning problem to determine the best low-precision configuration within the memory budget. It is an interesting problem that could be of interest to the community. Overall, the reviewers were fairly positive on the paper and believe the paper give interesting insights into how to use limited m... | train | [
"d-91Zef_2Gv",
"9ICGWx0nBYa",
"7moA7-q_eb",
"6zE6gOATrTi",
"KxKb61jxfI",
"iIJFp1QDFrg",
"CEnnbCA4V3T",
"VC9uruHjsm",
"_qREJ1WRyO",
"UXMnoWrdW5N",
"WYixWaT3Ga"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the additional comments and clarifications. I appreciate the time the authors and the reviewers spent to discuss this manuscript. I will keep my current score.",
" I appreciate the authors' responses to my comments. After carefully reading your responses and other reviewers’ comments, I have decid... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"CEnnbCA4V3T",
"VC9uruHjsm",
"iclr_2022_YpSxqy_RE84",
"KxKb61jxfI",
"UXMnoWrdW5N",
"WYixWaT3Ga",
"_qREJ1WRyO",
"7moA7-q_eb",
"iclr_2022_YpSxqy_RE84",
"iclr_2022_YpSxqy_RE84",
"iclr_2022_YpSxqy_RE84"
] |
iclr_2022_BRFWxcZfAdC | LOSSY COMPRESSION WITH DISTRIBUTION SHIFT AS ENTROPY CONSTRAINED OPTIMAL TRANSPORT | We study an extension of lossy compression where the reconstruction distribution is different from the source distribution in order to account for distributional shift due to processing. We formulate this as a generalization of optimal transport with an entropy bottleneck to account for the rate constraint due to compr... | Accept (Poster) | This paper discusses the problem of cross-domain lossy compression on the basis of its reformulation as an entropy-constrained optimal transport. Two average distortion measures (without and with common randomness) are defined (Definitions 2 and 3), and some of their properties are investigated, as summarized in Theore... | train | [
"Alj3XN3-DwA",
"-FYZheIZsmY",
"iDPcw9TRFVq",
"SMC8OjXUB52",
"FwZQ_I9eAhM",
"G2na0T57UTX",
"i-18PFQ8rCC",
"KRwZ87q20Y5",
"xr6337wq6Qe",
"bEcJGhBU-B6",
"bIJMQz2Letg",
"9wT0u3bf4B9",
"ubdRbi3iwUF",
"k__9SVcBYGM",
"KlaB62Sry0",
"yg35HJ0gy_s",
"p0f79uRLbwt"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nWe thank the reviewers for their constructive comments that has helped us prepare a revised version of the paper. The major changes are as follows.\n\n1) Based on a comment by reviewer SKN1, we have revised the title of the paper to *Lossy Compression with Distribution Shift as Entropy Constrained Optimal Trans... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
3,
4
] | [
"iclr_2022_BRFWxcZfAdC",
"xr6337wq6Qe",
"p0f79uRLbwt",
"FwZQ_I9eAhM",
"G2na0T57UTX",
"p0f79uRLbwt",
"yg35HJ0gy_s",
"yg35HJ0gy_s",
"KlaB62Sry0",
"KlaB62Sry0",
"k__9SVcBYGM",
"ubdRbi3iwUF",
"iclr_2022_BRFWxcZfAdC",
"iclr_2022_BRFWxcZfAdC",
"iclr_2022_BRFWxcZfAdC",
"iclr_2022_BRFWxcZfAdC"... |
iclr_2022_hm2tNDdgaFK | Learning 3D Representations of Molecular Chirality with Invariance to Bond Rotations | Molecular chirality, a form of stereochemistry most often describing relative spatial arrangements of bonded neighbors around tetrahedral carbon centers, influences the set of 3D conformers accessible to the molecule without changing its 2D graph connectivity. Chirality can strongly alter (bio)chemical interactions, pa... | Accept (Poster) | This work presents ChIRo, a method that incorporates 3D torsion angles of a molecular conformer to specifically handle chirality. Specifically ChIRo uses trigonometric functions to encode the torsion angles, which are invariant to bond torsion but sensitive to chirality, thus capable of distinguishing between enantiome... | train | [
"2TiGcPNkxAy",
"drw0ileTFl",
"eJEgvW6rKel",
"WpXxqpShPq",
"b9ZDc6RApcW",
"GIEW0Ef3zMo",
"4yJpwnclIr8",
"Latzn2xQYWQ",
"2lRLYwEgjN",
"UIxLPvdCQc",
"UA3mw135rk-",
"kofsE0BTC-c",
"QzNQOI5zA-Z",
"U7eFpF08ybu",
"85Dnl6TyYck",
"6atHe-rzdO7",
"W2x4R6sRleH",
"XixpmpkV6fq",
"FmkH1h0Zsj4",... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"... | [
"The paper focuses on improving the capacity of GNN models, using chirality identification as the case study. As an important characteristic representing the geometry of molecules, chirality has a fundamental impact on molecular properties and downstream applications. It is a major extension of [1]. The distinguishin... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_hm2tNDdgaFK",
"6atHe-rzdO7",
"W2x4R6sRleH",
"XixpmpkV6fq",
"2lRLYwEgjN",
"UA3mw135rk-",
"QzNQOI5zA-Z",
"iclr_2022_hm2tNDdgaFK",
"kofsE0BTC-c",
"iclr_2022_hm2tNDdgaFK",
"85Dnl6TyYck",
"Latzn2xQYWQ",
"QKYb-DAINhN",
"85Dnl6TyYck",
"UIxLPvdCQc",
"W2x4R6sRleH",
"XixpmpkV6fq",
... |
iclr_2022_vkaMaq95_rX | EXACT: Scalable Graph Neural Networks Training via Extreme Activation Compression | Training Graph Neural Networks (GNNs) on large graphs is a fundamental challenge due to the high memory usage, which is mainly occupied by activations (e.g., node embeddings). Previous works usually focus on reducing the number of nodes retained in memory.
In parallel, unlike what has been developed for other types of ... | Accept (Poster) | There are numerous known methods for memory reduction used in CNNs.
This paper takes two such---quantization (Q) and random projection (RP)---and applies them to GNNs. This is a novelty, but I agree with the reviewers: on its own this novelty would not be "surprising" enough to report at ICLR.
The paper further goe... | train | [
"WoLRM1nVTb",
"9gOLmUT5LG",
"TRkZ04gylqD",
"2OmA_eG_TP0",
"DZlrUqq70B7",
"iHfMqw39s0u",
"-MFS8sEkcdM",
"IHUuvm-tJ8k",
"LxuKeDYIJdX",
"gkz-9NZC7Lq",
"6S9oCm2xo31",
"Zfqix1P0z0u",
"9Kmw2G9Eyhc",
"We6CYv3OZEp",
"wwXWTMH6H3",
"FUpF2Dx4AV2",
"5PtJT31iiUW",
"K0RdQHwa8ei",
"_UAtNZ5WDXk"... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",... | [
" > (Author) Reviewer 2GtZ argues that the existing solutions (mainly quantization) can tackle the GNN's scalability issue well. \n\n> (Reviewer 2GtZ) This is completely untrue. I did not argue that quantization alone (e.g. quantization aware training) was a good solution to GNN scalability.\n\nAfter closer examin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"9gOLmUT5LG",
"-MFS8sEkcdM",
"-MFS8sEkcdM",
"-MFS8sEkcdM",
"-MFS8sEkcdM",
"-MFS8sEkcdM",
"LxuKeDYIJdX",
"K0RdQHwa8ei",
"iclr_2022_vkaMaq95_rX",
"6S9oCm2xo31",
"_UAtNZ5WDXk",
"_UAtNZ5WDXk",
"_UAtNZ5WDXk",
"_UAtNZ5WDXk",
"iclr_2022_vkaMaq95_rX",
"mokB9MCAJZ2",
"ae2_I-LtSEH",
"ae2_I-L... |
iclr_2022_48RBsJwGkJf | CrossMatch: Cross-Classifier Consistency Regularization for Open-Set Single Domain Generalization | Single domain generalization (SDG) is a challenging scenario of domain generalization, where only one source domain is available to train the model. Typical SDG methods are based on the adversarial data augmentation strategy, which complements the diversity of source domain to learn a robust model. Existing SDG methods... | Accept (Poster) | The paper presents a new problem: open-set single domain generalization, where only one source domain is available and unknown classes and unseen target domains increase the difficulty of the task. To tackle this challenging problem, this paper designs a CrossMatch approach to improve the performance of SDG methods on ... | train | [
"a-hT077n3x",
"QsIffzmUpAi",
"AezoQY6uPDU",
"xb4OLlEVQQZ",
"xvLRfWUPsrN",
"Rml6Al9jmMt",
"2EbiJ2XjKuO",
"04AYymM5WFm",
"5h3WVp_zH3",
"2fHdTJSVbf",
"POVkrd3fq5",
"ZT1iBJZ3rzm",
"fekzYLMZqPf",
"HDvAAhsAzII"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In domain generalization (DG), the label set of the target domain is that of the source domains. However, we might meet unknown classes in the target domain, which will cause significant prediction errors on such unknown-class data points in the target domain. To avoid this issue, this paper formulates a new problem sett... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2022_48RBsJwGkJf",
"a-hT077n3x",
"xb4OLlEVQQZ",
"fekzYLMZqPf",
"Rml6Al9jmMt",
"2EbiJ2XjKuO",
"HDvAAhsAzII",
"5h3WVp_zH3",
"2fHdTJSVbf",
"a-hT077n3x",
"ZT1iBJZ3rzm",
"iclr_2022_48RBsJwGkJf",
"iclr_2022_48RBsJwGkJf",
"iclr_2022_48RBsJwGkJf"
] |
iclr_2022_ckZY7DGa7FQ | A Fine-Tuning Approach to Belief State Modeling | We investigate the challenge of modeling the belief state of a partially observable Markov system, given sample-access to its dynamics model. This problem setting is often approached using parametric sequential generative modeling methods. However, these methods do not leverage any additional computation at inference t... | Accept (Poster) | The paper proposes to fine-tune the belief states of an MDP, later using the learned model for decision-time planning, e.g. via search.
The contribution is well-presented, motivated and focused to a specific scenario, which is generally considered challenging in the literature. This scenario is exemplified by the co... | train | [
"LpXH9HvTAni",
"6LGIztrusEu",
"TcyZYBuXYUB",
"-WIqGgupoa",
"VzilXXmOyc",
"KuxHZw2X2Zu",
"0lSqy-HMC5y",
"z2prraYXjry",
"EXIA6kgSVa6",
"s4ts6Jv5Cz1",
"C_lhLxCisbe",
"CzokDudQDQi",
"W_pqkBnElk0",
"GB_mxXkwJc5"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" > The paper is a little light on exact algorithmic details (and I did not see more in the appendix). In particular, I think it would be useful to exactly detail out the algorithm being proposed in a more fleshed-out manner somewhere in the main text or appendix.\n\nWe agree! We will include additional details in ... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
3,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"6LGIztrusEu",
"iclr_2022_ckZY7DGa7FQ",
"VzilXXmOyc",
"C_lhLxCisbe",
"0lSqy-HMC5y",
"CzokDudQDQi",
"LpXH9HvTAni",
"iclr_2022_ckZY7DGa7FQ",
"s4ts6Jv5Cz1",
"z2prraYXjry",
"GB_mxXkwJc5",
"W_pqkBnElk0",
"iclr_2022_ckZY7DGa7FQ",
"iclr_2022_ckZY7DGa7FQ"
] |
iclr_2022_Ek7PSN7Y77z | Multi-Stage Episodic Control for Strategic Exploration in Text Games | Text adventure games present unique challenges to reinforcement learning methods due to their combinatorially large action spaces and sparse rewards. The interplay of these two factors is particularly demanding because large action spaces require extensive exploration, while sparse rewards provide limited feedback. Thi... | Accept (Spotlight) | I thank the authors for their submission and active participation in the discussions. All reviewers are unanimously leaning towards acceptance of this paper. Reviewers in particular liked that the paper is well-written and easy to follow [186e,TAdH,Exgo], well motivated [TAdH], interesting [PsKh], novel [186e] and prov... | test | [
"HBeBhnoCTCr",
"9J10bcnAVz",
"CQWSfIV4zJ",
"7DQt6IC0PVW",
"MqjjJSTAupb",
"gAIWsjWrt0O",
"q_hWw6oPxlE",
"h9pZPt9iQ9A",
"UITF31OhBG0",
"JFZt_ssYhv",
"uTluVT-PSzm",
"UoSJDo4eipv",
"bksEv_7y_N8o",
"Bs8NlgzLD7",
"RPSDpyNHg4y",
"8a2834Yvqil0",
"o4Y-ON4K6V",
"OK0UfHkloBp",
"LMKnjpagcN",... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Thank you for the valuable feedback and discussion! ",
" Thank you for the valuable feedback and discussion!",
" Thank you for the valuable discussion. We will make sure to restructure the inverse dynamics part of our methods section in the final version of the paper, as well as add a footnote to the Dragon r... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"h9pZPt9iQ9A",
"MqjjJSTAupb",
"gAIWsjWrt0O",
"iclr_2022_Ek7PSN7Y77z",
"UoSJDo4eipv",
"q_hWw6oPxlE",
"Bs8NlgzLD7",
"OK0UfHkloBp",
"iclr_2022_Ek7PSN7Y77z",
"iclr_2022_Ek7PSN7Y77z",
"iclr_2022_Ek7PSN7Y77z",
"o4Y-ON4K6V",
"iclr_2022_Ek7PSN7Y77z",
"RPSDpyNHg4y",
"7DQt6IC0PVW",
"xMrrbzgABl_"... |
iclr_2022_iMqTLyfwnOO | Augmented Sliced Wasserstein Distances | While theoretically appealing, the application of the Wasserstein distance to large-scale machine learning problems has been hampered by its prohibitive computational cost. The sliced Wasserstein distance and its variants improve the computational efficiency through the random projection, yet they suffer from low accur... | Accept (Poster) | The paper presents a variant of the sliced Wasserstein distance, where the slicing operation is performed with a neural network. The resulting distance is studied, and experiments are performed on synthetic data and with the distance as a cost in generative modeling.
While the idea of the paper is not that novel, the work is overall well exe... | train | [
"cuSVInb0Z4s",
"nNj8RLJGnI",
"CZzPRP8dwkn",
"KbmYGkev11i",
"s98v1iDgNDT",
"OlZdCu-GQft",
"jxY38HhM36K",
"XPNcb3BTwO",
"Wi0WLuWNpk1",
"_2lT9zXvFHW",
"Nu_7D38Ca7e",
"FgNbGVLXPIP",
"FRU_b0d5jKl",
"zvhDkeKjqxK",
"H_hkjE_W6I",
"yiEdIh-SWLa",
"eKgAPQdRJ1p",
"PeGijZDvNpT"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their thorough response to my questions and concerns.",
"The paper proposes a new slice-based approach to efficiently compute the Wasserstein distance between two distributions $\\nu$ and $\\mu$. The method termed ASWD (augmented sliced Wasserstein Distance) first projects the samples f... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"Nu_7D38Ca7e",
"iclr_2022_iMqTLyfwnOO",
"jxY38HhM36K",
"s98v1iDgNDT",
"nNj8RLJGnI",
"KbmYGkev11i",
"s98v1iDgNDT",
"zvhDkeKjqxK",
"iclr_2022_iMqTLyfwnOO",
"FRU_b0d5jKl",
"XPNcb3BTwO",
"H_hkjE_W6I",
"PeGijZDvNpT",
"eKgAPQdRJ1p",
"yiEdIh-SWLa",
"iclr_2022_iMqTLyfwnOO",
"iclr_2022_iMqTLy... |
iclr_2022_TrjbxzRcnf- | Memorizing Transformers | Language models typically need to be trained or finetuned in order to acquire new knowledge, which involves updating their weights.
We instead envision language models that can simply read and memorize new data at inference time, thus acquiring new knowledge immediately. In this work, we extend language models with t... | Accept (Spotlight) | This paper studies the problem of dealing with long contexts within a Transformer architecture.
The key contribution is a kNN memory module that works in concert with a Transformer by integrating upper layers with additional retrieved context.
The idea is simple but the execution is good. While the idea is remini... | train | [
"r2wKYBSrEPu",
"HL_YDGg7lSw",
"BJn9x23H60q",
"bg9r4wePlUF",
"yHUL0rvl_F",
"FAm1Jqwi3g",
"LgP8cPGvsf8",
"jvjw_iMXyc",
"lT6o1hjM-e",
"N9mDU9d8CIx",
"sOomepUR3GY",
"7v7h_kfiSoN",
"TQSzikXaY0K",
"1N19fSGrMfm",
"z7NPBapsyn",
"LEe5_4lwSEP",
"ttIgY5EvR6g",
"6glDzJVGsm",
"7fv6o2kjgT2",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_r... | [
"This work proposes to deal with long-range dependencies in sequences using a transformer language model augmented with memory. The memory comes in the form of a k-Nearest Neighbor (kNN). The memory stores the last M (Key, Values) from the previous-to-last layer, and proposes k neighbors from the memory per elemen... | [
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2022_TrjbxzRcnf-",
"iclr_2022_TrjbxzRcnf-",
"LEe5_4lwSEP",
"TQSzikXaY0K",
"FAm1Jqwi3g",
"ttIgY5EvR6g",
"sOomepUR3GY",
"7v7h_kfiSoN",
"ttIgY5EvR6g",
"LEe5_4lwSEP",
"7fv6o2kjgT2",
"z7NPBapsyn",
"iclr_2022_TrjbxzRcnf-",
"6glDzJVGsm",
"86eil-gt7wU",
"HL_YDGg7lSw",
"r2wKYBSrEPu",
... |
iclr_2022_JPkQwEdYn8 | Neural Processes with Stochastic Attention: Paying more attention to the context dataset | Neural processes (NPs) aim to stochastically complete unseen data points based on a given context dataset. NPs essentially leverage a given dataset as a context representation to derive a suitable identifier for a novel task. To improve the prediction accuracy, many variants of NPs have investigated context embedding a... | Accept (Poster) | This work receives mostly positive ratings. Most reviewers agree that the use of Bayesian attention in neural processes is novel, and its interpretation is interesting. Since reviewer TBTA requests a substantial revision of the submission and fortunately the authors’ feedback is thoroughly satisfactory, we highly recomme... | val | [
"ZBRojp28Arl",
"UL1CfZWrfYQ",
"2q4447ve6ke",
"ZUSMU9_CND",
"YCoQN64nsfa",
"kkgVfH_d5jl",
"xoIXziMvYHI",
"o7jO337unAG",
"Xfy0SDIIOSW",
"tLF8eKYLw5n",
"i1Fe5MvPEUt",
"GLoMlafAT5x",
"Zc3IK3bN9dt",
"Xbq_SzT3qXX",
"TvpZwEztQal",
"WLo-XHCWWc",
"5iiLhRkTuwb",
"JFa46lrXNeH",
"v33JX05Eql4... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"officia... | [
" We appreciate your time and effort in reviewing our paper, as well as your constructive comments. \n\nTo begin, we are relieved that the issues mentioned by reviewer TBTA about the experiment results appear to have been resolved through the rebuttal procedure; comparison adding the contextual prior and excluding ... | [
-1,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2
] | [
"UL1CfZWrfYQ",
"iclr_2022_JPkQwEdYn8",
"iclr_2022_JPkQwEdYn8",
"iclr_2022_JPkQwEdYn8",
"iclr_2022_JPkQwEdYn8",
"o7jO337unAG",
"2q4447ve6ke",
"tLF8eKYLw5n",
"UL1CfZWrfYQ",
"kSCdjngMNLg",
"8-12aqoqDbQ",
"8-12aqoqDbQ",
"2q4447ve6ke",
"2q4447ve6ke",
"UL1CfZWrfYQ",
"2q4447ve6ke",
"2q4447v... |
iclr_2022_CSfcOznpDY | Recursive Disentanglement Network | Disentangled feature representation is essential for data-efficient learning. The feature space of deep models is inherently compositional. Existing $\beta$-VAE-based methods, which only apply disentanglement regularization to the resulting embedding space of deep models, cannot effectively regularize such compositiona... | Accept (Poster) | This paper proposes an algorithm for achieving disentangled representations by encouraging low mutual information between features at each layer, rather than only at the encoder output, and proposes a neural architecture for learning. Empirically, the proposed method achieves good disentanglement metric and likelihood ... | test | [
"A_9G5_7dOUj",
"WG2yryZanIN",
"4joY3BRlJAX",
"fa7KK6X3MSr",
"X6N8t2coYml",
"fDiMKhrtXA6",
"cHi_z-TpGw7",
"yV_0UVOW2bW",
"BNFIQrShiX3",
"CdkeIEpgvDv",
"6I5bdfZmJJB",
"r1FNdRXnp7p",
"OtKOzzPGUT2",
"L33ciIlR7A",
"LhcbKJu3wg0",
"q_LPZuQ7nnH",
"e3pCGYwROro",
"8Sbe9jaIn29",
"eXGGIWwjMG... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Thank you for the explanations, which cleared my questions.",
"This paper proposes a recursive disentanglement network (RecurD) for learning disentangled representations from an information-theoretic perspective. The experimental results show that RecurD outperforms some existing baselines on two benchmark datas... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"JYqpn6d-fXB",
"iclr_2022_CSfcOznpDY",
"fa7KK6X3MSr",
"X6N8t2coYml",
"fDiMKhrtXA6",
"cHi_z-TpGw7",
"yV_0UVOW2bW",
"e3pCGYwROro",
"8Sbe9jaIn29",
"6I5bdfZmJJB",
"r1FNdRXnp7p",
"OtKOzzPGUT2",
"Rmam8GNtDO",
"iclr_2022_CSfcOznpDY",
"q_LPZuQ7nnH",
"e3pCGYwROro",
"8Sbe9jaIn29",
"WG2yryZan... |
iclr_2022_izvwgBic9q | Unsupervised Learning of Full-Waveform Inversion: Connecting CNN and Partial Differential Equation in a Loop | This paper investigates unsupervised learning of Full-Waveform Inversion (FWI), which has been widely used in geophysics to estimate subsurface velocity maps from seismic data. This problem is mathematically formulated by a second order partial differential equation (PDE), but is hard to solve. Moreover, acquiring velo... | Accept (Poster) | The paper presents an unsupervised method for learning Full-Waveform Inversion in geophysics, by combining a differentiable physics simulation with a CNN based inversion network.
The reviewers agreed that the paper was well written and described an important advance but were concerned about limited novelty and a poten... | train | [
"tk2lbevAvlb",
"Bc9sAVn2LiS",
"Ux_M3cQvZo",
"BmBXtv7RHFX",
"PUbDknJBw0",
"lNQReS64jgy",
"yPHgab6ZrmK",
"TwNn57TsUuU",
"I6UE72tn49I",
"IBkS8VUs0ND",
"jUQR7Bgjyv8",
"TvScvJaJ0p",
"WUXxnD3q1VU",
"ykMVf9aQNfH",
"0RklDJquc_R",
"v5oG8zbBF2i",
"mAdZMQ-Hkc4",
"IZ5IMi1add",
"cPqMvpEfu3D",... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"officia... | [
" \nDear reviewer, we sincerely thank you for your valuable suggestions to help us improve our paper. Also, we would love to hear any feedback or comment from you on our response and provide additional information to answer your questions. Thank you very much.",
" Due to similar reasons we have discussed in ViT, ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"0RklDJquc_R",
"BmBXtv7RHFX",
"BmBXtv7RHFX",
"kGJ33dx_ag3",
"kGJ33dx_ag3",
"kGJ33dx_ag3",
"kGJ33dx_ag3",
"kGJ33dx_ag3",
"iclr_2022_izvwgBic9q",
"jUQR7Bgjyv8",
"iclr_2022_izvwgBic9q",
"0RklDJquc_R",
"do6HhLasHd9",
"P0UR3KDbI20",
"cPqMvpEfu3D",
"RA9Fdo8HeWG",
"RA9Fdo8HeWG",
"RA9Fdo8H... |
iclr_2022_E4EE_ohFGz | Diurnal or Nocturnal? Federated Learning of Multi-branch Networks from Periodically Shifting Distributions | Federated learning has been deployed to train machine learning models from decentralized client data on mobile devices in practice. The clients available for training are observed to have periodically shifting distributions changing with the time of day, which can cause instability in training and degrade the model per... | Accept (Poster) | The paper considers FL with periodically shifting distributions, which is a very relevant and timely research question in the area of federated learning and learning under distribution shifts. The paper proposes an interesting unsupervised way to learn to group clients into different branches during training, using a... | train | [
"1PEPXoK1dat",
"WV_qnvkwfLY",
"_7bl0snk8WQ",
"KIPCIPNF_qm",
"EzPJbwRnq-",
"XK85IktgGdK",
"d5mA8bAhhGf",
"uqvzfWnmVEn",
"hLnRFT5S6p",
"EnmTxueqp9f",
"iXXXKrcp80t",
"EGb0yhFC8CQ",
"DH206wPuStl",
"wgtTq1dXXFE",
"TZGttT-SYj4",
"h90QHQo5_Ho",
"APxA0IN8N4E"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the comprehensive response! While the experimental results and the method proposed in this paper are solid, the distribution shift being considered is a little constraining. I can see how it is relevant to a production use case. But, in terms of analysis the implicit dichotomy of clients belonging to o... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
8
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"hLnRFT5S6p",
"KIPCIPNF_qm",
"iclr_2022_E4EE_ohFGz",
"XK85IktgGdK",
"_7bl0snk8WQ",
"_7bl0snk8WQ",
"TZGttT-SYj4",
"iclr_2022_E4EE_ohFGz",
"wgtTq1dXXFE",
"iclr_2022_E4EE_ohFGz",
"TZGttT-SYj4",
"TZGttT-SYj4",
"APxA0IN8N4E",
"h90QHQo5_Ho",
"iclr_2022_E4EE_ohFGz",
"iclr_2022_E4EE_ohFGz",
... |
iclr_2022_nwKXyFvaUm | Diverse Client Selection for Federated Learning via Submodular Maximization | In every communication round of federated learning, a random subset of clients communicate their model updates back to the server which then aggregates them all. The optimal size of this subset is not known and several studies have shown that typically random selection does not perform very well in terms of ... | Accept (Poster) | The paper proposes a novel method for (diverse) client selection at each round of a federated learning procedure with the aim of improving performance in terms of convergence, learning efficiency and fairness. The main idea is to introduce a facility location objective to quantify how representative/informative is the... | train | [
"oBziK56entx",
"tVYt4hBBs-q",
"gUQYGaVHw_",
"rGVf89Ge48o",
"7DgAIj4IISW",
"zwOEbUDyW3s",
"_N_s2YDB1zW",
"6gFZOPxZDF",
"vKVAWt3UHzD",
"qo5EzGfECdw",
"uWBrGT_NLw2",
"wctYIQDirMx"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **[C_1]**\n- In theory, as we mentioned in our response, $c$ (and $c_1=(1-\\frac{1}{e})c$) is data dependent and the theoretical bound holds for $c=\\min_{s\\in V} G(\\{s\\})$. \n- In experiments, we do not need to set the value of $c_1$ because running a greedy (or stochastic greedy) algorithm to minimize $G(S)$... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
3
] | [
"tVYt4hBBs-q",
"_N_s2YDB1zW",
"7DgAIj4IISW",
"iclr_2022_nwKXyFvaUm",
"wctYIQDirMx",
"uWBrGT_NLw2",
"qo5EzGfECdw",
"vKVAWt3UHzD",
"iclr_2022_nwKXyFvaUm",
"iclr_2022_nwKXyFvaUm",
"iclr_2022_nwKXyFvaUm",
"iclr_2022_nwKXyFvaUm"
] |
iclr_2022_C8Ltz08PtBp | Distributional Reinforcement Learning with Monotonic Splines | Distributional Reinforcement Learning (RL) differs from traditional RL by estimating the distribution over returns to capture the intrinsic uncertainty of MDPs. One key challenge in distributional RL lies in how to parameterize the quantile function when minimizing the Wasserstein metric of temporal differences. Existi... | Accept (Poster) | The paper proposes monotonic splines as an improvement on current approaches to parametrising quantiles in distributional RL. The idea is an obvious, natural improvement on what exists, and yields improved experimental results. | train | [
"spfPVfiuDg",
"Ht3_TcOBH4S",
"F4WbOmSsT3N",
"UnsNbiUKIo",
"HpMhRBvbQRU",
"y2nNBPjCFxX",
"SaWEWaxo_LU",
"Gsr68t9n04p",
"kN3QNc41TMt",
"ry398P67gkq",
"w6dJQ6m-i9o"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a new neural network design to represent quantile functions for distributional reinforcement learning, based on smooth rational-quadratic splines. This representation has the advantage of being continuously differentiable.\nThe loss is computed by evaluating the quantile loss on a set of unifor... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"iclr_2022_C8Ltz08PtBp",
"F4WbOmSsT3N",
"HpMhRBvbQRU",
"Gsr68t9n04p",
"spfPVfiuDg",
"w6dJQ6m-i9o",
"ry398P67gkq",
"kN3QNc41TMt",
"iclr_2022_C8Ltz08PtBp",
"iclr_2022_C8Ltz08PtBp",
"iclr_2022_C8Ltz08PtBp"
] |
iclr_2022_ci7LBzDn2Q | Deep ReLU Networks Preserve Expected Length | Assessing the complexity of functions computed by a neural network helps us understand how the network will learn and generalize. One natural measure of complexity is how the network distorts length - if the network takes a unit-length curve as input, what is the length of the resulting curve of outputs? It has been wi... | Accept (Poster) | The paper studies the length distortion in a random (deep) ReLU network: namely, it bounds the expectation and higher moments of the length of the curve in feature space produced by applying a random ReLU network to a smooth curve. Because the product of layer norms grows exponentially in the depth, it might be natural t... | train | [
"67dXKgBddYs",
"innzURliG5W",
"-TMebHTwD5q",
"tjJsM3G-qcH",
"TFML1wJw14b",
"ooJqgZ5UDfn",
"WLiy2AVcFu",
"14SfzjXQMsV",
"el4SEdS-XS",
"wumjQj1TBlo",
"IAx6IC2rnrA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the various clarifications!",
" The authors addressed my comments and I changed my score to 6.",
"The paper's main focus is on the following question: \"How the length of the output of a NN is distorted with its input length\". \nThe main assumption is that the weights of the NN are sampled from... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"14SfzjXQMsV",
"tjJsM3G-qcH",
"iclr_2022_ci7LBzDn2Q",
"-TMebHTwD5q",
"WLiy2AVcFu",
"IAx6IC2rnrA",
"wumjQj1TBlo",
"el4SEdS-XS",
"iclr_2022_ci7LBzDn2Q",
"iclr_2022_ci7LBzDn2Q",
"iclr_2022_ci7LBzDn2Q"
] |
iclr_2022_eMudnJsb1T5 | Sampling with Mirrored Stein Operators | We introduce a new family of particle evolution samplers suitable for constrained domains and non-Euclidean geometries. Stein Variational Mirror Descent and Mirrored Stein Variational Gradient Descent minimize the Kullback-Leibler (KL) divergence to constrained target distributions by evolving particles in a dual space... | Accept (Spotlight) | The paper proposes to extend mirror descent to sampling with stein operator when the density is defined on a constrained domain and non euclidean geometry. All reviewers agreed on the novelty and the merits of the paper. Accept | train | [
"8GyPIjVxGi",
"QxIYa3iVBJ",
"WKbt35m5M_-",
"E9zcN5aO8h",
"McfevX6DBty",
"Yxnam_F-Bx9",
"czyG4hcuj_",
"Jlc10rPuP82",
"luNd2AwRyYi",
"fdPZc9lslVk",
"LHSiOs0tS6s",
"raz1UT750VO",
"TxQytmFb2Cg"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer, \n\nThanks for reading the rebuttal. We are glad that you find it useful. As indicated by the rating, do you have any more concerns about the draft that we could try to answer? \n\nBest,\nAuthors",
" I would like to thank the authors for the response which answers most of my questions.",
" Dear... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
10,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"QxIYa3iVBJ",
"Yxnam_F-Bx9",
"E9zcN5aO8h",
"TxQytmFb2Cg",
"raz1UT750VO",
"LHSiOs0tS6s",
"fdPZc9lslVk",
"luNd2AwRyYi",
"LHSiOs0tS6s",
"iclr_2022_eMudnJsb1T5",
"iclr_2022_eMudnJsb1T5",
"iclr_2022_eMudnJsb1T5",
"iclr_2022_eMudnJsb1T5"
] |
iclr_2022_MQ2sAGunyBP | R4D: Utilizing Reference Objects for Long-Range Distance Estimation | Estimating the distance of objects is a safety-critical task for autonomous driving. Focusing on short-range objects, existing methods and datasets neglect the equally important long-range objects. In this paper, we introduce a challenging and under-explored task, which we refer to as Long-Range Distance Estimation, as... | Accept (Poster) | This paper focuses on using reference objects for long distance estimation by introducing a novel dataset and an attention based learning framework. While the presentation flows well and the methodology is practically useful, it is only marginally significant and novel. Some of the practical data augmentation aspect ra... | test | [
"iphPOR0-38C",
"J2yUuRE9_Hi",
"abxRJimPgD",
"pRBFqBROEij",
"4uhUw6zLlW0",
"a8Te_4TAGZQ",
"XgLyfkmejsX",
"vEi2qMZuGNg",
"nlSIGPghjs1",
"cXg62QqKcIP",
"CqrqsfEGVHm",
"xsbUyAZpjE",
"Rz7iaJvl05F"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your quick followup on this issue!\n\nWe would like to first clarify that the goal of our distance augmentation is different from traditional data augmentations. Traditional data augmentation (such as flipping) is to increase the amount of data. In contrast, the goal of our distance augmentation is ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
3
] | [
"abxRJimPgD",
"a8Te_4TAGZQ",
"4uhUw6zLlW0",
"xsbUyAZpjE",
"cXg62QqKcIP",
"Rz7iaJvl05F",
"a8Te_4TAGZQ",
"CqrqsfEGVHm",
"4uhUw6zLlW0",
"iclr_2022_MQ2sAGunyBP",
"iclr_2022_MQ2sAGunyBP",
"iclr_2022_MQ2sAGunyBP",
"iclr_2022_MQ2sAGunyBP"
] |
iclr_2022_3Pbra-_u76D | Rethinking Network Design and Local Geometry in Point Cloud: A Simple Residual MLP Framework | Point cloud analysis is challenging due to irregularity and unordered data structure. To capture the 3D geometries, prior works mainly rely on exploring sophisticated local geometric extractors, using convolution, graph, or attention mechanisms. These methods, however, incur unfavorable latency during inference and the... | Accept (Poster) | This paper proposes a new architecture for point cloud processing, with good empirical results. All reviewers recommended accept. AC does not see a reason to overturn the consensus. | train | [
"2ORO3cl0X1",
"iP7AV0_tHvj",
"5kARtnZFutg",
"6wM5o5w8VfA",
"QkaQz39twbv",
"mgRm2TeP-U",
"1Hfdr0hoLFU",
"B1yLuahGL6Y",
"pZmOWixXXlc",
"NCrJr9jjg03",
"q0BxcHbDJBP",
"ch2rkefPAu2",
"FRzJ3vZUF9Q",
"F-nIwMtxY1",
"1zRUHGssZCD"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewer 7Y1Q,\n\nThanks for agreeing with the idea and motivation of our work.\n\nBesides, we appreciate the suggestions for additional experiments (added in the appendix) to improve our work.\n\nBest,\nAuthors",
" Dear Reviewer vGXQ, \n\nThanks for your constructive suggestions and the score increase. \n... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"pZmOWixXXlc",
"mgRm2TeP-U",
"6wM5o5w8VfA",
"NCrJr9jjg03",
"iclr_2022_3Pbra-_u76D",
"F-nIwMtxY1",
"QkaQz39twbv",
"iclr_2022_3Pbra-_u76D",
"iclr_2022_3Pbra-_u76D",
"ch2rkefPAu2",
"pZmOWixXXlc",
"FRzJ3vZUF9Q",
"1zRUHGssZCD",
"1Hfdr0hoLFU",
"iclr_2022_3Pbra-_u76D"
] |
iclr_2022_fExcSKdDo_ | Learning to Dequantise with Truncated Flows | Dequantisation is a general technique used for transforming data described by a discrete random variable $x$ into a continuous (latent) random variable $z$, for the purpose of it being modeled by likelihood-based density models. Dequantisation was first introduced in the context of ordinal data, such as image pixel val... | Accept (Poster) | The paper proposes a variational dequantization method for categorical data, based on flows with learned truncated support. The problem has been studied before, but the paper makes it clear how the proposed method differs from existing ones. The method is empirically evaluated on a large variety of diverse tasks.
The ... | train | [
"ymtAxZtDDeZ",
"RnqJqu2czbq",
"UQWO743VO-X",
"6B-APfxKsLF",
"s9Zc377keSu",
"gH_tmUgEOyP",
"kzbgkU427l",
"ozWMkHg-8dn",
"Ntp2KMo6RHA",
"14pWIdaJso",
"bFNKd7RRbpH",
"3b6Na5DVIog",
"BRlHWfhAKM0",
"4CkbbBNKps7",
"01-talYREw2",
"3b2M089LTwD",
"eakXG7g8ypD",
"niqCYV-b7eU",
"NgtHmtOVjqV... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" > If the prior for different categories are too \"close\" to each other, truncation will be hard to perform and could potentially \"cut off\" part of the high-probability region in the latent space. Will this affect the performance? If yes, by how much.\n\nWe have the following interpretations of your question. P... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"UQWO743VO-X",
"iclr_2022_fExcSKdDo_",
"s9Zc377keSu",
"s9Zc377keSu",
"gH_tmUgEOyP",
"eakXG7g8ypD",
"Ntp2KMo6RHA",
"iclr_2022_fExcSKdDo_",
"14pWIdaJso",
"bFNKd7RRbpH",
"01-talYREw2",
"01-talYREw2",
"eakXG7g8ypD",
"niqCYV-b7eU",
"ozWMkHg-8dn",
"ozWMkHg-8dn",
"RnqJqu2czbq",
"1ih6joVD9... |
iclr_2022_shbAgEsk3qM | Understanding and Leveraging Overparameterization in Recursive Value Estimation | The theory of function approximation in reinforcement learning (RL) typically considers low capacity representations that incur a tradeoff between approximation error, stability and generalization. Current deep architectures, however, operate in an overparameterized regime where approximation error is not necessarily ... | Accept (Poster) | This paper presents a study of the over parametrization of linear representations in the context of recursive value estimation.
The reviewers could not reach a consensus over the quality of the paper, with a fairly wide range of scores even after the rebuttal.
After considering the paper, the rebuttal, and the discus... | train | [
"Adxlxc87_nA",
"M6JVhuNqKA",
"ZKLx0ZXNuDN",
"LKzKZisqbyn",
"TB7XATD_ahr",
"fQ-pXuUP_dS",
"PCC5MxkOVJs",
"iNIU_QMvPr",
"-ZJis1_lC6x",
"6Q-_fhUCJzR",
"hjdnmRfHeQ",
"5adU68CmhsU",
"xYe3F0Q0r6P",
"ULl94e4EnrI",
"0FAepvklru1",
"XEMtLxJd0a2",
"0N9vgW1R9oE",
"fmdaoPyRQsh",
"PPIANp708jl"... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_r... | [
" Thank you for the response, and it helps me understand the paper better.",
" We are wondering if your question has been answered yet. Please let me know if you need more details. ",
" We are wondering if your question has been answered yet. Please let me know if you need more details. ",
" We are glad that ... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"ZKLx0ZXNuDN",
"cxqmoJNT2ip",
"6Q-_fhUCJzR",
"fQ-pXuUP_dS",
"iclr_2022_shbAgEsk3qM",
"PCC5MxkOVJs",
"iNIU_QMvPr",
"llruq3ylkl",
"xYe3F0Q0r6P",
"hjdnmRfHeQ",
"ULl94e4EnrI",
"iclr_2022_shbAgEsk3qM",
"XEMtLxJd0a2",
"E77k2ZiMCeA",
"cxqmoJNT2ip",
"5adU68CmhsU",
"TB7XATD_ahr",
"yf46jvI8j... |
iclr_2022_SIKV0_MrZlr | Auto-Transfer: Learning to Route Transferable Representations | Knowledge transfer between heterogeneous source and target networks and tasks has received a lot of attention in recent times as large amounts of quality labeled data can be difficult to obtain in many applications. Existing approaches typically constrain the target deep neural network (DNN) feature representations to ... | Accept (Poster) | This paper tackles the problem of how to utilize a network from the source domain to benefit target domain training in terms of sample/training efficiency. In contrast to prior methods (e.g. that perform fine-tuning or distillation), this paper poses it as a bandit problem that decides how to wire intermediate represen... | train | [
"q4RvH55mJJP",
"qw0R4lp5ZbF",
"qEKbc4FiWQI",
"UzmeaPbCo7-",
"Byfjfp1uLoZ",
"2hFsUMYBun",
"EpDd3QW34QO",
"WQ9k_Wj3ulE",
"Uvy1x9Wmb7n",
"YU8nSuHXVz_",
"k7wJaxDvjLh",
"n_30gwlw2ri",
"hwDFUrvdQny",
"JUDBGuvpzs",
"o5q1IQMZMF",
"BMeQuH0y89H",
"eyxKGVUzK03",
"UG_FOeJdUOv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
" Dear Reviewers,\n\nWe are glad that you have all increased your scores and find our answers and additional results to be helpful. We are grateful for the suggestions that have led to an improved paper.\n\nThank you,\n\nAuthors",
"This work addresses the challenge of automatic knowledge transfer between differen... | [
-1,
6,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2022_SIKV0_MrZlr",
"iclr_2022_SIKV0_MrZlr",
"iclr_2022_SIKV0_MrZlr",
"UG_FOeJdUOv",
"iclr_2022_SIKV0_MrZlr",
"BMeQuH0y89H",
"2hFsUMYBun",
"Uvy1x9Wmb7n",
"hwDFUrvdQny",
"iclr_2022_SIKV0_MrZlr",
"iclr_2022_SIKV0_MrZlr",
"o5q1IQMZMF",
"Byfjfp1uLoZ",
"iclr_2022_SIKV0_MrZlr",
"k7wJaxDvj... |
iclr_2022_X_ch3VrNSRg | EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits | In this paper, we propose a novel neural exploration strategy in contextual bandits, EE-Net, distinct from the standard UCB-based and TS-based approaches. Contextual multi-armed bandits have been studied for decades with various applications. To solve the exploitation-exploration tradeoff in bandits, there are three ma... | Accept (Spotlight) | Summary: This paper studies the neural contextual bandit problem, and proposes a neural-based bandit approach with a novel exploration strategy, called EE-Net. Besides utilizing a neural network (Exploitation network) to learn the reward function, EE-Net also uses another neural network (Exploration network) to adaptiv... | train | [
"3cFODH9cKmx",
"iSNUovoraCL",
"zKDsGIvBHlk",
"1rg6pL5qNQo",
"izPXqODop7I",
"rXTHtqJfzN",
"XGDJTSKJhB",
"vx9EmMoqsFW",
"2ObMbZglVJF",
"hICb2wh4MMS",
"oTNlW5zwbMK",
"0CBKIp6qqib",
"ry2IO10MGD"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the neural contextual bandit problem, and proposes a neural-based bandit approach with a novel exploration strategy, called EE-Net. Besides utilizing a neural network (Exploitation network) to learn the reward function, EE-Net also uses another neural network (Exploration network) to adaptively ... | [
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2
] | [
"iclr_2022_X_ch3VrNSRg",
"1rg6pL5qNQo",
"iclr_2022_X_ch3VrNSRg",
"vx9EmMoqsFW",
"rXTHtqJfzN",
"hICb2wh4MMS",
"vx9EmMoqsFW",
"zKDsGIvBHlk",
"ry2IO10MGD",
"3cFODH9cKmx",
"0CBKIp6qqib",
"iclr_2022_X_ch3VrNSRg",
"iclr_2022_X_ch3VrNSRg"
] |
iclr_2022_tyTH9kOxcvh | Modeling Label Space Interactions in Multi-label Classification using Box Embeddings | Multi-label classification is a challenging structured prediction task in which a set of output class labels are predicted for each input. Real-world datasets often have natural or latent taxonomic relationships between labels, making it desirable for models to employ label representations capable of capturing such tax... | Accept (Poster) | The paper studies multi label classification problem. Particularly, they introduce multi-label box model, which uses probabilistic semantics of box embeddings, representing labels as boxes instead of vectors. Their model is evaluated extensively on 12 datasets, and reviewers agreed the paper was well written and well m... | train | [
"rcEhRIFnX9P",
"Zf8FihYfGnX",
"VCucljum6qr",
"ES6faJdegdT",
"t2iJkY8JKtC",
"hfIeS2VGfwO",
"TJp-UElJ8_2",
"4VYv2MiuV7x",
"NBW16VZ8gJE",
"PaPiA00qd_C",
"23ZkdrLVEFq",
"_MDWDPmJlQz",
"Ge5FcWjafl",
"D6itKlu07NK",
"8Uv3jtemNpR",
"xwKueoJDAyH",
"G7e_fde4u5_",
"NWk8l7Rwe2E",
"BDjPqT77la... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_... | [
" It has to be noted that box embeddings have not been used for the task of multi-label classification before. There are no \"obvious\" published baselines that use box embeddings for this task. So the implementation of any box baseline would require careful design decisions and would in itself be a novel contribut... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"Zf8FihYfGnX",
"VCucljum6qr",
"Ge5FcWjafl",
"_MDWDPmJlQz",
"hfIeS2VGfwO",
"TJp-UElJ8_2",
"4VYv2MiuV7x",
"j6mpJdsVn3t",
"PaPiA00qd_C",
"SQ4qv9azxJk",
"NHAj0hKyoTw",
"23ZkdrLVEFq",
"8Uv3jtemNpR",
"SQ4qv9azxJk",
"xwKueoJDAyH",
"G7e_fde4u5_",
"NWk8l7Rwe2E",
"BDjPqT77la",
"iclr_2022_t... |
iclr_2022_e42KbIw6Wb | Pix2seq: A Language Modeling Framework for Object Detection | We present Pix2Seq, a simple and generic framework for object detection. Unlike existing approaches that explicitly integrate prior knowledge about the task, we cast object detection as a language modeling task conditioned on the observed pixel inputs. Object descriptions (e.g., bounding boxes and class labels) are exp... | Accept (Poster) | This paper proposes an elegant approach to object detection where an encoder network reads in an image and a decoder network outputs coordinate and category information via a sequence of textual tokens. This method does away with several object detection specific details and tricks such as region proposals and ROI pool... | train | [
"sXRh3Klqd4F",
"hZsAcAVohpf",
"u1xJJVwa74P",
"qjOWFUq-lfI",
"6dqKVt23OE5",
"FdisPVfpbr",
"yN4ZkW_mm-S",
"i5Y5KMGFAOT",
"fziafzv1moH",
"Vbkb6Fy7VTT",
"UtpqbJrO89",
"jhkgaJu80bk",
"AABuSBIMS6",
"4Uldih_5KR7",
"-2xGGXZRU0W",
"HdAwxeThQ26",
"sjdDRsm_s2f",
"8oEnnd12_9F",
"HULd3IVgdZ",... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" We thank the reviewer for further feedback. Re *multi-scale inference* question - to deal with different image sizes at inference, we currently use a simple strategy, which is to rescale images so that the longer side is 1333. This is similar to, but simpler than, the strategy that baselines used (scaling shorter... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3
] | [
"hZsAcAVohpf",
"HdAwxeThQ26",
"i5Y5KMGFAOT",
"4Uldih_5KR7",
"iclr_2022_e42KbIw6Wb",
"yN4ZkW_mm-S",
"Vbkb6Fy7VTT",
"fziafzv1moH",
"AABuSBIMS6",
"UtpqbJrO89",
"jhkgaJu80bk",
"lsaPBMcM_fE",
"SwMMUxLXVNQ",
"-2xGGXZRU0W",
"6dqKVt23OE5",
"sjdDRsm_s2f",
"jG38UnVZbin",
"HULd3IVgdZ",
"icl... |
iclr_2022_ZBESeIUB5k | Stochastic Training is Not Necessary for Generalization | It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks. In this work, we demonstrate that non-stochastic full-batch training can achieve comparably strong performance to SGD on CIFAR-10 using modern architectures. To this end... | Accept (Poster) | This paper carefully shows how all the stochastic elements in neural network training could be removed (by using full batch, and a dataset with fixed augmentation) and still maintain good performance, by adjusting hyper-parameters and adding explicit regularization.
All reviewers were eventually positive and recommend... | test | [
"IIskPtqr_f",
"MeGiR6F5bOd",
"PFzbfwcb-lt",
"lD6Ws9ofzh5",
"PcX9vy1Wnzl",
"_3InxIo7T6_",
"YVgZK81vMcZ",
"oI0DhBEjIrs",
"SF9zOL4Huqp",
"RjQuEfsIFUX",
"5pM-TCEeJCx",
"B2cCMbwMbyG",
"nFDWNY7-6LC",
"0VQsfMZUMj2",
"C0M__nURsqj",
"Ef1wSj7NPOh"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" This is an interesting question, thank you for the feedback. \nWe ran some experiments with cudnn determinism and double precision in earlier stages of the project (and still have these options in our code submission), but did not find a noticable difference between non-deterministic/deterministic and float32/flo... | [
-1,
-1,
5,
-1,
6,
6,
-1,
-1,
-1,
10,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
3,
-1,
4,
3,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
4
] | [
"MeGiR6F5bOd",
"nFDWNY7-6LC",
"iclr_2022_ZBESeIUB5k",
"B2cCMbwMbyG",
"iclr_2022_ZBESeIUB5k",
"iclr_2022_ZBESeIUB5k",
"oI0DhBEjIrs",
"SF9zOL4Huqp",
"5pM-TCEeJCx",
"iclr_2022_ZBESeIUB5k",
"_3InxIo7T6_",
"PFzbfwcb-lt",
"Ef1wSj7NPOh",
"RjQuEfsIFUX",
"PcX9vy1Wnzl",
"iclr_2022_ZBESeIUB5k"
] |
iclr_2022_0jP2n0YFmKG | Towards Training Billion Parameter Graph Neural Networks for Atomic Simulations | Recent progress in Graph Neural Networks (GNNs) for modeling atomic simulations has the potential to revolutionize catalyst discovery, which is a key step in making progress towards the energy breakthroughs needed to combat climate change. However, the GNNs that have proven most effective for this task are memory inten... | Accept (Poster) | The reviewers were split about this paper: on one hand they appreciated the clarity and the experimental improvments in the paper, on the other they were concerned about the novelty of the work. After going through it and the discussion I have decided to vote to accept this paper for the following reasons: (a) the pote... | train | [
"_3WVdW3Hugt",
"KJ7dFNUfXQE",
"GAfjPs49I4C",
"bJonGe8oNeE",
"e4_zwYA5_Gw",
"odEqRRYpCy",
"GWEtqjnO37t",
"L4tV4btWbhk",
"8KEgt0X7yOf",
"WVkKS2gU1-f",
"lIV7Mz01KKp"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method to train large scale graph neural networks containing up to billions of parameters, called Graph Parallelism. The method is used to train large scale versions of the DimeNet++ and GemNet models containing 10-20x more parameters that the vanilla versions. These large GNN models are eval... | [
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
2,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1
] | [
"iclr_2022_0jP2n0YFmKG",
"L4tV4btWbhk",
"iclr_2022_0jP2n0YFmKG",
"odEqRRYpCy",
"lIV7Mz01KKp",
"GAfjPs49I4C",
"WVkKS2gU1-f",
"_3WVdW3Hugt",
"iclr_2022_0jP2n0YFmKG",
"iclr_2022_0jP2n0YFmKG",
"iclr_2022_0jP2n0YFmKG"
] |
iclr_2022_bTteFbU99ye | Evaluating Distributional Distortion in Neural Language Modeling | A fundamental characteristic of natural language is the high rate at which speakers produce novel expressions. Because of this novelty, a heavy-tail of rare events accounts for a significant amount of the total probability mass of distributions in language (Baayen, 2001). Standard language modeling metrics such as perp... | Accept (Poster) | This paper presents a very interesting study of using an artificial language (generated using a specific algorithm via a transformer model) and training SOTA transformer and LSTM language models on that language; the authors show that these LMs underestimate the probability of sequences from this language and overesti... | train | [
"T0PGEov75k",
"jbZABqhe34",
"FA-ctvd4wHG",
"-wWh7RbMtgt",
"v46Dk3ZJ8ZJ",
"T-BGhxgOtr",
"IDhL0MQ1pV",
"X5ebt6slUVZ",
"g7CJ-BYsP9B",
"fsoWrQkLwh"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for your detailed answers to my questions and for your additions to the paper and to experiments. I aim raising my score and hope the paper will get accepted.",
"The paper conducts experiments to evaluate whether neural sequence models such as LSTMs and Transformers are able to correctly assess the pr... | [
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5
] | [
"g7CJ-BYsP9B",
"iclr_2022_bTteFbU99ye",
"IDhL0MQ1pV",
"iclr_2022_bTteFbU99ye",
"iclr_2022_bTteFbU99ye",
"-wWh7RbMtgt",
"-wWh7RbMtgt",
"fsoWrQkLwh",
"jbZABqhe34",
"iclr_2022_bTteFbU99ye"
] |
iclr_2022_p0rCmDEN_- | Visual hyperacuity with moving sensor and recurrent neural computations | Dynamical phenomena, such as recurrent neuronal activity and perpetual motion of the eye, are typically overlooked in models of bottom-up visual perception. Recent experiments suggest that tiny inter-saccadic eye motion ("fixational drift") enhances visual acuity beyond the limit imposed by the density of retinal pho... | Accept (Poster) | This paper explores the idea that fixational drift of a sensor over an image (something that primate eyes do) could be used to achieve visual hyperacuity, i.e. image recognition with low resolution images equivalent to what would be achieved with high resolution images. The authors construct networks where the bottom o... | train | [
"7o4FsS0ZSj",
"IVUcL2GtItb",
"fAqBg-Yg2Po",
"3asPPyLPbXB",
"HpBkg1ck9RT",
"_5Jj0m4i9a8",
"0u2dyzsPLnW",
"d2A1OhlMWw5",
"IJg5w_sD6UF",
"ufa9S17zTf5",
"mhUbYDAaTqZ",
"xs4axjA2zxZ",
"ncWhjtl_h_r",
"GKJ2i8DJcz",
"hRJhBlsk0e"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed responses!\n\nVerifying that the teacher is not the crucial ingredient is great.\n\nI like the new figure S10 as a start to bridge to biology, although I think more could be done, e.g.: test _directly_ if the proposed model's saccades match those observed experimentally (e.g. test on t... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
10,
3
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"3asPPyLPbXB",
"iclr_2022_p0rCmDEN_-",
"hRJhBlsk0e",
"HpBkg1ck9RT",
"IVUcL2GtItb",
"GKJ2i8DJcz",
"ncWhjtl_h_r",
"iclr_2022_p0rCmDEN_-",
"xs4axjA2zxZ",
"mhUbYDAaTqZ",
"iclr_2022_p0rCmDEN_-",
"ncWhjtl_h_r",
"iclr_2022_p0rCmDEN_-",
"iclr_2022_p0rCmDEN_-",
"iclr_2022_p0rCmDEN_-"
] |
iclr_2022_xZ6H7wydGl | Robust and Scalable SDE Learning: A Functional Perspective | Stochastic differential equations provide a rich class of flexible generative
models, capable of describing a wide range of spatio-temporal processes. A host
of recent work looks to learn data-representing SDEs, using neural networks and
other flexible function approximators. Despite these advances, learning remains
co... | Accept (Poster) | This paper proposes an importance-sampling estimator for probabilities of observations of SDEs. The proposed approach has several advantages over conventional methods: it does not require an SDE solver, it has lower gradient variance, and shows nice results with a Gaussian process representation of the function. Review... | train | [
"7FoUseLCt_",
"rqr0DVWP9rc",
"qZQRwJsJl7U",
"x2EnVMVZPH",
"BkhcH-LFP8",
"ZPOcq4Lk4GJ",
"2hfPAOi9nIT",
"DLqmD9RIs3g",
"-ZdDDQBLqQ",
"SVwyiv3Ota9",
"7Uf2qI1LJjl"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. After following the discussions concerning the points raised by the other authors, I have decided to stand by my original overall score.",
" It has been mentioned by the reviewers that our experiments are fairly small-scale. However, we would like to point out that learning SDEs from o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2
] | [
"qZQRwJsJl7U",
"ZPOcq4Lk4GJ",
"SVwyiv3Ota9",
"7Uf2qI1LJjl",
"-ZdDDQBLqQ",
"2hfPAOi9nIT",
"DLqmD9RIs3g",
"iclr_2022_xZ6H7wydGl",
"iclr_2022_xZ6H7wydGl",
"iclr_2022_xZ6H7wydGl",
"iclr_2022_xZ6H7wydGl"
] |
iclr_2022_JfaWawZ8BmX | Anisotropic Random Feature Regression in High Dimensions | In contrast to standard statistical wisdom, modern learning algorithms typically find their best performance in the overparameterized regime in which the model has many more parameters than needed to fit the training data. A growing number of recent works have shown that random feature models can offer a detailed theor... | Accept (Poster) | This paper extends recent and very active literature on analyzing learning algorithms in the simplified setting of Gaussian data and model weights, with the main generalization being to allow for non-isotropic covariance matrices. The main technical results seem to be correct and slightly novel, though reviewers feel ... | test | [
"et8GLZMNxqK",
"rdnsDZ3G_qR",
"akhphkIiH6y",
"ZXWB9yxwR5S",
"OTTtlfgBF_r",
"xW1mHTLx-Lj",
"awrG0s-Krbb",
"UfF0sovie7n",
"X_hRa-L0i1l",
"1Ch9_bQsIqV",
"n-gTeT6Atq",
"lUoj1l2Byvr",
"5s2-nb_79tV",
"ITH479BHa9U",
"vxWjAY6al9b",
"zS8y52I41i8",
"IxMrNq9D7O",
"xgl6_EBxRHi",
"aubg5-FSvw"... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewers for their additional feedback and suggestions. As a small update, we wanted to note that we have now mapped out the correspondence between our quantities and the observables defined by d’Ascoli et al. ‘21; Loureiro et al. ‘21; etc. The final manuscript will include a detailed derivation of ... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_JfaWawZ8BmX",
"iclr_2022_JfaWawZ8BmX",
"iclr_2022_JfaWawZ8BmX",
"OTTtlfgBF_r",
"xW1mHTLx-Lj",
"5s2-nb_79tV",
"UfF0sovie7n",
"X_hRa-L0i1l",
"rdnsDZ3G_qR",
"rdnsDZ3G_qR",
"rdnsDZ3G_qR",
"akhphkIiH6y",
"akhphkIiH6y",
"akhphkIiH6y",
"aubg5-FSvw",
"xgl6_EBxRHi",
"xgl6_EBxRHi",
... |
iclr_2022_OtEDS2NWhqa | Using Graph Representation Learning with Schema Encoders to Measure the Severity of Depressive Symptoms | Graph neural networks (GNNs) are widely used in regression and classification problems applied to text, in areas such as sentiment analysis and medical decision-making processes. We propose a novel form for node attributes within a GNN based model that captures node-specific embeddings for every word in the vocabulary.... | Accept (Poster) | This paper presents a graph neural network model to predict the severity of depression symptoms from text. It proposes to construct a graph with word nodes and use schema structure to capture the context information in the text. A schema encoder is introduced for modeling the constructed graphs.
Strength:
- Interestin... | train | [
"zFG32mu6wt",
"-BojhDWM0tj",
"NmwR-ldyTiE",
"4l06k-eoxDq",
"hHHUuNX4NZw",
"asjLczXpShi",
"kRtmMlLhViu",
"PH1XmYg5_H",
"4Sx0A0MTVwX",
"lkoJYLhhrdZ"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
" Thank you for your comments. Below you can find our detailed response.\n\n1\n\t\nThe model [1] learns a different text input. The text features used for depression prediction were extracted from both clinical questions and the corresponding patients' responses. If any one of the questions is not referred to durin... | [
-1,
-1,
-1,
5,
-1,
-1,
6,
-1,
-1,
8
] | [
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
3
] | [
"NmwR-ldyTiE",
"PH1XmYg5_H",
"4Sx0A0MTVwX",
"iclr_2022_OtEDS2NWhqa",
"iclr_2022_OtEDS2NWhqa",
"lkoJYLhhrdZ",
"iclr_2022_OtEDS2NWhqa",
"kRtmMlLhViu",
"4l06k-eoxDq",
"iclr_2022_OtEDS2NWhqa"
] |
iclr_2022_JBAZe2yN6Ub | A First-Occupancy Representation for Reinforcement Learning | Both animals and artificial agents benefit from state representations that support rapid transfer of learning across tasks and which enable them to efficiently traverse their environments to reach rewarding states. The successor representation (SR), which measures the expected cumulative, discounted state occupancy un... | Accept (Poster) | This paper introduces a first-occupancy representation for reinforcement learning problems, with potential benefits on problems with non-stationary rewards. The representation is defined analogously to the successor representations, but captures the expected discounted time to first arrive at a state instead of measur... | train | [
"yEo78txdnpm",
"M08YD2--1K7",
"xiudVc1yDE4",
"NFtGzYjJRU-",
"G-Y1XQuNmT9",
"yPo3MPIv37o",
"p9HgT89BfZoj",
"vlQ5OGBO9-",
"km1ApajVPZ0",
"D4Fokun-qU",
"dylH6X6x_8Uw",
"77cXGb_lt29",
"Qqx9LJXpPb",
"CWRN5tKHAYU-",
"eKu80ww9S_M",
"mjIbIbzeVvK",
"mxAVwC0aOt",
"tkSb2qhQMe"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" In order to further strengthen our empirical evaluation of the FF applied to unsupervised RL, we\n1. Ran APF, APS, DDLUS, and DDLUS-G for 20 seeds on the DeepMind control suite Quadruped environment via the same Unsupervised RL Benchmark task set with the downstream \"Quadruped-Run\" task (a very similar set-up t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"G-Y1XQuNmT9",
"NFtGzYjJRU-",
"D4Fokun-qU",
"Qqx9LJXpPb",
"yPo3MPIv37o",
"p9HgT89BfZoj",
"eKu80ww9S_M",
"km1ApajVPZ0",
"mxAVwC0aOt",
"tkSb2qhQMe",
"mjIbIbzeVvK",
"Qqx9LJXpPb",
"iclr_2022_JBAZe2yN6Ub",
"iclr_2022_JBAZe2yN6Ub",
"iclr_2022_JBAZe2yN6Ub",
"iclr_2022_JBAZe2yN6Ub",
"iclr_20... |
iclr_2022_bVuP3ltATMz | Large Language Models Can Be Strong Differentially Private Learners | Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts at applying Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead.
We show that this per... | Accept (Oral) | This work adapts the widely used DP learning algorithm to language models. Reviewers all agreed that this work tackles an important problem with clear motivation and thorough experiments, and achieved strong performance (memory reduction and effectiveness) on NLP tasks. Thus, we recommend an acceptance. | train | [
"9Sh3rCYEIn5",
"zFnJnN4IhRu",
"1xbtfBtLQRU",
"b54-O2KbskO",
"bpnFRplSVeN",
"8yIkiNHNa88",
"5ATxguj-JhH",
"btCni5kq45A",
"KWbnEjuLGs1",
"MGSpgz1HSbx",
"k_aOjZQmn9u",
"Irr7lt2cG7M",
"MKpE1z-zP2_",
"pqum_LFJ2We",
"2MGIgGIaMvi",
"_tmpKxlZDE"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We have updated our draft based on suggestions from reviewers. \n\nTo summarize the most important aspects:\n- We provided more background about DP-Adam in Appendix A and included results comparing DP-SGD against DP-Adam in the new section Appendix S.\n- We included additional results showing that varying $\\delt... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
8,
8
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
5,
3
] | [
"iclr_2022_bVuP3ltATMz",
"2MGIgGIaMvi",
"zFnJnN4IhRu",
"8yIkiNHNa88",
"iclr_2022_bVuP3ltATMz",
"btCni5kq45A",
"k_aOjZQmn9u",
"KWbnEjuLGs1",
"MGSpgz1HSbx",
"bpnFRplSVeN",
"iclr_2022_bVuP3ltATMz",
"5ATxguj-JhH",
"Irr7lt2cG7M",
"_tmpKxlZDE",
"iclr_2022_bVuP3ltATMz",
"iclr_2022_bVuP3ltATMz... |
iclr_2022_6Tk2noBdvxt | Programmatic Reinforcement Learning without Oracles | Deep reinforcement learning (RL) has led to encouraging successes in many challenging control tasks. However, a deep RL model lacks interpretability due to the difficulty of identifying how the model's control logic relates to its network structure. Programmatic policies structured in more interpretable representations... | Accept (Spotlight) | This paper presents an approach to synthesize programmatic policies, utilizing a continuous relaxation of program semantics and a parameterization of the full program derivation tree, to make it possible to learn both the program parameters and program structures jointly using policy gradient without the need to imitat... | test | [
"oaiedsT4OqL",
"wEUz_WcoI2O",
"W0_Yx9s6leH",
"hbekX_8Beh",
"FxrfgZDskzE",
"K2aHBi-6OZ5",
"fcs9mLnuRi",
"Z1zpMUWTGs",
"KgkzEFXz4BD",
"V6q-SCIPL1p",
"t9iVzP_gMtV",
"mfBvVznaxUH",
"O5p4qcwNbUs",
"bIu7ijvZAO",
"FFqIgSgH_tn",
"TAlBwyJ9lI",
"1Lo6dD5-tmK",
"eGHx3eJpZQS"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We are grateful for the reviewers' constructive suggestions that have helped us greatly improve the quality of our manuscript. The added experiments comparing our approach against oracle-guided programmatic RL methods (suggested by all the reviewers) significantly strengthened our claim about the advantages of or... | [
-1,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
5,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_6Tk2noBdvxt",
"iclr_2022_6Tk2noBdvxt",
"Z1zpMUWTGs",
"FFqIgSgH_tn",
"iclr_2022_6Tk2noBdvxt",
"eGHx3eJpZQS",
"FxrfgZDskzE",
"V6q-SCIPL1p",
"bIu7ijvZAO",
"t9iVzP_gMtV",
"mfBvVznaxUH",
"O5p4qcwNbUs",
"wEUz_WcoI2O",
"eGHx3eJpZQS",
"TAlBwyJ9lI",
"1Lo6dD5-tmK",
"FxrfgZDskzE",
... |
iclr_2022_eW5R4Cek6y6 | On Predicting Generalization using GANs | Research on generalization bounds for deep networks seeks to give ways to predict test error using just the training dataset and the network parameters. While generalization bounds can give many insights about architecture design, training algorithms etc., what they do not currently do is yield good predictions for act... | Accept (Spotlight) | The paper demonstrates that test error of image classification models can be accurately estimated using samples generated by a GAN. Surprisingly, this relatively simple proposed method outperforms existing approaches including ones from recent competitions. All reviewers agree this is a very interesting finding, even t... | train | [
"vzAtOVRdRt",
"jeb7_CGi1J",
"nHgNe8VfUPi",
"oslrPc9rEOp",
"C7eA4QkWEW5",
"sB61qUVNz7v",
"a4xuqG4PQZi",
"_a-0PYhXlyt",
"yZyNnk1oDRq",
"rqkgkFEVcpY",
"PxxKFE8HH_",
"oZliMbVsSpc"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" In the PGDL competition, the competitors do not know the dataset beforehand so they could not have used a pre-trained GAN to generate test data. I suppose data augmentation was meant as a proxy for the task. In real-world applications, however, you do know the dataset so the proposed method can easily be used.",
... | [
-1,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
8,
5
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nHgNe8VfUPi",
"iclr_2022_eW5R4Cek6y6",
"yZyNnk1oDRq",
"rqkgkFEVcpY",
"iclr_2022_eW5R4Cek6y6",
"iclr_2022_eW5R4Cek6y6",
"jeb7_CGi1J",
"C7eA4QkWEW5",
"PxxKFE8HH_",
"oZliMbVsSpc",
"iclr_2022_eW5R4Cek6y6",
"iclr_2022_eW5R4Cek6y6"
] |
iclr_2022_EnwCZixjSh | On Evaluation Metrics for Graph Generative Models | In image generation, generative models can be evaluated naturally by visually inspecting model outputs. However, this is not always the case for graph generative models (GGMs), making their evaluation challenging. Currently, the standard process for evaluating GGMs suffers from three critical limitations: i) it does no... | Accept (Poster) | The paper argues that existing evaluation metrics for GGMs are insufficient and perform an extensive empirical study questioning their ability to measure the diversity and fidelity of the generated graphs. To solve these limitations, they propose a new evaluation metric that computes the Maximum Mean Discrepancy (MMD) ... | train | [
"XXk9BEWK6Nk",
"hBXuficzlE",
"fzej2B-fTO7",
"AcX8CT2yQtv",
"AfQlyQVcY0o",
"E6qdvvxL-f-",
"xdYm6SVKLC",
"wKxDcUMaId",
"8Ol_4-b5n29",
"aXniUo-01og",
"wTMLp72AAB-",
"cqDG4jnjV6",
"M6mTApDk8RQ",
"YUJGsqYvm2",
"CQ4fJQu8FUb",
"5CR25VqWAwH",
"5L3usKA0EkY",
"hWrgdMidOXC",
"faAjGM1bzh",
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
"The paper proposes the use of an untrained graph neural network (GNN) to generate a graph embedding which is used with other measures to evaluate Generative Graph Models (GGMs). The main advantages of this evaluation process are the use of a single score, the inclusion of node and edge features, and its empirical ... | [
6,
-1,
-1,
-1,
6,
8,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2022_EnwCZixjSh",
"xdYm6SVKLC",
"aXniUo-01og",
"CQ4fJQu8FUb",
"iclr_2022_EnwCZixjSh",
"iclr_2022_EnwCZixjSh",
"faAjGM1bzh",
"iclr_2022_EnwCZixjSh",
"iclr_2022_EnwCZixjSh",
"cqDG4jnjV6",
"iclr_2022_EnwCZixjSh",
"YUJGsqYvm2",
"AfQlyQVcY0o",
"5L3usKA0EkY",
"M6mTApDk8RQ",
"5L3usKA0Ek... |
iclr_2022_5ECQL05ub0J | Resonance in Weight Space: Covariate Shift Can Drive Divergence of SGD with Momentum | Most convergence guarantees for stochastic gradient descent with momentum (SGDm) rely on iid sampling. Yet, SGDm is often used outside this regime, in settings with temporally correlated input samples such as continual learning and reinforcement learning. Existing work has shown that SGDm with a decaying step-size can... | Accept (Poster) | This paper studies online learning using SGD with momentum for nonstationary data. For the specific setting of linear regression with Gaussian noise and oscillatory covariate shift, a linear oscillator ODE is derived that describes the dynamics of the learned parameters. This then allows analysis of convergence/diverge... | train | [
"U9QSZX3cGnc",
"kPCx30EVYr3",
"r2yE0xRv4Xg",
"Iu1fnUOauBE",
"Dvk4OjI3D3E",
"wnR9XUKjbCt",
"xTGTxofHSNg",
"TPXtBRZQ9Cv",
"q3SgZ4WL8VU",
"aDqd23xbDGf",
"fdo5MY055Gm",
"TysxODi0SIi",
"5HCIZdOkSDr",
"C8lJCnsuOxX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your willingness to discuss. \n\nA small tangent: we also appreciate your comments regarding our work's \"exemplary scientific format.\" From the very beginning of this project, we strove towards the _phenomenon --> hypothesis --> analysis_ structure you described. So it is helpful feedback (and ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
4
] | [
"kPCx30EVYr3",
"aDqd23xbDGf",
"Iu1fnUOauBE",
"TPXtBRZQ9Cv",
"iclr_2022_5ECQL05ub0J",
"C8lJCnsuOxX",
"5HCIZdOkSDr",
"fdo5MY055Gm",
"fdo5MY055Gm",
"TysxODi0SIi",
"iclr_2022_5ECQL05ub0J",
"iclr_2022_5ECQL05ub0J",
"iclr_2022_5ECQL05ub0J",
"iclr_2022_5ECQL05ub0J"
] |
iclr_2022_mwdfai8NBrJ | Policy Smoothing for Provably Robust Reinforcement Learning | The study of provable adversarial robustness for deep neural networks (DNNs) has mainly focused on $\textit{static}$ supervised learning tasks such as image classification. However, DNNs have been used extensively in real-world $\textit{adaptive}$ tasks such as reinforcement learning (RL), making such systems vulnerabl... | Accept (Poster) | The reviewers appreciated the treatment of the topic of certifiable robustness done in this work and although they had a number of concerns, I feel they were adequately addressed by the authors. | train | [
"QU6w3v6ueDe",
"vZTg-RnNNwe",
"Os_ATDz31Ho",
"ebytaUM5sE",
"7Njg2cEX6c5",
"4mwzFBfSDT",
"dbfyZmGJw9B",
"AAxMsFTZ-Im",
"GyTPvGWUeGL",
"WghfAcYqz_S",
"xUzb2EOr4Sc",
"8kGFB-njjf_",
"YRHPcFcFjfL",
"k-VguyRRoGq",
"9_ajt8UNqiH",
"k6aZK_oJStv",
"IOscBRa3nxd"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your valuable feedback. We will make sure to include your suggestions in the next update of the manuscript.",
" Apologies for the delay. I updated my review.",
"This paper studies certified robustness of a policy in a reinforcement learning setting. The authors propose a method, similar to rando... | [
-1,
-1,
5,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
2,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
2
] | [
"vZTg-RnNNwe",
"ebytaUM5sE",
"iclr_2022_mwdfai8NBrJ",
"Os_ATDz31Ho",
"IOscBRa3nxd",
"AAxMsFTZ-Im",
"iclr_2022_mwdfai8NBrJ",
"WghfAcYqz_S",
"iclr_2022_mwdfai8NBrJ",
"dbfyZmGJw9B",
"k6aZK_oJStv",
"9_ajt8UNqiH",
"k-VguyRRoGq",
"Os_ATDz31Ho",
"iclr_2022_mwdfai8NBrJ",
"iclr_2022_mwdfai8NBrJ... |
iclr_2022_oj2yn1Q4Ett | Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach | This work develops a novel framework for communication-efficient distributed learning where the models to be learned are overparameterized. We focus on a class of kernel learning problems (which includes the popular neural tangent kernel (NTK) learning as a special case) and propose a novel {\it multi-agent kernel appr... | Accept (Poster) | In this paper, the authors study the decentralized empirical risk minimization problem with Reproducing Kernel Hilbert Space. I found the problem formulation and the solution quite interesting. The authors also answered the main comments of the reviewers. Even though part of the work is incremental, I feel that there i... | train | [
"drfsssxuLgC",
"RpuEb_86Dma",
"7ztGurDSA87Y",
"bW4vGC4WE3m",
"nAkMJhS51K9",
"DmPvm_gDus",
"CmYemKzrjs",
"1n6_-y8skz",
"rtTHLnS-Z-n",
"WEZ5t-dbweM",
"FyTIj2LDrkH",
"NkcuTzMWnGz"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper discusses a random feature-based multi-agent kernel learning approach. For both generalized inner-product (GIP) and random feature (RF) kernels, the authors propose, in each agent, to exchange the random feature matrix (instead of the model parameters). By considering the problem of kernel ridge regress... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"iclr_2022_oj2yn1Q4Ett",
"NkcuTzMWnGz",
"1n6_-y8skz",
"iclr_2022_oj2yn1Q4Ett",
"CmYemKzrjs",
"drfsssxuLgC",
"FyTIj2LDrkH",
"DmPvm_gDus",
"WEZ5t-dbweM",
"RpuEb_86Dma",
"iclr_2022_oj2yn1Q4Ett",
"iclr_2022_oj2yn1Q4Ett"
] |
iclr_2022_Fza94Y8VS4a | The Evolution of Uncertainty of Learning in Games | Learning in games has become an object of intense interest for ML due to its connections to numerous AI architectures. We study standard online learning in games but from a non-standard perspective. Instead of studying the behavior of a single initial condition and whether it converges to equilibrium or not, we study t... | Accept (Poster) | This paper examines the evolution of densities of initial conditions under the multiplicative weights update rule for learning in two-player zero-sum games. Specifically, the authors estimate the differential entropy (DE) of a density of initial conditions as it evolves over time (what they call "uncertainty"), and the... | test | [
"k9igEkaw_Dg",
"pZPYaOeJy1T",
"q0-mApqPYpA",
"gUwbood5Rp",
"ek3NnNOf50Y",
"HMhVpAhFqw_",
"Dg7BcBFBgp",
"nZGk9_7CEAs",
"wB9LUYbvK5_"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I'm raising my score to 7. (but since 7 is not available, I will rate it as 8)",
"This paper extends the existing line of research of the dynamics of multiplicative weights update and similar algorithms for games. It shows that for zero-sum two-player games and for population game... | [
-1,
8,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
2,
2,
3
] | [
"ek3NnNOf50Y",
"iclr_2022_Fza94Y8VS4a",
"wB9LUYbvK5_",
"nZGk9_7CEAs",
"pZPYaOeJy1T",
"Dg7BcBFBgp",
"iclr_2022_Fza94Y8VS4a",
"iclr_2022_Fza94Y8VS4a",
"iclr_2022_Fza94Y8VS4a"
] |
iclr_2022_uSE03demja | RISP: Rendering-Invariant State Predictor with Differentiable Simulation and Rendering for Cross-Domain Parameter Estimation | This work considers identifying parameters characterizing a physical system's dynamic motion directly from a video whose rendering configurations are inaccessible. Existing solutions require massive training data or lack generalizability to unknown rendering configurations. We propose a novel approach that marries doma... | Accept (Oral) | This paper proposes a method to solve the inverse problem of identifying parameters of a dynamic physical system from image observations. The main idea is to train a rendering-invariant state-prediction (RISP), which estimates the inverse mapping from the pixel to the state domain. The authors introduce a new loss to t... | train | [
"FxZqUq3c3u",
"8GDqF67GNLM",
"hVTpDCxIiAE",
"7V-bsUFxuSa",
"C9ZB2AOsUrw",
"A_9tUjFtJsF",
"3hUV3ACjaap",
"S5ZvfNThuq4",
"t_h_QbTeZS",
"3P-jw5LNaiX",
"T_jLJ0tiA_J",
"jhND8oLvNOy"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer paRJ,\n\nThank you for your detailed comments. We are glad to see you appreciate our additional real-world experiment. We agree with the suggestion about the discussion to add and will incorporate it into our final manuscript.",
" Dear authors,\n\nThank you for addressing some of my (minor) concer... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"8GDqF67GNLM",
"C9ZB2AOsUrw",
"jhND8oLvNOy",
"iclr_2022_uSE03demja",
"iclr_2022_uSE03demja",
"T_jLJ0tiA_J",
"3P-jw5LNaiX",
"t_h_QbTeZS",
"3P-jw5LNaiX",
"iclr_2022_uSE03demja",
"iclr_2022_uSE03demja",
"iclr_2022_uSE03demja"
] |
iclr_2022_vqGi8Kp0wM | Mind the Gap: Domain Gap Control for Single Shot Domain Adaptation for Generative Adversarial Networks | We present a new method for one shot domain adaptation. The input to our method is a trained GAN that can produce images in domain A and a single reference image I_B from domain B. The proposed algorithm can translate any output of the trained GAN from domain A to domain B. There are two main advantages of our method com... | Accept (Poster) | This paper proposes a novel method for the single-shot domain adaptation with the help of Generative Adversarial Nets. The proposed method is interesting, novel, and versatile. Moreover, the performance is impressive and better than the existing methods. However, the writing needs some improvement for better readabilit... | train | [
"JCiTfd3oayy",
"bGxxTlrDE6w",
"CFd0YL-Om1",
"oadTRmocKRt",
"qELK3duSfp",
"eL3RrWTJfYz",
"sGL-NDkkit",
"4DYRSFrRFqK",
"g8PF8-dfaLE",
"xybNqxOad2K",
"zvwsTMtzc-5",
"tf6PkzSRCj1",
"m_CKqhFZec2",
"JTPcdZTVt0J"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" The additional quantitative results and the comparisons address my major concerns, hence, I would like to raise the rating to 6. Also, I would suggest the authors to report the results in the main paper (or appendix if there are too many contents).",
"This paper mainly deals with domain transfer tasks in pre-tr... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
4
] | [
"CFd0YL-Om1",
"iclr_2022_vqGi8Kp0wM",
"oadTRmocKRt",
"qELK3duSfp",
"eL3RrWTJfYz",
"sGL-NDkkit",
"bGxxTlrDE6w",
"sGL-NDkkit",
"tf6PkzSRCj1",
"iclr_2022_vqGi8Kp0wM",
"JTPcdZTVt0J",
"xybNqxOad2K",
"bGxxTlrDE6w",
"iclr_2022_vqGi8Kp0wM"
] |
iclr_2022_1NUsBU-7HAL | Map Induction: Compositional spatial submap learning for efficient exploration in novel environments | Humans are expert explorers and foragers. Understanding the computational cognitive mechanisms that support this capability can advance the study of the human mind and enable more efficient exploration algorithms. We hypothesize that humans explore new environments by inferring the structure of unobserved spaces throug... | Accept (Poster) | This paper presents a hierarchical Bayesian approach to exploration in grid worlds. The paper considers the hypothesis that humans maintain a hierarchical representation when exploring a space, where the distribution over unknown space can be modeled with a structured probabilistic program. The paper compares the beh... | train | [
"eNCI5o2dRea",
"6TOnYfTEBaC",
"ynbjbrd2W8A",
"nqDa6SBMdE",
"nGlyCzHSKui",
"6hO-xR2fgD-",
"K8_tjfn83Xj",
"QVoMzmCNtP1",
"6N61_sPVC1R",
"TxJUBhwwNg",
"eZ216iDDav3",
"1UJWbqxSQAi",
"EjrDrnvuyQG"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I am glad to see the results plotted as log likelihoods, and that at least both models are better than uniform-POMCP model. I suggest you update your conclusion in the final version based on the result displayed here. Another way to plot (as a suggestion) is to provide error bars showing the relative difference o... | [
-1,
-1,
6,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"6TOnYfTEBaC",
"6hO-xR2fgD-",
"iclr_2022_1NUsBU-7HAL",
"iclr_2022_1NUsBU-7HAL",
"iclr_2022_1NUsBU-7HAL",
"eZ216iDDav3",
"iclr_2022_1NUsBU-7HAL",
"iclr_2022_1NUsBU-7HAL",
"nqDa6SBMdE",
"ynbjbrd2W8A",
"EjrDrnvuyQG",
"nGlyCzHSKui",
"iclr_2022_1NUsBU-7HAL"
] |
iclr_2022_1xXvPrAshao | Learning Multimodal VAEs through Mutual Supervision | Multimodal VAEs seek to model the joint distribution over heterogeneous data (e.g.\ vision, language), whilst also capturing a shared representation across such modalities. Prior work has typically combined information from the modalities by reconciling idiosyncratic representations directly in the recognition model th... | Accept (Spotlight) | PAPER: This paper introduces a new method to learn joint representations from multimodal data, with potentially missing data. The primary novelty builds from the idea of semi-supervised VAE, introducing the concept of bi-directional information flow, which is termed “mutual supervision”. This approach brings the same a... | train | [
"vQWfd724zf",
"Fv_oXivW4ZD",
"tMlIVxlp67-",
"b264ZzisJrF",
"AaWEqK0UZTH",
"il6LT_RMNVP",
"LhIf2P71-qk",
"R4QkGqU4Wpy",
"e-c_j8rVoB2",
"te3rDj5raTv",
"Ma7SvCDBJuI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes MEME as a new method of multimodal VAE, which is an extension of semi-supervised VAEs and can handle partial train settings. Experimental results show that the proposed method outperforms the conventional methods, MVAE and MMVAE, in both partial and complete settings. Furthermore, the proposed ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2022_1xXvPrAshao",
"Ma7SvCDBJuI",
"iclr_2022_1xXvPrAshao",
"iclr_2022_1xXvPrAshao",
"vQWfd724zf",
"Ma7SvCDBJuI",
"te3rDj5raTv",
"e-c_j8rVoB2",
"iclr_2022_1xXvPrAshao",
"iclr_2022_1xXvPrAshao",
"iclr_2022_1xXvPrAshao"
] |
iclr_2022_DmpCfq6Mg39 | Omni-Dimensional Dynamic Convolution | Learning a single static convolutional kernel in each convolutional layer is the common training paradigm of modern Convolutional Neural Networks (CNNs). Instead, recent research in dynamic convolution shows that learning a linear combination of n convolutional kernels weighted with their input-dependent attentions can... | Accept (Spotlight) | This paper presents ODConv, a convolution pattern which uses attention in the convolutions across all dimensions of the weight tensor.
The paper is well motivated and well explained, easy to follow.
This work is built on top of previous work, but reviewers all agree that the contributions of this paper are significant.... | train | [
"4B1P2MS_ehj",
"EWWR5eU5UO4",
"XQ6_51HDBr8",
"Iqml-r6VKgR",
"jRt24YLfZR5",
"98YWaeS9Dno",
"7Z6wzKGqn8",
"zLNsr8fxVs7",
"tD-BSRq2Xow",
"Y2baeaDeKr5",
"HJfKrH3yGXG",
"-MoQO2wNsF",
"C-MTwNvR_m",
"g7pB7k6gaQI",
"39BXv38pWC",
"hOP7MPDfJCc",
"z4cylznLraa",
"G23e4r5NO-8"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your great efforts to provide detailed responses. I am satisfied with most of the responses. Please revise the final manuscript by reflecting all the reviewers' comments. \n\nIt would be good if the authors could further improve by optimizing the method to outperform the competitors reaching the bas... | [
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"EWWR5eU5UO4",
"z4cylznLraa",
"C-MTwNvR_m",
"jRt24YLfZR5",
"7Z6wzKGqn8",
"iclr_2022_DmpCfq6Mg39",
"zLNsr8fxVs7",
"98YWaeS9Dno",
"hOP7MPDfJCc",
"z4cylznLraa",
"z4cylznLraa",
"98YWaeS9Dno",
"G23e4r5NO-8",
"hOP7MPDfJCc",
"iclr_2022_DmpCfq6Mg39",
"iclr_2022_DmpCfq6Mg39",
"iclr_2022_DmpCf... |
iclr_2022_DrZXuTGg2A- | Shuffle Private Stochastic Convex Optimization | In shuffle privacy, each user sends a collection of randomized messages to a trusted shuffler, the shuffler randomly permutes these messages, and the resulting shuffled collection of messages must satisfy differential privacy. Prior work in this model has largely focused on protocols that use a single round of communic... | Accept (Poster) | This work is on stochastic convex optimization (SCO) in shuffle differential privacy (DP) models. In SCO, a learner receives a convex loss function L: Theta x X -> Reals, where Theta is a d-dimensional vector of parameters and X is a set of data points. The objective is to use samples x1, x2, …, xn to find a parameter ... | train | [
"I9WdSmIMKU",
"TZHfpn8UIr",
"-p8AJZuedPI",
"h_69-BjF_HC",
"HqPwbDDoOpu",
"1ALzCTZMQfT",
"Wx19lkYIzp",
"MeiyynTSaYX",
"95CKaq2WUu8",
"OBx2z-cu-gn",
"lVpN7hNnjeq",
"cvaTh4CpXzT",
"n8tIOJ8jlJe",
"wOicYvwWTsf",
"CwXD9RQRM_",
"3fSXo00_IEc",
"6wB1enReTkq",
"W1nSsXjB7c_",
"QtyeixS_yq6",... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"... | [
" Thanks to the authors for the painstaking clarification regarding the different aspects of the paper. I have a much better understanding of the paper now. \n\nexact dependence on privacy budget in the main theorem: \nI did some calculations on my own today and was about to suggest that it seems possible that the ... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"-p8AJZuedPI",
"iclr_2022_DrZXuTGg2A-",
"h_69-BjF_HC",
"MeiyynTSaYX",
"Wx19lkYIzp",
"iclr_2022_DrZXuTGg2A-",
"95CKaq2WUu8",
"OBx2z-cu-gn",
"cvaTh4CpXzT",
"lVpN7hNnjeq",
"n8tIOJ8jlJe",
"wOicYvwWTsf",
"3fSXo00_IEc",
"CwXD9RQRM_",
"1ALzCTZMQfT",
"TZHfpn8UIr",
"QtyeixS_yq6",
"_zNuydBBR... |
iclr_2022_lyLVzukXi08 | Neural Variational Dropout Processes | Learning to infer the conditional posterior model is a key step for robust meta-learning. This paper presents a new Bayesian meta-learning approach called Neural Variational Dropout Processes (NVDPs). NVDPs model the conditional posterior distribution based on a task-specific dropout; a low-rank product of Bernoulli ex... | Accept (Poster) | This paper proposes a novel model-based Bayesian meta-learning approach that combines a novel conditional dropout posterior a new variational prior for the data-efficient learning and adaptation of deep neural networks. It is applied to tasks such as 1D stochastic regression, image inpainting, and classification.
Over... | train | [
"_oR1TgFwsqo",
"4rHb4dSXsU",
"te38WOGDEh",
"iCPCF5phrX",
"uA6l4Kl2sA2",
"8s4rAPLNXcn",
"wZ0RvQZUbSi",
"gbF8fvBDOPH"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all reviewers for their supportive feedback on this work.\n\nDuring the discussion period, we updated the paper to reflect the opinions of reviewers. \nAlso, many typos and errors were corrected.\n\nThe major changes can be summarized as follows.\n - Updated equation 6 and the description of the mini-bat... | [
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2022_lyLVzukXi08",
"8s4rAPLNXcn",
"gbF8fvBDOPH",
"8s4rAPLNXcn",
"wZ0RvQZUbSi",
"iclr_2022_lyLVzukXi08",
"iclr_2022_lyLVzukXi08",
"iclr_2022_lyLVzukXi08"
] |
iclr_2022_26gKg6x-ie | Adversarial Support Alignment | We study the problem of aligning the supports of distributions. Compared to the existing work on distribution alignment, support alignment does not require the densities to be matched. We propose symmetric support difference as a divergence measure to quantify the mismatch between supports. We show that select discrimi... | Accept (Spotlight) | Thanks for your submission to ICLR.
Three of the four reviewers are ultimately (particularly after discussion) very enthusiastic about the paper, and feel that their concerns have been adequately addressed. The fourth reviewer has not updated his/her score but has indicated that their concerns were at least somewhat ... | train | [
"Us_W__EibPx",
"pzIJvKQ4sIp",
"t2I6YAVk1Ye",
"CDcp6Mc571t",
"SkV32zeqZV4",
"r5MS6bqMqr4",
"_UTYhWpi1dG",
"C56I6Nk4F8R",
"bfCKRcORcl_",
"a26n4fh6cTn",
"ICEhvS4GPC6",
"6n0tFTzHlGx",
"pKz77--Psw",
"o8MAxkkAt3v",
"zwA3JLppkW",
"UTe9Tdv213B",
"vVzbJUjz1yE",
"vtHxXpKG6V6",
"zktYBDeSqaL... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"officia... | [
" We thank the reviewer again for their input and for the opportunity given to us to clarify our contributions. We feel that we have adequately addressed all the concerns expressed about the paper. We respectfully ask that the reviewer revisits their initial assessment to better reflect post discussion evaluation."... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
3
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"CDcp6Mc571t",
"_UTYhWpi1dG",
"CDcp6Mc571t",
"ubaQ26S3J1",
"iclr_2022_26gKg6x-ie",
"SkV32zeqZV4",
"i5fPdTtIHX",
"iclr_2022_26gKg6x-ie",
"zktYBDeSqaL",
"pKz77--Psw",
"fUB56uwS9Vq",
"pKz77--Psw",
"i5fPdTtIHX",
"zktYBDeSqaL",
"zktYBDeSqaL",
"zktYBDeSqaL",
"zktYBDeSqaL",
"zktYBDeSqaL",... |
iclr_2022_B3Nde6lvab | Eliminating Sharp Minima from SGD with Truncated Heavy-tailed Noise | The empirical success of deep learning is often attributed to SGD’s mysterious ability to avoid sharp local minima in the loss landscape, as sharp minima are known to lead to poor generalization. Recently, empirical evidence of heavy-tailed gradient noise was reported in many deep learning tasks; and it was shown in (... | Accept (Poster) | Motivated by empirical observations that SGD performed on deep networks converge to regions of flatter loss curvature relative to large or full batch GD, the authors perform a theoretical analysis of trajectories of SGD with the presence of heavy tailed noise. The primary observation of the theory is that heavy tailed ... | train | [
"BgtePT_YkqV",
"cexaddq5ik0",
"Oa_eFADA6BV",
"Q1kaTyBiGox",
"tsCI4SnP8W1",
"Hkg1Jsa3Gdr",
"wVi3UbJ1r6",
"0W56ufBpDJk",
"g3isRiSd1Cr",
"zmcqvAjtaWM",
"M1zBI-93w8z",
"eOYe101m05L",
"_8AC384ESx",
"DynOEVvx-eM",
"NBry5oAoKQ_",
"goHpAcBnlaH",
"9qzluuzzPDd",
"gVkpAXBWCL",
"lKy87s7meOp"... | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",... | [
" (Continuing the Above Comment)\n\n**Q: One more experiment I think is crucial is to show the dependence of the sharpness (or the expected sharpness) on the hyperparameters that the theory predicts. For example, if the theory predicts an increase of sharpness with an increasing batch size, then the authors should ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"lKy87s7meOp",
"lKy87s7meOp",
"lKy87s7meOp",
"0W56ufBpDJk",
"eOYe101m05L",
"eOYe101m05L",
"iclr_2022_B3Nde6lvab",
"CyjVEl8uNik",
"Hkg1Jsa3Gdr",
"Hkg1Jsa3Gdr",
"wVi3UbJ1r6",
"BgtePT_YkqV",
"OcVE4ljYSVb",
"OcVE4ljYSVb",
"OcVE4ljYSVb",
"OcVE4ljYSVb",
"OcVE4ljYSVb",
"iclr_2022_B3Nde6lv... |
iclr_2022_lTqGXfn9Tv | Phenomenology of Double Descent in Finite-Width Neural Networks | `Double descent' delineates the generalization behaviour of models depending on the regime they belong to: under- or over-parameterized. The current theoretical understanding behind the occurrence of this phenomenon is primarily based on linear and kernel regression models --- with informal parallels to neural networks... | Accept (Poster) | This paper studies the important statistical phenomenon of double descent, a very timely topic, using influence functions, and thereby derives lower bounds for the population loss. The reviewers generally appreciated the conceptual as well as the technical contributions in the work, but argued that the set of assumptio... | train | [
"tRM1tL3B0D",
"4djELv6eA-c",
"Z-1dOp6QTd",
"2cn6Ya5w2Q3",
"VMFqiMzndHp",
"lG6DNUSwdna",
"RcCYFDEVqqX",
"LUGI38RWRf6",
"l9AFTD8WVJI",
"-arppd0xyP9",
"IZFDTm4rZKN",
"aEHTduMuYw2",
"vGoRg0ydOT4",
"ZQcSGiENIVy",
"RuIfXygSZbd",
"j2BMXDU8mWx",
"uS8shOuF-F6",
"w0F3dFCpwUN",
"QJcogTBjN1e... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"a... | [
"This paper derives an expression of the population risk with influence function, and then use that to derive a lower bound on the population risk, which provides an explanation for double descent phenomena in the finite-width regime at the optimum point. The paper provides experiments to support the theoretical ar... | [
3,
-1,
-1,
-1,
8,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
-1,
-1,
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_lTqGXfn9Tv",
"tRM1tL3B0D",
"iclr_2022_lTqGXfn9Tv",
"PCe-vq0x4RG",
"iclr_2022_lTqGXfn9Tv",
"iclr_2022_lTqGXfn9Tv",
"l9AFTD8WVJI",
"iclr_2022_lTqGXfn9Tv",
"-arppd0xyP9",
"aEHTduMuYw2",
"ZQcSGiENIVy",
"vGoRg0ydOT4",
"RuIfXygSZbd",
"gpGAB_OHfJd",
"j2BMXDU8mWx",
"uS8shOuF-F6",
... |
iclr_2022_fILj7WpI-g | Perceiver IO: A General Architecture for Structured Inputs & Outputs | A central goal of machine learning is the development of systems that can solve many problems in as many data domains as possible. Current architectures, however, cannot be applied beyond a small set of stereotyped settings, as they bake in domain & task assumptions or scale poorly to large inputs or outputs. In this w... | Accept (Spotlight) | This paper proposes Perceiver IO, a general neural architecture that handles general purpose inputs and outputs. It operates directly in the raw input domains, and thus does away with modality specific architecture components. The paper contains extensive experiments showing the capabilities of this architecture in dif... | val | [
"mcJqLC1ingm",
"WGNpfTgIIY4",
"La1GFtsPlsO",
"e1eAJ0j1GI8",
"RAq_R3k8iF",
"t2c9TJfOcLU",
"OkqeqFwJWSs",
"pzoOqt-2Rwx",
"ZnYXjCwQYkZ",
"uz3IrYuMwly",
"HkGOlWnfwux",
"BNEcks2-bA"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The additional pretraining+fine-tuning experiment has finished running, and it produces **86.4%** top-1 eval accuracy on ImageNet. This model is identical to the configuration B model used for 2D Fourier Feature (FF) pretraining other than the initial layers (i.e. the addition of the initial conv + max-pool). We ... | [
-1,
8,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"pzoOqt-2Rwx",
"iclr_2022_fILj7WpI-g",
"ZnYXjCwQYkZ",
"pzoOqt-2Rwx",
"iclr_2022_fILj7WpI-g",
"iclr_2022_fILj7WpI-g",
"BNEcks2-bA",
"RAq_R3k8iF",
"WGNpfTgIIY4",
"HkGOlWnfwux",
"iclr_2022_fILj7WpI-g",
"iclr_2022_fILj7WpI-g"
] |
iclr_2022_JxFgJbZ-wft | Variational Predictive Routing with Nested Subjective Timescales | Discovery and learning of an underlying spatiotemporal hierarchy in sequential data is an important topic for machine learning. Despite this, little work has been done to explore hierarchical generative models that can flexibly adapt their layerwise representations in response to datasets with different temporal dynami... | Accept (Poster) | Thanks for your submission to ICLR.
This paper considers a variational inference hierarchical model called Variational Predictive Routing.
Prior to discussion, several reviewers were on the fence about the paper, most notably having concerns about some of the experimental results as well as various clarity issues thr... | train | [
"G79Yxm4fLTo",
"1qRI0egPwnv",
"BTpCa4_7Vk-",
"ALCBecHZz5u",
"oicBZ9A0OFN",
"J2GDj62uPax",
"0x6hPe33RGy",
"VUjaMgQZC0n",
"MVR-EkSjSl",
"oJEzODq17JG",
"D7gQRdkz_-D",
"eN-fynWEB9C",
"rHb2mRidWL",
"Csj3Zmt8qef",
"8doIhkf-nS",
"NU810Lrg3J2",
"m90NdITLzZ",
"FCpASPqdGZ",
"2vapXsU__CB",
... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
... | [
" Dear Reviewer A5x3, \n\nWe are very pleased to hear that -- thank you for raising your score!",
" Dear Reviewers, \n\nWe thank you very much for your feedback on our work. We are happy to see that you acknowledged the novelty and the demonstrated effectiveness of our proposed model in extracting and learning di... | [
-1,
-1,
-1,
8,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"oicBZ9A0OFN",
"iclr_2022_JxFgJbZ-wft",
"Csj3Zmt8qef",
"iclr_2022_JxFgJbZ-wft",
"eN-fynWEB9C",
"VUjaMgQZC0n",
"iclr_2022_JxFgJbZ-wft",
"NU810Lrg3J2",
"D7gQRdkz_-D",
"iclr_2022_JxFgJbZ-wft",
"2vapXsU__CB",
"dP7M5om-Fyl",
"NU810Lrg3J2",
"FCpASPqdGZ",
"2vapXsU__CB",
"0VvzPe3dvsX",
"ALCB... |
iclr_2022_CuV_qYkmKb3 | Scarf: Self-Supervised Contrastive Learning using Random Feature Corruption | Self-supervised contrastive representation learning has proved incredibly successful in the vision and natural language domains, enabling state-of-the-art performance with orders of magnitude less labeled data. However, such methods are domain-specific and little has been done to leverage this technique on real-world \... | Accept (Spotlight) | The paper explores self-supervised learning on tabular data and proposes a novel augmentation method via corrupting a random subset of features. The idea is simple but effective. Experiments include 69 datasets and compare with a number of methods. The result shows its superiority. It would be inspiring more work for S... | train | [
"ZcSuRoLhw8t",
"lLnP5wpsUHw",
"ejgJTQNveTJ",
"ecMFuOxO2c0",
"bZnlNYw97Aa",
"y2Xk6xo4z_l",
"ep66qDSCycJ",
"_tB3Mw_MGg_",
"qam-Z1rpDdI",
"Eg80NyZYCWP",
"OfZO3V20Mpg",
"eMWVAfUCgpA",
"sGtRhIPdEq5",
"QRU4C0pxcD"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the suggestion, we will release the code upon legal clearance.",
" Are you planning to release the codes for SCARF and experiments? It would be helpful for communities.",
" 'Summarizing by averaging accuracy over all datasets may not be very meaningful because more weight will be put on high-accura... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"lLnP5wpsUHw",
"Eg80NyZYCWP",
"bZnlNYw97Aa",
"y2Xk6xo4z_l",
"ep66qDSCycJ",
"iclr_2022_CuV_qYkmKb3",
"ecMFuOxO2c0",
"iclr_2022_CuV_qYkmKb3",
"OfZO3V20Mpg",
"QRU4C0pxcD",
"_tB3Mw_MGg_",
"sGtRhIPdEq5",
"iclr_2022_CuV_qYkmKb3",
"iclr_2022_CuV_qYkmKb3"
] |
iclr_2022_EZNOb_uNpJk | ClimateGAN: Raising Climate Change Awareness by Generating Images of Floods | Climate change is a major threat to humanity and the actions required to prevent its catastrophic consequences include changes in both policy-making and individual behaviour. However, taking action requires understanding its seemingly abstract and distant consequences. Projecting the potential impacts of extreme climat... | Accept (Poster) | This paper aims at raising awareness of climate change by GAN-projecting flooding images of popular places. This is an interesting case. While all reviews agree that this is an interesting direction, they also value the contributions differently. Two of them would like to see more methodological contributions, two focu... | test | [
"fKJz72w1Bo_",
"B3Sp1YlKR0V",
"P7RtGnfTrV1",
"P9etfhx2wM",
"tqs3DGxCdQG",
"hRTx-HwDfYJ",
"Xs4WGVJsN0X",
"kLfMU8DWbm8",
"5AM3FqhxtI",
"gRaujqGNzbL",
"2_hkwng2nXI",
"gYAf6ELzx0Y",
"BPErRWSmNa6",
"-eRUGjVe9gq",
"05XuCAbMHI8",
"4ucn9uGSWGM",
"zD-uKXJFFOj",
"2Y9Hq6RzgL6",
"0zeownr6Zp"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I'd like to thank the authors for their detailed reply about my concerns.\n\n1. I consider the description about the motivation to be better now, where the expression is more accurate. Essentially, this study is far away from the climate change but is close to provide help on the mentioned \"psychological distanc... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"Xs4WGVJsN0X",
"P9etfhx2wM",
"iclr_2022_EZNOb_uNpJk",
"tqs3DGxCdQG",
"gRaujqGNzbL",
"0zeownr6Zp",
"GhY6b6PJInG",
"P7RtGnfTrV1",
"P7RtGnfTrV1",
"P7RtGnfTrV1",
"P7RtGnfTrV1",
"P7RtGnfTrV1",
"P7RtGnfTrV1",
"GhY6b6PJInG",
"GhY6b6PJInG",
"GhY6b6PJInG",
"0zeownr6Zp",
"iclr_2022_EZNOb_uNp... |
iclr_2022_-Gk_IPJWvk | Top-N: Equivariant Set and Graph Generation without Exchangeability | This work addresses one-shot set and graph generation, and, more specifically, the parametrization of probabilistic decoders that map a vector-shaped prior to a distribution over sets or graphs. Sets and graphs are most commonly generated by first sampling points i.i.d. from a normal distribution, and then processing t... | Accept (Poster) | Summary of the paper: This work considers the problem of generating sets and graphs conditioned on a latent representation (a.k.a. one-shot set generation) and makes two contributions.
First, it provides sufficient conditions for a learning algorithm to be able to handle permutation equivariance (the (F, l ) equivari... | train | [
"0OL4yC45OUI",
"8tMe71gKKDc",
"CJ56WoZ2LDp",
"QoDJLv1Nzyr",
"0GEFlUxLH1V",
"9iC9qAz_ll",
"yFnncTBhPFl",
"Z-zZgFJ5r_f",
"3KXUXftSYne",
"52LOGO9En2D",
"Ev_psM1DUFk",
"6Kpc2Ah5ACf",
"lGJZlxzcAr",
"5uGFYT-n9sC",
"ZiOsnrKh__u",
"pvEejZHQR8V",
"eix49skDV_o",
"Ppo5gjEJaGO",
"CO_0WjODW5-... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"... | [
"This work proposes a new deterministic set sampling mechanism, Top-n. Top-n learns to select the best 'n' points from a trainable reference set. Unlike the previous set sampling mechanisms such i.i.d. sampling, First-n and MLP projection, Top-n do not suffer from collision problem and can generate sets of various ... | [
6,
-1,
-1,
-1,
-1,
5,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_-Gk_IPJWvk",
"0OL4yC45OUI",
"0OL4yC45OUI",
"Ppo5gjEJaGO",
"9iC9qAz_ll",
"iclr_2022_-Gk_IPJWvk",
"Ev_psM1DUFk",
"iclr_2022_-Gk_IPJWvk",
"eix49skDV_o",
"iclr_2022_-Gk_IPJWvk",
"6Kpc2Ah5ACf",
"lGJZlxzcAr",
"5uGFYT-n9sC",
"pvEejZHQR8V",
"Vjw9ZeVE6-",
"ZiOsnrKh__u",
"Z-zZgFJ5r_... |
iclr_2022_g1SzIRLQXMM | Wiring Up Vision: Minimizing Supervised Synaptic Updates Needed to Produce a Primate Ventral Stream | After training on large datasets, certain deep neural networks are surprisingly good models of the neural mechanisms of adult primate visual object recognition. Nevertheless, these models are considered poor models of the development of the visual system because they posit millions of sequential, precisely coordinated ... | Accept (Spotlight) | This paper experiments with what is required for a deep neural network to be similar to the visual activity in the ventral stream (as judged by the brainscore benchmark). The authors have several interesting contributions, such as showing that a small number of supervised updates are required to predict most of the var... | val | [
"OzJQ7cizdH0",
"8XgiSituROK",
"DjHgllnUbMI",
"6QcCfqCHr1p",
"VBoa33hEib",
"CTfoW02vLt",
"ulDgcc3mxKS",
"h_kbyQ8dMk",
"QVHZ9ET2JML",
"tk9p-2lfw5O",
"hDwV3d3GCuM",
"mGieKDTWL9Y",
"4bUxFRU-wKY",
"RzRdO_6tQT",
"VGl7TQOdxrG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all reviewers for taking the time to read our responses and for their updated feedback!",
"The paper addresses the question of how many weight updates are needed to train a deep network before it takes on biologically realistic representations. The paper uses CORnet-S (a network that has ... | [
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_g1SzIRLQXMM",
"iclr_2022_g1SzIRLQXMM",
"hDwV3d3GCuM",
"iclr_2022_g1SzIRLQXMM",
"QVHZ9ET2JML",
"h_kbyQ8dMk",
"RzRdO_6tQT",
"VGl7TQOdxrG",
"6QcCfqCHr1p",
"6QcCfqCHr1p",
"8XgiSituROK",
"8XgiSituROK",
"6QcCfqCHr1p",
"iclr_2022_g1SzIRLQXMM",
"iclr_2022_g1SzIRLQXMM"
] |
iclr_2022_K0E_F0gFDgA | The MultiBERTs: BERT Reproductions for Robustness Analysis | Experiments with pre-trained models such as BERT are often based on a single checkpoint. While the conclusions drawn apply to the artifact tested in the experiment (i.e., the particular instance of the model), it is not always clear whether they hold for the more general procedure which includes the architecture, train... | Accept (Spotlight) | This paper is a resource and numerical investigation into the variability of BERT checkpoints. It also provides a bootstrap method for making investigations on the checkpoints.
All reviewers appreciate this contribution that can be expected to be used by the NLP community. | train | [
"cyZ7YXkVz6p",
"-Rx_BptTm4l",
"J8cRdOa0fIQ",
"TMJYRxyOsuL",
"Ljz6cUlQ-HJ",
"qeq6CQD7Rjf",
"5MywpyGPUdz",
"jATwwhSnQ1e"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for your review. We are glad that you find our method theoretically sound and its implementation useful for the community.\n\nWe would like to clarify that this is not a classical analysis paper; our main contributions are a resource (the checkpoints) and a method (the Multi-Bootstrap) that can be us... | [
-1,
8,
6,
-1,
-1,
-1,
8,
6
] | [
-1,
4,
2,
-1,
-1,
-1,
3,
3
] | [
"J8cRdOa0fIQ",
"iclr_2022_K0E_F0gFDgA",
"iclr_2022_K0E_F0gFDgA",
"jATwwhSnQ1e",
"-Rx_BptTm4l",
"5MywpyGPUdz",
"iclr_2022_K0E_F0gFDgA",
"iclr_2022_K0E_F0gFDgA"
] |
iclr_2022_y0VvIg25yk | On the Learning and Learnability of Quasimetrics | Our world is full of asymmetries. Gravity and wind can make reaching a place easier than coming back. Social artifacts such as genealogy charts and citation graphs are inherently directed. In reinforcement learning and control, optimal goal-reaching strategies are rarely reversible (symmetrical). Distance functions sup... | Accept (Poster) | The reviewers agree that the paper is addressing an interesting problem, and provides a valuable contribution for the learning of quasimetrics and would be useful for many real world applications. | train | [
"h0M12hYLP7",
"7DWAjJVjXjp",
"IOrRI2MLf1",
"hHWzJ5U5oNK",
"MeRlqoVkcnU",
"Ke9DgmcZYf3",
"1PxOuWzPhR",
"ExOVOFxqGI",
"HenT5Gnrq9R",
"S5DbWCGJAmR",
"6xzBcBKBjT"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your review. We tried to address your comments in the previous message. Please do not hesitate to let us know if anything needs further clarification or if you still have any concern about any part of the paper.\n\nWe look forward to hearing your thoughts.",
" Thanks for the explanation! My con... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
5,
6,
8
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"HenT5Gnrq9R",
"ExOVOFxqGI",
"iclr_2022_y0VvIg25yk",
"HenT5Gnrq9R",
"iclr_2022_y0VvIg25yk",
"6xzBcBKBjT",
"S5DbWCGJAmR",
"IOrRI2MLf1",
"iclr_2022_y0VvIg25yk",
"iclr_2022_y0VvIg25yk",
"iclr_2022_y0VvIg25yk"
] |
iclr_2022_z7p2V6KROOV | Extending the WILDS Benchmark for Unsupervised Adaptation | Machine learning systems deployed in the wild are often trained on a source distribution but deployed on a different target distribution. Unlabeled data can be a powerful point of leverage for mitigating these distribution shifts, as it is frequently much more available than labeled data and can often be obtained from ... | Accept (Oral) | This paper presents U-WILDS, an extension of the multi-task, large-scale domain-shift dataset WILDS. The authors propose an extensive array of experiments evaluating the ability of a wide variety of algorithms to leverage the unlabelled data to address domain-shift problems. The vision behind sounds quite ambitious and... | train | [
"PKvZJN4zxe7",
"mjYobExA_84",
"Cxixfre8c4x",
"sBvQagRDHoT",
"lm1104h-lE-",
"QhlHp06TsJ7",
"g31aSwPCDo_",
"j5tGrdtHBf0",
"0zsJA9H5ZKU",
"WWQNnUbS1f8",
"Zv2gLuFBRL",
"BNOqXSBCqZc",
"aFThqV-RHSK",
"AArKPYAak_m",
"2VTfhIovhse",
"tAdj0vQPYjP",
"uDGnEuv_dZJ",
"LXsRHlXgcg8",
"0J23sLRkht... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"### New large scale dataset for unsupervised transfer learning\n\nThe paper proposed an extension to the popular WILDS benchmark dataset by augmenting the different domain data with additional unlabeled examples. The dataset consists of data of various modalities including images, graph and text from various domai... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_z7p2V6KROOV",
"sBvQagRDHoT",
"lm1104h-lE-",
"WWQNnUbS1f8",
"LXsRHlXgcg8",
"g31aSwPCDo_",
"GujhEHqD7mI",
"iclr_2022_z7p2V6KROOV",
"Zv2gLuFBRL",
"aFThqV-RHSK",
"tAdj0vQPYjP",
"iclr_2022_z7p2V6KROOV",
"uDGnEuv_dZJ",
"PKvZJN4zxe7",
"j5tGrdtHBf0",
"BNOqXSBCqZc",
"AArKPYAak_m",
... |
iclr_2022_MMAeCXIa89 | $\pi$BO: Augmenting Acquisition Functions with User Beliefs for Bayesian Optimization | Bayesian optimization (BO) has become an established framework and popular tool for hyperparameter optimization (HPO) of machine learning (ML) algorithms. While known for its sample-efficiency, vanilla BO can not utilize readily available prior beliefs the practitioner has on the potential location of the optimum. Thu... | Accept (Poster) | This paper investigates Bayesian optimization where a prior distribution over the optimal is available. The authors conducted a systematic study on a very intuitive prior-augmented acquisition function that multiplication the prior probability with the EI heuristic --- including an asymptotic analysis on the regret, co... | train | [
"e7JUpXzmL-1",
"1Pf_nXnjSWB",
"o1Ae-2PL_sc",
"RabJDZoH9fB",
"NYetDIXyvo",
"-OetwftPlex",
"A3seDCpMoKd",
"9RyFF1d3Mcp",
"uYjKCOBxy5M",
"6xvfBCppAGS",
"xFA4qkXgKl7",
"TtxFzO_EY4h",
"7pcdPDevJi",
"rmTRuXAbTCDH",
"HpXetFW0agY",
"U2pqzMEQNMq",
"9NIXWPynxkB",
"qDIIkktKr-W",
"DmRL9W6Won... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" We thank the reviewer for the additional response, and for the adjustment in the score. We agree that keeping the initialization identical will improve the comparability of the methods. As such, we provide new results for the HPOBench benchmarks below with fixed, identical seeds. We also started equivalent runs f... | [
-1,
-1,
6,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"RabJDZoH9fB",
"HpXetFW0agY",
"iclr_2022_MMAeCXIa89",
"9RyFF1d3Mcp",
"xFA4qkXgKl7",
"iclr_2022_MMAeCXIa89",
"TtxFzO_EY4h",
"6xvfBCppAGS",
"DmRL9W6Wonf",
"qDIIkktKr-W",
"rmTRuXAbTCDH",
"iclr_2022_MMAeCXIa89",
"A3seDCpMoKd",
"9NIXWPynxkB",
"U2pqzMEQNMq",
"gubNCvduwDA",
"-OetwftPlex",
... |
iclr_2022_l_amHf1oaK | Complete Verification via Multi-Neuron Relaxation Guided Branch-and-Bound | State-of-the-art neural network verifiers are fundamentally based on one of two paradigms: either encoding the whole verification problem via tight multi-neuron convex relaxations or applying a Branch-and-Bound (BaB) procedure leveraging imprecise but fast bounding methods on a large number of easier subproblems. The f... | Accept (Poster) | The authors improve upon existing algorithms for complete neural network verification by combining recent advances in bounding algorithms (better bounding algorithms under branching constraints and relaxations involving multiple neurons) and developing novel branching heuristics. They show the efficacy of their method ... | train | [
"J_go5-l2RYn",
"sc7v2PxOG0h",
"ZcEIqUk0Kih",
"DzD773BuMM",
"szTv2NQov4Y",
"qNifCh8XgU6",
"jTBNrVde2W7",
"wN5lMTKD8tN",
"NwtR1MWsARl",
"BvtqhFnRubN",
"cMklKwO3Wl5",
"UfsM_euEvq",
"kuuijMeI7U",
"3VxnOdGDan",
"gOWk0aN_Rg-",
"vgSzV57Uchg",
"e1af2xY3jNr",
"EaLyKmnofyn",
"phOxeCoHGSy",... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"a... | [
" We are happy that we were able to address the reviewer's questions and concerns.\n\n**Evaluated Properties** \nWe followed the standard in the field of reporting results on all (first 100 or 1000) samples to avoid introducing any bias via the property selection process. Evaluating on samples known to be robust (... | [
-1,
-1,
6,
6,
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
4,
4,
-1,
5,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"jTBNrVde2W7",
"e1af2xY3jNr",
"iclr_2022_l_amHf1oaK",
"iclr_2022_l_amHf1oaK",
"UfsM_euEvq",
"iclr_2022_l_amHf1oaK",
"gOWk0aN_Rg-",
"NwtR1MWsARl",
"BvtqhFnRubN",
"vgSzV57Uchg",
"iclr_2022_l_amHf1oaK",
"3VxnOdGDan",
"iclr_2022_l_amHf1oaK",
"phOxeCoHGSy",
"qNifCh8XgU6",
"cMklKwO3Wl5",
"... |
iclr_2022_0DcZxeWfOPt | Fast Model Editing at Scale | While large pre-trained models have enabled impressive results on a variety of downstream tasks, the largest existing models still make errors, and even accurate predictions may become outdated over time. Because detecting all such failures at training time is impossible, enabling both developers and end users of such ... | Accept (Poster) | The paper provides a method to edit trained models, meaning fix mistakes in a local way so as to not ruin generalizability. The techniques provided in the paper allow for an efficient way that makes this task possible for very large models.
There is an overall consensus that the problem of model editing in general is a... | train | [
"2g1SUvs8A98",
"k6B5ZgpJ0tK",
"aL7HQJXR0Mc",
"ldcnH1Dpl0u",
"0TeNGZY-ed3",
"C6yzKgWikhx",
"qXTG8pqhjs",
"k-jIbc9daKy",
"5VNw4Fu-RU",
"zsntf-kDGo4",
"S5slLqyzUQPu",
"uCbRUYLLWek",
"lOXGKQ0ocKDC",
"sJGRFI4TgQwo",
"HJNmxiwZuzT",
"H1W2PHtfMFv",
"zQkeHYAJ6y8",
"7noeUelnic4",
"OMfJiywr... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"... | [
" We really appreciate the reviewers' continued discussion of the paper, which we think has been productive. We agree that the caching baseline as suggested by WxF5 provides useful context for the evaluation of MEND and other approaches to learning to edit. We will include the performance of this baseline in the ma... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_0DcZxeWfOPt",
"aL7HQJXR0Mc",
"iclr_2022_0DcZxeWfOPt",
"0TeNGZY-ed3",
"C6yzKgWikhx",
"qXTG8pqhjs",
"k-jIbc9daKy",
"sJGRFI4TgQwo",
"uCbRUYLLWek",
"iclr_2022_0DcZxeWfOPt",
"iclr_2022_0DcZxeWfOPt",
"zsntf-kDGo4",
"8o6-GYs3Stq",
"OMfJiywrZMv",
"7noeUelnic4",
"zQkeHYAJ6y8",
"8o6... |
iclr_2022_dNigytemkL | The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks | In this paper, we conjecture that if the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier in the linear interpolation between them. Although it is a bold conjecture, we show how extensive empirical attempts fall short of refuting it. We further provide a prelimi... | Accept (Poster) | This paper investigates the linear mode connectivity of the loss landscape of neural networks, i.e. whether a convex combination of two parameters of local optima on the SGD paths has low loss values (i.e. low barrier) up to some permutations. To probe this question, this paper empirically studies the loss gap, named a... | train | [
"D039odYni8P",
"J90NNNl_48K",
"HRt-1By6DBs",
"l7JSXZ8dfz",
"9cdQKJ8WJ4",
"fpBz2PvI85W",
"VprnQ37AFMJ",
"xaZcajOdNlj",
"AiDwjEKlbHo",
"5ZgvfPiuL16",
"h8WTxlC-2Mf",
"KvEXMNqeS04",
"F08iB8msDh9",
"9nRXhtGMcp1",
"gQtA4IlltY",
"Qa40q9wE_QF",
"_Cqbf71XPB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer"
] | [
 We would like to thank you again for your suggestions and for increasing your score.\nWe agree with the reviewer that SA could indeed not prove/disprove the conjecture. We took the first step with SA to bring supporting evidence in the similarity of the barriers between S and S\" in all settings, before and after p... | [
-1,
-1,
-1,
-1,
8,
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"fpBz2PvI85W",
"xaZcajOdNlj",
"5ZgvfPiuL16",
"VprnQ37AFMJ",
"iclr_2022_dNigytemkL",
"KvEXMNqeS04",
"iclr_2022_dNigytemkL",
"gQtA4IlltY",
"iclr_2022_dNigytemkL",
"h8WTxlC-2Mf",
"AiDwjEKlbHo",
"9cdQKJ8WJ4",
"Qa40q9wE_QF",
"VprnQ37AFMJ",
"_Cqbf71XPB",
"iclr_2022_dNigytemkL",
"iclr_2022_... |
iclr_2022_Zf4ZdI4OQPV | Attacking deep networks with surrogate-based adversarial black-box methods is easy | A recent line of work on black-box adversarial attacks has revived the use of transfer from surrogate models by integrating it into query-based search. However, we find that existing approaches of this type underperform their potential, and can be overly complicated besides. Here, we provide a short and simple algorith... | Accept (Poster) | The paper shows that the transfer attack is query efficient and the success rate can be kept high with the zeroth-order score-based attack as a backup. Experiments show state-of-the-art results.
Pros:
- Simple method based on a simple idea.
- State of the art performance.
Cons:
- Proposal is a straightforward comb... | val | [
"LE-2pB-Gf6k",
"RhIZNOjGN2b",
"EYjua4Aid2",
"yVc-dmUHnDw",
"XCwQElHJ9zD",
"SkGkzB3Ci3V",
"TeW5bvgtJbP",
"Hh0n65-wTjz",
"67_WQZX2m4",
"e4Hvqe2cvVp",
"gIYcS1gNf5e",
"plaqFkwQaOA",
"x6LrQzOOTZ",
"b9ap86QVb5E",
"ZEtPpl8-tMT",
"7MdtbE7_vLf",
"1oaEcwX6DqA",
"8ulDkqXdx-v",
"WKFo3Wmdle",... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" > \"First, the additional results in Appendix (A.5, A.6, and A.7) show that other baseline approaches can achieve the same or better success rate than the proposed approach if the query budget is provided up to 1000 or more. The proposed approach saturates early and shows no further improvements when more queries... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"EYjua4Aid2",
"iclr_2022_Zf4ZdI4OQPV",
"ySkYAXcJJpN",
"fQbxHVKGwv1",
"iclr_2022_Zf4ZdI4OQPV",
"gIYcS1gNf5e",
"RhIZNOjGN2b",
"LGXxFni7GM-",
"RhIZNOjGN2b",
"plaqFkwQaOA",
"Cu2Gp36dfNz",
"b9ap86QVb5E",
"8ulDkqXdx-v",
"RhIZNOjGN2b",
"fQbxHVKGwv1",
"Bam6tV7D5Kg",
"LGXxFni7GM-",
"RhIZNOj... |
iclr_2022_fwzUgo0FM9v | Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models | Federated learning has quickly gained popularity with its promises of increased user privacy and efficiency. Previous works have shown that federated gradient updates contain information that can be used to approximately recover user data in some situations. These previous attacks on user privacy have been limited in... | Accept (Poster) | The authors provide an interesting improvement on privacy attacks in federated learning, demonstrating the ability to extract individual points even over large batches. While there were some concerns about the technical difficulty of the approach, reviewers were broadly in support of the work. As I tend to agree, this ... | train | [
"HgxxT5zHOfi",
"9nW1KDOj3jG",
"7q1hUpNyloV",
"GSwZzTsr5wr",
"MK_kfniKPST",
"1oUL84gJTmj",
"EMicd6kBw8b",
"RFYThDSRMY",
"2s9t02wCKXS",
"ctJEUlX5XNQ",
"sEUfQNp6UU0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new attack on federated learning, demonstrating that model updates shared in a federated setting can still leak user data. Although federated learning has been used as a privacy technique, this paper shows that other mitigation techniques should also be used. Attacks before were based on up... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"iclr_2022_fwzUgo0FM9v",
"iclr_2022_fwzUgo0FM9v",
"RFYThDSRMY",
"sEUfQNp6UU0",
"ctJEUlX5XNQ",
"HgxxT5zHOfi",
"2s9t02wCKXS",
"iclr_2022_fwzUgo0FM9v",
"iclr_2022_fwzUgo0FM9v",
"iclr_2022_fwzUgo0FM9v",
"iclr_2022_fwzUgo0FM9v"
] |
iclr_2022_aisKPsMM3fg | Neural Stochastic Dual Dynamic Programming | Stochastic dual dynamic programming (SDDP) is a state-of-the-art method for solving multi-stage stochastic optimization, widely used for modeling real-world process optimization tasks. Unfortunately, SDDP has a worst-case complexity that scales exponentially in the number of decision variables, which severely limits ap... | Accept (Poster) | This paper applies deep learning to a problem from OR, namely multistage stochastic optimization (MSSO). The main contribution is a method for learning a neural mapping from MSSO problem instances to value functions, which can be used to warm-start the SDDP solver, a state-of-the-art method for solving MSSO. The method... | train | [
"5RMOUvs7cFr",
"ViRaniKeb0T",
"JkBhH0kY0_",
"gyZYQ0XPbpt",
"gpup0QJE7l4",
"KLHgmjG53n",
"C2oYreZdfY",
"V8sFEBYHY-Q",
"m3U82r3ByjT",
"2KVNNBGirJf",
"Ai0yrc3DJnO",
"hmr9XXTIfHw",
"wx8JTMolV7q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThanks again for your effort in reviewing the paper!\n\nWe made the clarifications to try to resolve your concerns w.r.t the paper. This is a kind reminder to please check our response when you get a chance. We hope our clarification addressed your concerns, so that you may re-evaluate our submi... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"V8sFEBYHY-Q",
"m3U82r3ByjT",
"iclr_2022_aisKPsMM3fg",
"2KVNNBGirJf",
"C2oYreZdfY",
"wx8JTMolV7q",
"wx8JTMolV7q",
"hmr9XXTIfHw",
"Ai0yrc3DJnO",
"JkBhH0kY0_",
"iclr_2022_aisKPsMM3fg",
"iclr_2022_aisKPsMM3fg",
"iclr_2022_aisKPsMM3fg"
] |
iclr_2022_pMQwKL1yctf | Language modeling via stochastic processes | Modern language models can generate high-quality short texts. However, they often meander or are incoherent when generating longer texts. These issues arise from the next-token-only language modeling objective. To address these issues, we introduce Time Control (TC), a language model that implicitly plans via a latent ... | Accept (Oral) | All reviewers found that the proposed LM with Brownian motion is interesting and novel. Several reviewers raised (minor) concerns about experiments, but have been generally resolved by the authors. | train | [
"Ct0RJ731kF3",
"1T7HoNYk9ae",
"jF3FjkMa5Bw",
"1iv392fz-eE",
"ganUXAPeshx",
"ev9cLOs9ajo",
"BA0GPlMN0_l",
"NeLJY03tCV",
"7_d4nPUism2",
"mILODfUxog"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to model the evolution of sentences in a document via a stochastic process; specifically a Brownian Bridge process. The paper starts off by assuming that the generated sequences by autoregressive models like GPT-2 follow Brownian motion in that they tend to get incoherent and \"meander\" in the ... | [
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_pMQwKL1yctf",
"iclr_2022_pMQwKL1yctf",
"7_d4nPUism2",
"ganUXAPeshx",
"1T7HoNYk9ae",
"BA0GPlMN0_l",
"Ct0RJ731kF3",
"mILODfUxog",
"iclr_2022_pMQwKL1yctf",
"iclr_2022_pMQwKL1yctf"
] |
iclr_2022_qI4542Y2s1D | FILM: Following Instructions in Language with Modular Methods | Recent methods for embodied instruction following are typically trained end-to-end using imitation learning. This often requires the use of expert trajectories and low-level language instructions. Such approaches assume that neural states will integrate multimodal semantics to perform state tracking, building spatial m... | Accept (Poster) | This paper develops a modular system named FILM, for egocentric instruction execution task in the ALFRED environment, which uses structured representations that build a semantic map of the scene, perform exploration with a semantic search policy, to achieve the natural language goal. They achieve strong performance whi... | train | [
"8ItpbzJy0tO",
"2PtIGLS0joY",
"gIieScSqSVg",
"fPGIhtTXLoe",
"XCVkXzBlFQ3",
"asBMRQfRu2J",
"znGC4iA-dmt",
"w-Nxm9v1jMd",
"ytiQB1t51O",
"jeY2K41C_04",
"D5CWQ7ZtScF",
"j653ses240q",
"_W3BbXPqkJV",
"JLnuwya-0q6",
"S-6z1O67Wu",
"HJiSxDmuoR7",
"HrWDX126yN9",
"YQCm2W_5kni"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Hi, thanks again for engaging with the work!\n\nI think this point is really interesting and brings up a section we will include in the final version of the paper about the relationship between scripts/templates and world dynamics. I think another angle through which to view your comment is \"If you know the wor... | [
-1,
-1,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"asBMRQfRu2J",
"iclr_2022_qI4542Y2s1D",
"XCVkXzBlFQ3",
"iclr_2022_qI4542Y2s1D",
"iclr_2022_qI4542Y2s1D",
"znGC4iA-dmt",
"JLnuwya-0q6",
"jeY2K41C_04",
"_W3BbXPqkJV",
"S-6z1O67Wu",
"j653ses240q",
"iclr_2022_qI4542Y2s1D",
"fPGIhtTXLoe",
"YQCm2W_5kni",
"HrWDX126yN9",
"XCVkXzBlFQ3",
"iclr... |
iclr_2022_9pEJSVfDbba | Embedded-model flows: Combining the inductive biases of model-free deep learning and explicit probabilistic modeling | Normalizing flows have shown great success as general-purpose density estimators. However, many real world applications require the use of domain-specific knowledge, which normalizing flows cannot readily incorporate. We propose embedded-model flows (EMF), which alternate general-purpose transformations with structured... | Accept (Poster) | This paper proposes a method for incorporating inductive biases into the model architecture of normalizing flows through a suitable probabilistic program. All reviewers agree the paper makes an interesting contribution to the growing normalizing flow literature. The paper is well written and the idea is novel. Addition... | train | [
"2ic7uwX06Ht",
"ubQlXwXQgoq",
"AFY5g0TekRo",
"jMeJhIrbSdO",
"UXJs3o1waop",
"yv8mVFSTWEB",
"AS65idOdEKs",
"2YqH_l6F51i",
"d4TboFVIFWL",
"IFvzhI1kSIW",
"WqUfwzdtkWA",
"aJDUUJXadQ",
"PNBVbFDbOvX",
"fkiynCM7k4f",
"dpeu4kwtQF3",
"mR-TGxaZwiP",
"f-uW8PUlRR2",
"P4rDyYY2cchO",
"Vlcfrgn5F... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" Exactly. \nIndeed causality is often a guiding principle when designing the model and the ordering of its connection. For example, in timeseries analysis the future values is usually assumed to depend on the past values (e.g. the autoregressive models used in econometrics, Euler discretized equations of motion fr... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"ubQlXwXQgoq",
"AFY5g0TekRo",
"jMeJhIrbSdO",
"PNBVbFDbOvX",
"AS65idOdEKs",
"iclr_2022_9pEJSVfDbba",
"d4TboFVIFWL",
"aJDUUJXadQ",
"fkiynCM7k4f",
"_KblaB6wY-d",
"mR-TGxaZwiP",
"P4rDyYY2cchO",
"iclr_2022_9pEJSVfDbba",
"dpeu4kwtQF3",
"FhDuha676i3",
"n5EtbXrg4Zy",
"Vlcfrgn5F53",
"f-uW8P... |
iclr_2022_y_op4lLLaWL | Variational autoencoders in the presence of low-dimensional data: landscape and implicit bias | Variational Autoencoders (VAEs) are one of the most commonly used generative models, particularly for image data. A prominent difficulty in training VAEs is data that is supported on a lower dimensional manifold. Recent work by Dai and Wipf (2020) proposes a two-stage training algorithm for VAEs, based on a conjecture ... | Accept (Poster) | The paper analyzes the behavior of VAEs in modeling data lying on a low dimensional manifold. It formally proves some of the conjectures/informal-statements in an earlier work by Dai and Wipf (2019) in the case of linear VAE and linear manifold, and disproves the same for the nonlinear case. In particular, it proves, b... | train | [
"4GitGDSZ93l",
"moOmfDpOUtw",
"CCnNr81TTp4",
"sDkVpQAb8Ez",
"m5w_WqYhSg8",
"SeV7vOa3wa",
"NubLWIwenFA",
"itzCJwAO_ph",
"Omc34x-anmt",
"RINewuKvo00",
"wFdq4vGmJ6J",
"JIVJJINUU8Y",
"zd-YiQIxzGg",
"eZPFv_J-YSc",
"AaVYVJ5tPs6"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" **Problem statement, motivation & the Level of formaility:** I understand that the paper relies on the previous work (Dai & Wipf, 2019), however, it would be better to start the paper by re-stating the previous work, and then go to the next step. I know that there is a paragraph which introduces the work of Dai &... | [
-1,
-1,
-1,
-1,
-1,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"JIVJJINUU8Y",
"CCnNr81TTp4",
"sDkVpQAb8Ez",
"m5w_WqYhSg8",
"zd-YiQIxzGg",
"iclr_2022_y_op4lLLaWL",
"iclr_2022_y_op4lLLaWL",
"SeV7vOa3wa",
"NubLWIwenFA",
"NubLWIwenFA",
"AaVYVJ5tPs6",
"eZPFv_J-YSc",
"SeV7vOa3wa",
"iclr_2022_y_op4lLLaWL",
"iclr_2022_y_op4lLLaWL"
] |
iclr_2022_N9W24a4zU | Steerable Partial Differential Operators for Equivariant Neural Networks | Recent work in equivariant deep learning bears strong similarities to physics. Fields over a base space are fundamental entities in both subjects, as are equivariant maps between these fields. In deep learning, however, these maps are usually defined by convolutions with a kernel, whereas they are partial differential ... | Accept (Poster) | The paper develops steerable partial differential operator and show how it can be used to build equivariant network. Experimentation on rotated MNIST and STL10 show the merits of the proposed method. Reviewers agreed on the significance of the work and that it brings new perspective on equivariance that would be intere... | train | [
"1gUp22ydFnA",
"CbRKlXAOyoW",
"tdvonZ87rrV",
"1Ks8USxAZOw",
"rIf2tmXw-Kf",
"zjIXvDGrXba",
"8n6ogxqxNO",
"JKFgENFS7N",
"vSNFzWI2ln",
"L48R88N6955",
"RXxQOuBxhqz",
"DumP6-xrYj6",
"_p2aeMLKBjs",
"SDQAgBxeuIr",
"fwXhB6FsSoN",
"xuO-TiHkEgh"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for engaging with our responses and for adapting your score after our revision!\n\nJust to avoid confusion regarding isotropy: steerable PDOs are generally not isotropic in the sense of being invariant under all rotations. As an example, see the small stencils in the middle of Fig. 2 in the paper (Fig. ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"1Ks8USxAZOw",
"iclr_2022_N9W24a4zU",
"JKFgENFS7N",
"_p2aeMLKBjs",
"SDQAgBxeuIr",
"vSNFzWI2ln",
"RXxQOuBxhqz",
"CbRKlXAOyoW",
"L48R88N6955",
"DumP6-xrYj6",
"xuO-TiHkEgh",
"fwXhB6FsSoN",
"SDQAgBxeuIr",
"CbRKlXAOyoW",
"iclr_2022_N9W24a4zU",
"iclr_2022_N9W24a4zU"
] |
iclr_2022_ZaVVVlcdaN | Reducing the Communication Cost of Federated Learning through Multistage Optimization | Federated learning (FL) aims to minimize the communication complexity of training a model over heterogeneous data distributed across many clients. A common approach is local methods, where clients take multiple optimization steps over local data before communicating with the server (e.g., FedAvg). Local methods can ex... | Accept (Poster) | The paper analyzes a 2-stage method for federated learning, first using FL with local steps, followed by a final phase of 'always-communicate' centralized SGD. For the convex case, the paper studies the influence of the data heterogeneity, a key parameter in FL, on the convergence of related schemes. Surprisingly the r... | train | [
"RObtSDWX3ly",
"2-IP0SNGaks",
"UF4UdJGsabZ",
"mg6S7oStpfL",
"FwSI6cPHRZs",
"ghgEDOqbe1k",
"-17iLUCqva2",
"bnkclB2F_z4",
"P9oVl2Ua3BN",
"w9uQeKGn4C7",
"fJlt1RFZiSz",
"LLQvgCWb9Vh",
"MieOkfKjZzl",
"y4iyCVHuwcH",
"48pvapSQwfA",
"A_423Yyqcp"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposed a provable multi-stage algorithm to match the lower bound established by Woodworth for intermediate heterogeneity levels in federated optimization.\n The paper tries to answer a theoretical problem in federated optimization.\nThe paper proposed a multi-stage algorithm to match the lower bound es... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_ZaVVVlcdaN",
"UF4UdJGsabZ",
"MieOkfKjZzl",
"A_423Yyqcp",
"48pvapSQwfA",
"-17iLUCqva2",
"iclr_2022_ZaVVVlcdaN",
"LLQvgCWb9Vh",
"RObtSDWX3ly",
"fJlt1RFZiSz",
"A_423Yyqcp",
"-17iLUCqva2",
"48pvapSQwfA",
"bnkclB2F_z4",
"iclr_2022_ZaVVVlcdaN",
"iclr_2022_ZaVVVlcdaN"
] |
iclr_2022_8WawVDdKqlL | Label Encoding for Regression Networks | Deep neural networks are used for a wide range of regression problems. However, there exists a significant gap in accuracy between specialized approaches and generic direct regression in which a network is trained by minimizing the squared or absolute error of output labels. Prior work has shown that solving a regressi... | Accept (Spotlight) | The paper focused on deep regression problems and proposed a label encoding technique which can be thought as a sibling of the famous error-correcting output codes but designed for regression problems. The main idea is well illustrated in Figure 1 at the top of page 3, where the encoder and decoder are the main objects... | train | [
"m87CobrDl0W",
"_4K4aI4b81b",
"8WEoQZ-QiiH",
"JouxM3mKmu7",
"qO7FTHDyoKS",
"or192ntLpKG",
"uZz2CHOQ7ec",
"CWWLy_IVdFH",
"rII11UcYmlk",
"Q2Bk_jVifcW",
"-zngf-JopuW",
"950rCp_ubYg",
"XU7-nf_Iqun",
"dxuysSLEncs",
"ta_wlQIhVLD"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper tackles regression problems using an error-correcting code (ECOC) approach, i.e., reducing a *regression* problem into multiple binary *classification* problems.\nECOC have been so far mainly used for classification tasks, and their application to regression problems in this paper is elegant and seems no... | [
8,
-1,
-1,
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
-1,
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_8WawVDdKqlL",
"XU7-nf_Iqun",
"m87CobrDl0W",
"dxuysSLEncs",
"iclr_2022_8WawVDdKqlL",
"rII11UcYmlk",
"iclr_2022_8WawVDdKqlL",
"uZz2CHOQ7ec",
"uZz2CHOQ7ec",
"ta_wlQIhVLD",
"ta_wlQIhVLD",
"m87CobrDl0W",
"m87CobrDl0W",
"qO7FTHDyoKS",
"iclr_2022_8WawVDdKqlL"
] |
iclr_2022_hcoswsDHNAW | Fast AdvProp | Adversarial Propagation (AdvProp) is an effective way to improve recognition models, leveraging adversarial examples. Nonetheless, AdvProp suffers from the extremely slow training speed, mainly because: a) extra forward and backward passes are required for generating adversarial examples; b) both original samples and t... | Accept (Poster) | This paper improves the training speed and decrease the computation cost of AdvProp, which is a method that leverages the adversarial example to improve the image recognition accuracy. The method achieves the speedup by leveraging a collection of practical heuristics, including reusing some gradient computation during ... | test | [
"h8rC098HFEe",
"PQLC-KAGWc4",
"5zBSCLwKdS",
"Rs69JH0ZLjK",
"S0b9S6Gn9Wu",
"RkFGPmDUGLN",
"ayo3yDChlv",
"BFj4CF-z1e-",
"UMX25DFc0i-",
"RMSVtzjOTjs"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the author for providing the rebuttal. While I agree some of the novelty of this paper belongs to the engineering work, it seems to me that speeding up the original AdvProp paper is not so trivial. Moreover, as far as I understand, this method can be applied in most of the classification task, which could ... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"PQLC-KAGWc4",
"ayo3yDChlv",
"RMSVtzjOTjs",
"UMX25DFc0i-",
"BFj4CF-z1e-",
"iclr_2022_hcoswsDHNAW",
"iclr_2022_hcoswsDHNAW",
"iclr_2022_hcoswsDHNAW",
"iclr_2022_hcoswsDHNAW",
"iclr_2022_hcoswsDHNAW"
] |
iclr_2022_KxbhdyiPHE | Learning Altruistic Behaviours in Reinforcement Learning without External Rewards | Can artificial agents learn to assist others in achieving their goals without knowing what those goals are? Generic reinforcement learning agents could be trained to behave altruistically towards others by rewarding them for altruistic behaviour, i.e., rewarding them for benefiting other agents in a given situation. Su... | Accept (Spotlight) | This work proposed a method for encouraging an agent showing altruistic behaviour towards another agent (leader) without having access to the leader's reward function. The basic idea is based on the hypothesis that having the ability to reach many future states (i.e., called choice) is useful for the leader agent, no m... | train | [
"3KxCXK1nya5",
"pTumXbBqD3E",
"8RxM8QD19D",
"Mi76xpM4Ui",
"scOTeDDvazK",
"F61TQTiJBm",
"oYEe-fBbzo8",
"TpMpOjkeX6",
"qh8rSIxSFa",
"gEIh6uf_q-D",
"fqHu1OANL2_",
"sz1yftWXOG",
"5IzdvJHSlMW",
"UpiIiAq4TyU",
"mZYq70YCcT",
"FKZu1w5o3DP",
"n9An5pNG9Z",
"8sd12l0SqqX",
"BzRjFLKLokw",
"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"... | [
" Hi,\n\nThanks for participating in the discussion with the authors. Have you reached any conclusion? Do you want to maintain the same score or are you willing to update it?\n\nArea Chair",
"This paper introduces a method for developing altruistic agents in a multi-agent RL (MARL) setting. The core idea is that ... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"3REnKPz85hV",
"iclr_2022_KxbhdyiPHE",
"nDPSyJf4tsY",
"4h6lkZ-dRU",
"oYEe-fBbzo8",
"iclr_2022_KxbhdyiPHE",
"BzRjFLKLokw",
"FKZu1w5o3DP",
"iclr_2022_KxbhdyiPHE",
"sz1yftWXOG",
"5IzdvJHSlMW",
"UpiIiAq4TyU",
"mZYq70YCcT",
"3REnKPz85hV",
"3REnKPz85hV",
"qh8rSIxSFa",
"F61TQTiJBm",
"F61T... |
iclr_2022_y1PXylgrXZ | Certified Robustness for Deep Equilibrium Models via Interval Bound Propagation | Deep equilibrium layers (DEQs) have demonstrated promising performance and are competitive with standard explicit models on many benchmarks. However, little is known about certifying robustness for these models. Inspired by interval bound propagation (IBP), we propose the IBP-MonDEQ layer, a DEQ layer whose robustness ... | Accept (Poster) | Note: This meta review is written by the SAC, but it's synced with the AC.
Summary (adopted from Reviewer wCmR): This paper presents a modification of monotone deep equilibrium layers that allows to compute the bounds on the output via the IBP algorithm. This also allows to train a certifiably robust DEQ model with a ... | train | [
"tuzQ2tFZMmA",
"_PwaY3atNOU",
"g5I8SvpmA9",
"4N2FrmOVNJP",
"XjfSgx7Bza5",
"sSa7o2B8zSM",
"N9HWY5KFMsR",
"IcHDbP4QkI6",
"Zn2pRiOfC9q",
"_Qnn0Z4r-M9",
"C48mZW9igKm",
"Oee901fJ2r",
"1NBynTdG7iP"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the certified adversarial robustness of Deep Equilibrium Models (DEQ) and derives the Interval Bound Propagation (IBP) on DEQ for training certifiably robust models, namely IBP-MonDEQ. Strengths:\n* This paper derives parameterizations for DEQ such that the fixed-point solution of the DEQ au... | [
5,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
3,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2022_y1PXylgrXZ",
"g5I8SvpmA9",
"4N2FrmOVNJP",
"N9HWY5KFMsR",
"Zn2pRiOfC9q",
"iclr_2022_y1PXylgrXZ",
"C48mZW9igKm",
"1NBynTdG7iP",
"Oee901fJ2r",
"tuzQ2tFZMmA",
"sSa7o2B8zSM",
"iclr_2022_y1PXylgrXZ",
"iclr_2022_y1PXylgrXZ"
] |
iclr_2022_kF9DZQQrU0w | Information Bottleneck: Exact Analysis of (Quantized) Neural Networks | The information bottleneck (IB) principle has been suggested as a way to analyze deep neural networks. The learning dynamics are studied by inspecting the mutual information (MI) between the hidden layers and the input and output. Notably, separate fitting and compression phases during training have been reported. Thi... | Accept (Poster) | Initially, some reviewers have raised several points of criticism regarding certain aspects of the model whose novelty/significance was a bit unclear. After the rebuttal and the discussion phase, however, everyone agreed that most of these concerns could be addressed in a convincing way, and finally all reviewers were ... | train | [
"FfnsNnXhvKz",
"eN4pTpIBLBr",
"P33ylpjfFOr",
"vA97yk7ZVbD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper discusses the information plane of quantized neural networks. The authors further investigate whether or not there are compression phases during training, a question over which there has been much controversy in the past. UPDATE: Reading the revised manuscript, I am willing to improve my rating. The pape... | [
6,
-1,
6,
8
] | [
5,
-1,
4,
3
] | [
"iclr_2022_kF9DZQQrU0w",
"iclr_2022_kF9DZQQrU0w",
"iclr_2022_kF9DZQQrU0w",
"iclr_2022_kF9DZQQrU0w"
] |
iclr_2022_g5ynW-jMq4M | Properties from mechanisms: an equivariance perspective on identifiable representation learning | A key goal of unsupervised representation learning is ``inverting'' a data generating process to recover its latent properties. Existing work that provably achieves this goal relies on strong assumptions on relationships between the latent variables (e.g., independence conditional on auxiliary information). In this pa... | Accept (Spotlight) | The paper provides new insights about how to identify latent variable distributions, making explicit assumptions about invariances. A lot of this is studied in the literature of non-linear ICA, although the emphasis here is on dropping the "I". I think more could be said about how allowing for dependencies among latent... | test | [
"cWV-bkAGh86",
"tUzeR3M4RwW",
"PcsIT5_qNYa"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work gives several identifiability results for representation learning from sequences of observed variables when the mechanism governing the underlying latent variables is known, and the observation renderer is bijective. The identifiability of an inverse renderer is shown to depend only upon the equivariance... | [
6,
6,
8
] | [
3,
2,
4
] | [
"iclr_2022_g5ynW-jMq4M",
"iclr_2022_g5ynW-jMq4M",
"iclr_2022_g5ynW-jMq4M"
] |
iclr_2022_oAy7yPmdNz | CoordX: Accelerating Implicit Neural Representation with a Split MLP Architecture | Implicit neural representations with multi-layer perceptrons (MLPs) have recently gained prominence for a wide variety of tasks such as novel view synthesis and 3D object representation and rendering. However, a significant challenge with these representations is that both training and inference with an MLP over a larg... | Accept (Poster) | Implicit neural representations are a new and promising method to represent images and scenes. Implicit neural representations enable good performance on task like view synthesis. Those networks generate an image of scene pixel-by-pixel and are therefore computationally expensive. The paper proposes a method to acceler... | train | [
"mnkryI-8rkj",
"UEPGR539W36",
"IP91uT66K01",
"Bb-r5fXy2F9",
"mDVYPKgIu-H",
"BH9_-tcBpz",
"YT5406jOA7h",
"FZBD1tKPPgc",
"UvrKpQgQ7kEf",
"LZAf6h6IYHLk",
"OqpxuOQRu-",
"JeK7O_npIM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the response and updates to the paper. I have also updated my score and written a small post-rebuttal summary. ",
"The paper proposes a novel coordinate-based network architecture which proposes to process each of the input coordinates independently in the first layer instead of together in a fully... | [
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"FZBD1tKPPgc",
"iclr_2022_oAy7yPmdNz",
"UvrKpQgQ7kEf",
"iclr_2022_oAy7yPmdNz",
"iclr_2022_oAy7yPmdNz",
"OqpxuOQRu-",
"JeK7O_npIM",
"UEPGR539W36",
"LZAf6h6IYHLk",
"Bb-r5fXy2F9",
"iclr_2022_oAy7yPmdNz",
"iclr_2022_oAy7yPmdNz"
] |
iclr_2022_Te5ytkqsnl | Missingness Bias in Model Debugging | Missingness, or the absence of features from an input, is a concept fundamental to many model debugging tools. However, in computer vision, pixels cannot simply be removed from an image. One thus tends to resort to heuristics such as blacking out pixels, which may in turn introduce bias into the debugging process. We s... | Accept (Poster) | This work identifies an interesting bias that can occur when applying occlusion based interpretability methods to debug image classifiers. For context, the motivation behind many of these methods is that by occluding various parts of the image, one can ask counterfactuals such as "what would the model have predicted if... | train | [
"p-wa7Awi401",
"o21y8iisrB",
"NtlbFoVg4eJ",
"rtVmlGwMawD",
"tG1M58nckir",
"izIEi7vbZad",
"QtbOLzJhKV9",
"fkCgGkktaXc",
"yuQMHovDXX6",
"7AMalxzdlyF",
"XS6eS5JLUo",
"sghrtUx-CkS"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am thankful to the reviewers for the answer. I will keep my score the same, as I believe some experiments with different statistics (e.g. medical imaging ones) are important. In addition, a recommendation for the revised manuscript would be to make it clearer to the reader what the differences in implementation... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"o21y8iisrB",
"NtlbFoVg4eJ",
"rtVmlGwMawD",
"tG1M58nckir",
"QtbOLzJhKV9",
"sghrtUx-CkS",
"XS6eS5JLUo",
"7AMalxzdlyF",
"iclr_2022_Te5ytkqsnl",
"iclr_2022_Te5ytkqsnl",
"iclr_2022_Te5ytkqsnl",
"iclr_2022_Te5ytkqsnl"
] |
iclr_2022_b-ny3x071E5 | Bootstrapped Meta-Learning | Meta-learning empowers artificial intelligence to increase its efficiency by learning how to learn. Unlocking this potential involves overcoming a challenging meta-optimisation problem. We propose an algorithm that tackles this problem by letting the meta-learner teach itself. The algorithm first bootstraps a target fr... | Accept (Oral) | This paper addresses a meta-learning method which involves bilevel optimization. It is claimed that two limitations (myopia of MG and restricted consideration of geometry of search space) that most of existing methods have can be resolved by the MBG with a properly chosen pseudo-metric. The algorithm first bootstraps a... | train | [
"FTMBi7QbqPD",
"yAHaPq-8sm7",
"qXjd69H-bHX",
"LKbM0s4Gmxg",
"4N3ckN1OJ5Q",
"YCKKCIJ8VQ",
"uWiMqSov1ks",
"V5qyem3mEM-",
"2CCKOiZBwlb",
"wzABu32iLEB",
"vZRspCCttHX",
"SqRo93rAyvl",
"FxRFF2uVWU",
"ZPUeQVh_un8"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the raised score and the kind words, we are glad that we could inspire you! Please see below for replies to your follow-up questions:\n\n- *NFL:* recall that BMG is a strict generalization of MG (Eq. 3). Hence, the comparison is less between two distinct methods, but rather between a specific known ... | [
-1,
10,
-1,
10,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"qXjd69H-bHX",
"iclr_2022_b-ny3x071E5",
"4N3ckN1OJ5Q",
"iclr_2022_b-ny3x071E5",
"YCKKCIJ8VQ",
"uWiMqSov1ks",
"V5qyem3mEM-",
"wzABu32iLEB",
"ZPUeQVh_un8",
"LKbM0s4Gmxg",
"FxRFF2uVWU",
"yAHaPq-8sm7",
"iclr_2022_b-ny3x071E5",
"iclr_2022_b-ny3x071E5"
] |
iclr_2022_kSwqMH0zn1F | PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication | Graph Convolutional Networks (GCNs) is the state-of-the-art method for learning graph-structured data, and training large-scale GCNs requires distributed training across multiple accelerators such that each accelerator is able to hold a partitioned subgraph. However, distributed GCN training incurs prohibitive overhead... | Accept (Poster) | The paper proposes PipeGCN, a system that uses pipeline parallelism to accelerate distributed training of large-scale graph convolutional neural networks. Like some pipeline-parallel methods (but unlike others), PipeGCN involves asynchrony in the sense that its features and feature-gradients can be stale. The paper pro... | train | [
"uVGqM6qpFU",
"f9BbQFxQsPM",
"M1HdqQw8QIR",
"F9Ay9vzAbuG",
"a6_PrX7ksS8",
"o_x6meEMIem",
"LVWxNMABgV",
"RAsCvXPrjAe",
"i9Olyfzu1R",
"3gA-VYVI0sL",
"7o8QfLY2y2K",
"k-a-l7fzmWE",
"c4cKJoNigBC",
"nu37qPRX61Y",
"ZOqiGv0Sqk",
"mRmlrfVy21A",
"Y7QuRnCusL1"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Really appreciated for your reply. It is indeed of great value to further scale the model to 1024-dim with our proposed methods. As your request, we are actively conducting this experiment now. However, due to the closing window of rebuttal (in 6 hours), it is likely that our experiment cannot be finished on time... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"f9BbQFxQsPM",
"a6_PrX7ksS8",
"iclr_2022_kSwqMH0zn1F",
"k-a-l7fzmWE",
"i9Olyfzu1R",
"ZOqiGv0Sqk",
"RAsCvXPrjAe",
"mRmlrfVy21A",
"3gA-VYVI0sL",
"7o8QfLY2y2K",
"M1HdqQw8QIR",
"c4cKJoNigBC",
"nu37qPRX61Y",
"Y7QuRnCusL1",
"iclr_2022_kSwqMH0zn1F",
"iclr_2022_kSwqMH0zn1F",
"iclr_2022_kSwqM... |
iclr_2022_CIaQKbTBwtU | Learning to Generalize across Domains on Single Test Samples | We strive to learn a model from a set of source domains that generalizes well to unseen target domains. The main challenge in such a domain generalization scenario is the unavailability of any target domain data during training, resulting in the learned model not being explicitly adapted to the unseen target domains. W... | Accept (Poster) | This paper proposes a new method for domain generalization by adopting a single test example. Authors formulate the problem using a variational bayesian framework which ends up in an adaptation technique requiring a single feed-forward computation. The provided empirical results indicate that the proposed method has co... | train | [
"UYyEpkx0Bn2",
"F8AUh_phe0z",
"Do3eVlzReAr",
"5hrFoDlhQI",
"c-PkqUv1qJ",
"eAYkY7CBo2t",
"qsjgMFQG_X",
"pBJe6F_gsy4",
"BEWksRNMEl3",
"H-xoQjksg4v",
"ufAF4vce6cG",
"qIYGBf1VUIG",
"6WGe1evnVQ",
"qLiAlA949m",
"8qK4teC-QYK",
"_THR8GtCD3R",
"5quF5LU2N5l",
"_lgvEzOlAJd",
"dQR9Dz-aZ4T",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"a... | [
"- The paper describes a method for domain generalization that performs test-time adaptation using a single test example at a time (as opposed to a transductive setting used in other works where a whole batch of test examples are used).\n- The method is cast as a meta learning task. The training data is split into ... | [
5,
8,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
3,
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_CIaQKbTBwtU",
"iclr_2022_CIaQKbTBwtU",
"iclr_2022_CIaQKbTBwtU",
"iclr_2022_CIaQKbTBwtU",
"pBJe6F_gsy4",
"ufAF4vce6cG",
"iclr_2022_CIaQKbTBwtU",
"qsjgMFQG_X",
"5quF5LU2N5l",
"qsjgMFQG_X",
"dQR9Dz-aZ4T",
"_THR8GtCD3R",
"qLiAlA949m",
"8qK4teC-QYK",
"UYyEpkx0Bn2",
"5hrFoDlhQI",
... |
iclr_2022_XJiajt89Omg | Space-Time Graph Neural Networks | We introduce space-time graph neural network (ST-GNN), a novel GNN architecture, tailored to jointly process the underlying space-time topology of time-varying network data. The cornerstone of our proposed architecture is the composition of time and graph convolutional filters followed by pointwise nonlinear activation... | Accept (Poster) | This paper proposes a new time-varying convolutional architecture (ST-GNN) for dynamic graphs. The reviewers were positive about the presentation and detailed theory, especially on the stability analysis. The shared criticism was on experimental validation synthetic datasets that the reviewers did not find appealing. ... | val | [
"EZwaDHMzhvb",
"FITOueBzT_f",
"vmQEVMy9wi_",
"zojw_BQbhh_",
"yrwc0dOIuN",
"kWFkx3XYW4q"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for their valuable comments.\n\n1) In comment 1.2 and 2.1 we explain the uniqueness of the proposed architecture, which makes it a perfect fit for physical networks and decentralized applications. Comparing with the aforementioned architectures is impractical (as explained prev... | [
-1,
-1,
-1,
8,
5,
5
] | [
-1,
-1,
-1,
3,
3,
3
] | [
"kWFkx3XYW4q",
"yrwc0dOIuN",
"zojw_BQbhh_",
"iclr_2022_XJiajt89Omg",
"iclr_2022_XJiajt89Omg",
"iclr_2022_XJiajt89Omg"
] |