| paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2022_tYRrOdSnVUy | Non-Transferable Learning: A New Approach for Model Ownership Verification and Applicability Authorization | As Artificial Intelligence as a Service gains popularity, protecting well-trained models as intellectual property is becoming increasingly important. There are two common types of protection methods: ownership verification and usage authorization. In this paper, we propose Non-Transferable Learning (NTL), a novel appro... | Accept (Oral) | The paper addresses two important aspects of deep learning: model transferability and authorization for use. It presents original solutions for both of these problems. All of the reviewers agree that the paper is a valuable contributions. Minor concerns and critical remarks have been addressed by the authors during the... | train | [
"FoM4SMnlalE",
"JdNRUeMFIfA",
"wdgyx7Y9EZR",
"VipDHQeV0H",
"PqBbP6BEQ2b",
"MPbNFjI5HU",
"X_6__xdh_8T",
"GzqMaf4a6N",
"E7_0Br4s8-",
"bpax7xtQ2qI"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The authors' response has addressed my concerns. I lean to accept this paper. ",
" I have carefully read the responses and other reviewers' comments. My concerns have been properly addressed. I think this paper will contribute a lot to the machine learning field. Thus, I decide to raise my score to 8.",
"In t... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2,
5
] | [
"X_6__xdh_8T",
"GzqMaf4a6N",
"iclr_2022_tYRrOdSnVUy",
"iclr_2022_tYRrOdSnVUy",
"E7_0Br4s8-",
"bpax7xtQ2qI",
"bpax7xtQ2qI",
"wdgyx7Y9EZR",
"iclr_2022_tYRrOdSnVUy",
"iclr_2022_tYRrOdSnVUy"
] |
iclr_2022_CmsfC7u054S | Reinforcement Learning in Presence of Discrete Markovian Context Evolution | We consider a context-dependent Reinforcement Learning (RL) setting, which is characterized by: a) an unknown finite number of not directly observable contexts; b) abrupt (discontinuous) context changes occurring during an episode; and c) Markovian context evolution. We argue that this challenging case is often met in ... | Accept (Poster) | The paper proposes a Bayesian approach to learning in contextual MDPs where the contexts can dynamically vary during the episode.
The authors did well in their rebuttal and alleviated most of the reviewers' concerns. During the discussion there was an agreement that the paper should be accepted.
Please take all reviewe... | train | [
"t9Ryf6btFJQ",
"Y8NOy4HphbK",
"65R8DnTrCAU",
"Pi75fdo5O55",
"YmsdtoM5r4",
"lhzeu8s-vBe",
"v4fpvku-K5",
"Pmq71Cpy35j",
"7_IJ3GjpsXO",
"o6rffcDQzot",
"9RB9szhB_k8",
"jYjIelxxOt",
"l6FKEK_zLiE",
"sFm0fBj5IKA",
"IHWL-MTFTns",
"0UYuXv4gyjr",
"gyJUIZkUJdY",
"8TZGfoBPiv0",
"OcijM05VhNP"... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" We are happy to respond to any further requests for clarification!",
" We are happy to respond to any further requests for clarification!",
" We are happy to respond to any further requests for clarification!",
" We are happy to respond to any further requests for clarification! ",
" Thank you for the res... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"sFm0fBj5IKA",
"XOM48CiYgTo",
"fGikr0nkyY4",
"G5F_-y2pGw9",
"lhzeu8s-vBe",
"Pmq71Cpy35j",
"iclr_2022_CmsfC7u054S",
"9RB9szhB_k8",
"XOM48CiYgTo",
"iclr_2022_CmsfC7u054S",
"E-Kkb0_8gUO",
"o6rffcDQzot",
"IHWL-MTFTns",
"iclr_2022_CmsfC7u054S",
"OcijM05VhNP",
"fGikr0nkyY4",
"sFm0fBj5IKA",... |
iclr_2022_LtKcMgGOeLt | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | Vision Transformers (ViTs) and MLPs signal further efforts on replacing hand-wired features or inductive biases with general-purpose neural architectures. Existing works empower the models by massive data, such as large-scale pre-training and/or repeated strong data augmentations, and still report optimization-related ... | Accept (Spotlight) | ### Description
The paper demonstrates that efficient architectures such as transformers and MLP-mixers, which do not utilize translational equivariance in the design, when regularized with SAM (sharpness aware minimization) can achieve same or better performance as convolutional networks, in the vision problems where... | train | [
"D9YxUAc35sE",
"9-1iXQ6PUuK",
"jGlBRduuCXY",
"nLFk_5ifJQx",
"z0F5EKQpDty",
"DZAwpiBdrsd",
"n3hRBpOuHMY",
"YQdbb8X7uxn",
"JFYrWncclgf",
"-7YfFLOR_rP",
"s39MFq8scP",
"yaPKWbtr-hH",
"caoKcbuc53H",
"p7tb8qOmlbK",
"Czu0wWYp0y",
"u37YePG1ovA"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer y3Uf,\n\nWe added more materials in the revised paper and made it more rigorous.\nWe are just wondering are there any updates or follow-up questions about our paper?\nThanks again for your detailed and constructive reviews.\n\n",
" We appreciate your effort and thanks a lot for your positive feedb... | [
-1,
-1,
8,
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
4,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"u37YePG1ovA",
"jGlBRduuCXY",
"iclr_2022_LtKcMgGOeLt",
"DZAwpiBdrsd",
"iclr_2022_LtKcMgGOeLt",
"caoKcbuc53H",
"iclr_2022_LtKcMgGOeLt",
"z0F5EKQpDty",
"iclr_2022_LtKcMgGOeLt",
"u37YePG1ovA",
"n3hRBpOuHMY",
"YQdbb8X7uxn",
"yaPKWbtr-hH",
"Czu0wWYp0y",
"iclr_2022_LtKcMgGOeLt",
"iclr_2022_L... |
iclr_2022_0UXT6PpRpW | Large-Scale Representation Learning on Graphs via Bootstrapping | Self-supervised learning provides a promising path towards eliminating the need for costly label information in representation learning on graphs. However, to achieve state-of-the-art performance, methods often need large numbers of negative examples and rely on complex augmentations. This can be prohibitively expens... | Accept (Poster) | To perform self-supervised graph representation learning that is scalable to large graphs, the authors propose Bootstrapped Graph Latents (BGRL) that learns its graph representation by predicting alternative augmentations of the input, avoiding the need to construct negative examples. The weakness of the paper lies in ... | train | [
"Xa2Cxo5Yr9h",
"BRHoMjlTD98",
"fPpTjR9Wgy8",
"y6x8fWQ_CWQ",
"NEutf4lWc2M",
"RwQXT6xl_FM",
"rPBJN1NBXLN",
"RR3EGuh1eg",
"q4IVunRFrWl",
"D-qceqnpJOf",
"m5RaOYEF_YH",
"ul-TEI5sw5",
"CrPFve9h9V7",
"XBjTCPkN4yW"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your continued positive view of our work. We are glad these points addressed your questions and will definitely add a discussion of this to the final version of the paper.",
" Thank you for your positive response and for updating the score.\nWe are glad these points addressed your concerns and we wil... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"NEutf4lWc2M",
"fPpTjR9Wgy8",
"m5RaOYEF_YH",
"iclr_2022_0UXT6PpRpW",
"q4IVunRFrWl",
"iclr_2022_0UXT6PpRpW",
"XBjTCPkN4yW",
"CrPFve9h9V7",
"ul-TEI5sw5",
"iclr_2022_0UXT6PpRpW",
"y6x8fWQ_CWQ",
"iclr_2022_0UXT6PpRpW",
"iclr_2022_0UXT6PpRpW",
"iclr_2022_0UXT6PpRpW"
] |
iclr_2022_FlwzVjfMryn | Multi-objective Optimization by Learning Space Partition | In contrast to single-objective optimization (SOO), multi-objective optimization (MOO) requires an optimizer to find the Pareto frontier, a subset of feasible solutions that are not dominated by other feasible solutions. In this paper, we propose LaMOO, a novel multi-objective optimizer that learns a model from observe... | Accept (Poster) | Multi-objective learning is an increasingly important topic. This paper presents a method for better finding parts of the Pareto frontier through a new method to estimate the distance to the frontier and use this proxy to refine the state space partition. The reviewers found this paper interesting and compelling and g... | val | [
"fSSnf416GR4",
"z80mtefYqJ",
"OojDY-ucTlC",
"p5pK2jFQLbf",
"NIs_QrJ3lA",
"zgq4Oq0usWP",
"U0YLFAO9eW_",
"bC4mim1i19a",
"adOlQ0IKUio",
"HwU2tmWpT08",
"5iPvo4X90kq",
"ugR8G9rDzH6",
"cxp_98vU9OM",
"Q3r2wNH7fGA",
"x2975YHbTqk",
"-V7pAyCx62o",
"aQsYZ4xdqzm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the further response. I keep the positive score to this work.",
" Thank you for the reply. It resolved my concerns.",
" Authors have addressed all the issues according to my previous comments.",
" Thanks for the insightful comments. \n\n**1. Novelty**\n\nWe totally agree that the idea of learn... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"p5pK2jFQLbf",
"ugR8G9rDzH6",
"Q3r2wNH7fGA",
"zgq4Oq0usWP",
"iclr_2022_FlwzVjfMryn",
"5iPvo4X90kq",
"cxp_98vU9OM",
"iclr_2022_FlwzVjfMryn",
"NIs_QrJ3lA",
"NIs_QrJ3lA",
"NIs_QrJ3lA",
"x2975YHbTqk",
"-V7pAyCx62o",
"aQsYZ4xdqzm",
"iclr_2022_FlwzVjfMryn",
"iclr_2022_FlwzVjfMryn",
"iclr_2... |
iclr_2022_RhB1AdoFfGE | Sample and Computation Redistribution for Efficient Face Detection | Although tremendous strides have been made in uncontrolled face detection, accurate face detection with a low computation cost remains an open challenge. In this paper, we point out that computation distribution and scale augmentation are the keys to detecting small faces from low-resolution images. Motivated by these ... | Accept (Poster) | This paper received 4 quality reviews, with the final rating of 8 by 2 reviewers, and 6 by the other 2 reviewers. All reviews recognize the contributions of this work, especially its superior performance. The AC concurs with these contributions and recommends acceptance. | val | [
"z37y-9RDtPz",
"-IT3p0nHdwD",
"Bv-1bvaqK0",
"4oNWb3rdq-",
"MzAfXlHlRjO",
"tjt-ZGV6TA",
"7VskjTIdl2Q",
"Fp5U0F0xZYB",
"XHX6UNwOHKY",
"SFSAUZzgGlL",
"WSkMsmY6xR_"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author has solved my problem.",
" I understand it. Thank you for helpful comments.",
" The authors addressed my questions. Thanks.",
" We thank the reviewer for the positive comments and detailed suggestions to improve our paper. Below, we list our replies to the questions raised.\n\n**Q1**: It lacks a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"7VskjTIdl2Q",
"tjt-ZGV6TA",
"4oNWb3rdq-",
"WSkMsmY6xR_",
"SFSAUZzgGlL",
"XHX6UNwOHKY",
"Fp5U0F0xZYB",
"iclr_2022_RhB1AdoFfGE",
"iclr_2022_RhB1AdoFfGE",
"iclr_2022_RhB1AdoFfGE",
"iclr_2022_RhB1AdoFfGE"
] |
iclr_2022_a34GrNaYEcS | Distributionally Robust Models with Parametric Likelihood Ratios | As machine learning models are deployed ever more broadly, it becomes increasingly important that they are not only able to perform well on their training distribution, but also yield accurate predictions when confronted with distribution shift. The Distributionally Robust Optimization (DRO) framework proposes to addre... | Accept (Poster) | The paper builds upon parametric distributionally robust optimization (PDRO) and proposes ratio PDRO (R-PDRO) where the ratio of the worst case distribution and training distribution is parameterized by a discriminative network. This has a benefit over PDRO which needs to do generative modeling of worst case distributi... | train | [
"pVfEyDZ22oc",
"lz3azPtDTnv",
"cPy4KCf0Jet",
"Q3-klvPODA8",
"fJ9r1lJm7SQ",
"I6MBkc_q3Mh",
"h1CpN-yR-Je",
"mFXRvzzAyIF",
"bxZwwIO9kC",
"RAzebp9mLO",
"1tAl0cBqOt_",
"f-LX2RszFMZ",
"JqkLZeFwdfw",
"qToVC3JQoGX"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their positive response to our rebuttal as well as their (very) extensive second pass on the paper. We will make sure to address their latest batch of comments in the final, camera-ready version of the submission.\n\nWe briefly address some of their remaining open questions:\n\n> Regardi... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"cPy4KCf0Jet",
"h1CpN-yR-Je",
"Q3-klvPODA8",
"mFXRvzzAyIF",
"iclr_2022_a34GrNaYEcS",
"iclr_2022_a34GrNaYEcS",
"RAzebp9mLO",
"fJ9r1lJm7SQ",
"qToVC3JQoGX",
"f-LX2RszFMZ",
"JqkLZeFwdfw",
"iclr_2022_a34GrNaYEcS",
"iclr_2022_a34GrNaYEcS",
"iclr_2022_a34GrNaYEcS"
] |
iclr_2022_14F3fI6MGxX | A Generalized Weighted Optimization Method for Computational Learning and Inversion | The generalization capacity of various machine learning models exhibits different phenomena in the under- and over-parameterized regimes. In this paper, we focus on regression models such as feature regression and kernel regression and analyze a generalized weighted least-squares optimization method for computational l... | Accept (Poster) | This paper considers a generalized weighted least-squares optimization method for the random Fourier feature model. Generalization error analysis is carried out under both the over-parametrized and under-parametrized schemes, and under both noise-free and noisy scenarios.
Reviewers generally agree that this is a solid... | train | [
"cqDfbxpVO0",
"4ZvplFpUHc0",
"s9uR77CEi7y",
"jogJV1dmYx",
"DO7rwOVROx",
"WpTx_i9JvAA",
"BqaixXralFT",
"8HQ99wzelaf",
"KSBkfY5G2hR",
"5pd-LuAp1my",
"nRiS6_Kcnd8",
"sJVB3xC-eX1"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We really appreciate the reviewer for raising the score and the further comments. Please see below for our replies that address some of the concerns.\n\n1. One of the main reasons we consider the least-squares formulation is its connection to objective functions in optimization used in the literature based on oth... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
2,
2,
4
] | [
"4ZvplFpUHc0",
"jogJV1dmYx",
"iclr_2022_14F3fI6MGxX",
"s9uR77CEi7y",
"sJVB3xC-eX1",
"5pd-LuAp1my",
"nRiS6_Kcnd8",
"KSBkfY5G2hR",
"iclr_2022_14F3fI6MGxX",
"iclr_2022_14F3fI6MGxX",
"iclr_2022_14F3fI6MGxX",
"iclr_2022_14F3fI6MGxX"
] |
iclr_2022_T8vZHIRTrY | Understanding Domain Randomization for Sim-to-real Transfer | Reinforcement learning encounters many challenges when applied directly in the real world. Sim-to-real transfer is widely used to transfer the knowledge learned from simulation to the real world. Domain randomization---one of the most popular algorithms for sim-to-real transfer---has been demonstrated to be effective i... | Accept (Spotlight) | This manuscript introduces a theoretical framework to analyze the sim2real transfer gap of policies learned via domain randomization algorithms. This work focusses on understanding the success of existing domain randomization algorithms through providing a theoretical analysis. The theoretical sim2real gap analysis req... | train | [
"xwnmkriExnP",
"mBObQw8ptFo",
"STG4WD1Jysd",
"8vJ9QU1f_lh",
"PpmuquaErR7",
"XMR06McACp",
"cCfiwNZWUnq",
"w19u5nHlES4",
"LzlnW9g5v_7",
"E0AmF8Ofp9",
"BvbsCwxROLN",
"yQoITDBrhds",
"kYBjpU73POs"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks so much for your renewed response. In light of all the detailed comments, I slightly improved my rating. In summary, I still see this work slightly below the acceptance threshold. My main remaining issue with the work is that it is not yet clearly pointed out what the takeaways for a practitioner would be ... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
10,
8,
8
] | [
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2
] | [
"PpmuquaErR7",
"iclr_2022_T8vZHIRTrY",
"w19u5nHlES4",
"cCfiwNZWUnq",
"XMR06McACp",
"LzlnW9g5v_7",
"BvbsCwxROLN",
"kYBjpU73POs",
"mBObQw8ptFo",
"yQoITDBrhds",
"iclr_2022_T8vZHIRTrY",
"iclr_2022_T8vZHIRTrY",
"iclr_2022_T8vZHIRTrY"
] |
iclr_2022_SidzxAb9k30 | Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver | Although model-based reinforcement learning (RL) approaches are considered more sample efficient, existing algorithms are usually relying on sophisticated planning algorithm to couple tightly with the model-learning procedure. Hence the learned models may lack the ability of being re-used with more specialized planners... | Accept (Spotlight) | This paper addresses the reward-free exploration problem with function approximation under linear mixture MDP assumption. The analysis shows that the proposed algorithm is (nearly) minimax optimal. The proposed approach can work with any planning solver to provide an ($\epsilon + \epsilon_{opt}$)-optimal policy for an... | train | [
"1t4IKap87Hl",
"WiPy-9Qvst7",
"Yw0VXuw1TN",
"rTr_bRozi3",
"uofqynNwGJM",
"DzmT0NxElbw",
"lgkIFrT69dC",
"ap2pgiEN8Z",
"f34svts7o4v",
"_QfF8-cMGcp",
"8Q3Tls6JyUX",
"Vmf_cYVL8L7",
"fD1YjQsgZiU"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for your support. We appreciate the suggestions from all the reviewers during the review period. We will address all the comments, including fixing typos, clarifying definitions and statements, and moving implementation details to the main paper in the next version.",
"This paper addresses the reward-fre... | [
-1,
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"Yw0VXuw1TN",
"iclr_2022_SidzxAb9k30",
"f34svts7o4v",
"iclr_2022_SidzxAb9k30",
"DzmT0NxElbw",
"lgkIFrT69dC",
"_QfF8-cMGcp",
"Vmf_cYVL8L7",
"WiPy-9Qvst7",
"rTr_bRozi3",
"iclr_2022_SidzxAb9k30",
"fD1YjQsgZiU",
"iclr_2022_SidzxAb9k30"
] |
iclr_2022_KJggliHbs8 | Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction | Learning on graphs has attracted significant attention in the learning community due to numerous real-world applications. In particular, graph neural networks (GNNs), which take \emph{numerical} node features and graph structure as inputs, have been shown to achieve state-of-the-art performance on various graph-related... | Accept (Poster) | In this submission, the authors presented a framework (GIANT) for self-supervised learning to improve LM by leveraging graph information. Reviewers agree that the method is somewhat novel, the (partial) theoretical analysis is interesting, and the evaluations are strong. We thank the authors for doing an excellent job ... | val | [
"hGnTnRmS878",
"SKvGiVyo_2O",
"wFC7sneqODG",
"n8XOfKKZ4Ts",
"7Fc71R_HK_Z",
"xKmWGPT-C9",
"sfdKeeX8hE-",
"D8T1nF0j4wS",
"bvsvrhE9HXS",
"jiZ0TQ9DQM",
"fYnqFVIFM24"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper develops a self-supervised learning framework to extract node features with the aid of graph. Connections between neighborhood prediction and the XMC problem are also established. Experiments on large-scale data show the superiority of the proposed method. Strengths:\n1. The problem is well motivated.\n... | [
8,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
4
] | [
"iclr_2022_KJggliHbs8",
"hGnTnRmS878",
"hGnTnRmS878",
"fYnqFVIFM24",
"bvsvrhE9HXS",
"hGnTnRmS878",
"iclr_2022_KJggliHbs8",
"fYnqFVIFM24",
"jiZ0TQ9DQM",
"sfdKeeX8hE-",
"iclr_2022_KJggliHbs8"
] |
iclr_2022_w4cXZDDib1H | ViDT: An Efficient and Effective Fully Transformer-based Object Detector | Transformers are transforming the landscape of computer vision, especially for recognition tasks. Detection transformers are the first fully end-to-end learning systems for object detection, while vision transformers are the first fully transformer-based architecture for image classification. In this paper, we integrat... | Accept (Poster) | The paper introduces an object detection method that integrates vision and detection transformers through a novel Reconfigured Attention Module (RAM). Among other questions, the reviewers raised concerns about fair comparison with baselines, limited novelty of the RAM module, completeness of experiments, and missing de... | val | [
"_iW2hkqxOBU",
"JKIDb3znZPY",
"Sv7-nZmO6e3",
"7O0tBldVGoY",
"ilNdLpKoyD-",
"8XLSXcWdzKe",
"kIt5I7ZB2Yr",
"TsrKBY8OwO",
"6zDewbqiA0b",
"hDa7ljGdaDj",
"gvguDb5_5Et",
"v-GweTqyDap",
"Ym2Zfz8GtDS",
"9AsSJ7WmeqK",
"NsaBZQW4Nz",
"LSAwrKOtpFB",
"Zw2j2kFbRya",
"wNQX6Jta5W",
"98iZhs7oy6k"... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
"This paper proposes ViDT, a high-performance Detection Transformer with an impressive accuracy-speed trade-off. A lot of experiments as well as ablation studies are conducted to prove the effectiveness of the proposed detector. The design principle of ViDT can also generalize and inspire future detector design. Mo... | [
8,
-1,
5,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2022_w4cXZDDib1H",
"7O0tBldVGoY",
"iclr_2022_w4cXZDDib1H",
"wNQX6Jta5W",
"NsaBZQW4Nz",
"Ym2Zfz8GtDS",
"iclr_2022_w4cXZDDib1H",
"8XLSXcWdzKe",
"NsaBZQW4Nz",
"gvguDb5_5Et",
"v-GweTqyDap",
"Ym2Zfz8GtDS",
"9AsSJ7WmeqK",
"kIt5I7ZB2Yr",
"Sv7-nZmO6e3",
"_iW2hkqxOBU",
"kIt5I7ZB2Yr",
... |
iclr_2022_gjNcH0hj0LM | Coherence-based Label Propagation over Time Series for Accelerated Active Learning | Time-series data are ubiquitous these days, but lack of the labels in time-series data is regarded as a hurdle for its broad applicability. Meanwhile, active learning has been successfully adopted to reduce the labeling efforts in various tasks. Thus, this paper addresses an important issue, time-series active learning... | Accept (Poster) | The authors design a framework for active learning on time-series data. The framework, called Temporal Coherence-based Label Propagation (TCLP) leverages temporal coherence to propagate expert labels to nearby points by a plateau model. In addition to describing the framework clearly with simple pseudocode, several exp... | train | [
"InjH78aq0pl",
"4Nvcs4HMCgB",
"nQQWnpDrV6M",
"8JOWGNBY6X2",
"lsnqdV_75m",
"Tu2RAsYQuyJ",
"blncNz5Tk1z",
"hKIML3QLkia",
"pI1h7WkJ9a",
"8pqQQr_w3H",
"oFjTPmrA7PX",
"80Zp8EGM0u",
"HWv4nm79SBl",
"dzHJPV81_kU",
"6_y2AC8Z_6Y",
"wnowLABQLIk",
"4VHF6Tw8Twm",
"TkYjRiPmWeP",
"oU2cxuebJ6I",... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" **1. First, the $z$ in Eqns (5), (6), (7), (8) are dependent on the class $k$ (by definition). Hence, it would be more appropriate to denote them by $z_k$. Second, the definition of $z$ ($P(. >= 0.5)$ ) in (7) and (8) is slightly different from (5) and (6) ($P(. >= \\delta)$).**\n\nThank you for the further revie... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
10,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3
] | [
"Tu2RAsYQuyJ",
"8JOWGNBY6X2",
"dUL9OIxk0xa",
"blncNz5Tk1z",
"HWv4nm79SBl",
"8pqQQr_w3H",
"HWv4nm79SBl",
"690Ahp7uFSU",
"nQQWnpDrV6M",
"dzHJPV81_kU",
"mttGFVp9dH3",
"oU2cxuebJ6I",
"6_y2AC8Z_6Y",
"hKIML3QLkia",
"pI1h7WkJ9a",
"oFjTPmrA7PX",
"80Zp8EGM0u",
"iclr_2022_gjNcH0hj0LM",
"ic... |
iclr_2022_CCu6RcUMwK0 | Neural Link Prediction with Walk Pooling | Graph neural networks achieve high accuracy in link prediction by jointly leveraging graph topology and node attributes. Topology, however, is represented indirectly; state-of-the-art methods based on subgraph classification label nodes with distance to the target link, so that, although topological information is pres... | Accept (Poster) | This paper proposes a new link prediction algorithm based on a pooling scheme called WalkPool. The main idea is to jointly encode node representations and graph topology information into node features and conduct the learning end-to-end. The paper shows the superiority of the method against the baselines.
Strength
* T... | train | [
"WwfueCCHrQG",
"oUEzKtNC2tb",
"l-6rq6O6dJ",
"IFRH1m5JOY",
"UAfENLLJKoN",
"Jpiu55R3W3t",
"SHMlXTKuK0",
"_Ml2hpv-Uc2",
"EUceMT81tRS",
"GR9H7VHv_pc",
"GKdUrN0Y2v_",
"vmjI0ZsZf9u",
"7mRXoSd-fHP",
"g8ZrOsz40ht",
"o2FfHGqrZUu",
"bOrn5ujjFxw",
"7RA_0Pji2LW",
"-CfsYg1iQB1"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer C9p9,\n\nAt the risk of being annoying, may we ask whether our responses and additional experiments on datasets with attributes speak to your initial concerns and suggestions?\n\nBest wishes,\n\nThe authors",
" I would like to thank the authors for addressing my comments. Enhancing the experimenta... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"bOrn5ujjFxw",
"o2FfHGqrZUu",
"iclr_2022_CCu6RcUMwK0",
"g8ZrOsz40ht",
"EUceMT81tRS",
"GR9H7VHv_pc",
"vmjI0ZsZf9u",
"iclr_2022_CCu6RcUMwK0",
"7RA_0Pji2LW",
"bOrn5ujjFxw",
"l-6rq6O6dJ",
"l-6rq6O6dJ",
"iclr_2022_CCu6RcUMwK0",
"-CfsYg1iQB1",
"-CfsYg1iQB1",
"iclr_2022_CCu6RcUMwK0",
"iclr_... |
iclr_2022_M6M8BEmd6dq | PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning | We propose a new framework of synthesizing data using deep generative models in a differentially private manner.
Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion, such that training deep generative models is possible without re-using the original data.
Hence, no ... | Accept (Poster) | The paper presents a new framework of synthesizing differential private data using deep generative models. Reviewers liked the significance of the problem. They raised some concerns which was appropriately addressed in the rebuttal. We hope the authors will take feedback into account and prepare a stronger camera read... | val | [
"wtP8g4ubnFs",
"ztqDycIQTlf",
"JsIaFFcfap7",
"ntsG9Utqi-n",
"OKloB0Y3Sr",
"p333PeSmmqQ",
"rmuhPMXAvVB"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studied an important topic in the field of data synthesis: how to train a private deep generative model without reusing the original data. In this paper, the authors proposed a new framework that uses deep generative models to synthesize data in different private ways. Unlike popular gradient cleaning m... | [
6,
-1,
-1,
-1,
-1,
8,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_M6M8BEmd6dq",
"rmuhPMXAvVB",
"wtP8g4ubnFs",
"p333PeSmmqQ",
"iclr_2022_M6M8BEmd6dq",
"iclr_2022_M6M8BEmd6dq",
"iclr_2022_M6M8BEmd6dq"
] |
iclr_2022_6tmjoym9LR6 | Stability Regularization for Discrete Representation Learning | We present a method for training neural network models with discrete stochastic variables.
The core of the method is \emph{stability regularization}, which is a regularization procedure based on the idea of noise stability developed in Gaussian isoperimetric theory in the analysis of Gaussian functions.
Stability regul... | Accept (Poster) | The paper introduces a method to train neural networks based on so-called stability regularisation. The method encourages the outputs of functions of Gaussian random variables to be close to discrete and does not require temperature annealing like the Gumbel Softmax. All reviewers agreed that the proposed method was no... | train | [
"MNQoJg99IIc",
"ho-1lXDD1R6",
"O6PoiTnpxcu",
"pORJuzTUMF4",
"e45Bnqub8JW",
"TcZpKCce22o",
"d_FYFQ3jl7J",
"N-eLdLHpR3L",
"v-dXsHU4xes",
"4JuZn73KquE",
"Bb1VL9QpJBl"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. We address the remaining question below.\n>> First, I still have difficulty understanding how the extension of Theorem 1 to the categorical case...\n\nIn the categorical case, the output of a stability layer is the output of a softmax so it is not possible to get multiple 1’s or all 0’... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"ho-1lXDD1R6",
"TcZpKCce22o",
"iclr_2022_6tmjoym9LR6",
"Bb1VL9QpJBl",
"4JuZn73KquE",
"N-eLdLHpR3L",
"v-dXsHU4xes",
"iclr_2022_6tmjoym9LR6",
"iclr_2022_6tmjoym9LR6",
"iclr_2022_6tmjoym9LR6",
"iclr_2022_6tmjoym9LR6"
] |
iclr_2022_6HN7LHyzGgC | Uncertainty Modeling for Out-of-Distribution Generalization | Though remarkable progress has been achieved in various vision tasks, deep neural networks still suffer obvious performance degradation when tested in out-of-distribution scenarios. We argue that the feature statistics (mean and standard deviation), which carry the domain characteristics of the training data, can be pr... | Accept (Poster) | The paper considers the important problem of performance degradation under distribution shift and proposes a simple yet effective method to alleviate this problem. They do so by considering feature statistic to be non-deterministic and rather a multivariate Gaussian distribution. The model can be integrated into netwo... | train | [
"tZOarMArlB",
"Qi-TzHcRHp3",
"c6CpTbV3-_y",
"P9OTdx3M38b",
"442PZ2lre-p",
"FhZR88qG5m",
"2VK93izSsS5",
"KS9u3nVh0GF",
"M1ifEAhZz4b",
"SwlKCOkpG4r"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Overall, the authors have addressed my main concerns and I am happy to keep my original rate of '6: marginally above the acceptance threshold'.",
" Thanks for the authors’ responses. The authors conducted multiple experiments that address all my concerns. Overall, I think it is a good work and I will keep my or... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"2VK93izSsS5",
"c6CpTbV3-_y",
"P9OTdx3M38b",
"442PZ2lre-p",
"SwlKCOkpG4r",
"M1ifEAhZz4b",
"KS9u3nVh0GF",
"iclr_2022_6HN7LHyzGgC",
"iclr_2022_6HN7LHyzGgC",
"iclr_2022_6HN7LHyzGgC"
] |
iclr_2022__XNtisL32jv | Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting | Recently, brain-inspired spiking neuron networks (SNNs) have attracted widespread research interest because of their event-driven and energy-efficient characteristics. It is difficult to efficiently train deep SNNs due to the non-differentiability of its activation function, which disables the typically used gradient d... | Accept (Poster) | The paper proposes a new loss function for the training of spiking neural networks leading to significant improvements in generalization performance across a variety of datasets and network architectures. While conceptually simple, the approach leads to substantial performance gains, and some intuition is provided to e... | train | [
"hq-BQSzvT92",
"vDin23ly3Ee",
"fivMEDdwnAF",
"Ui9trPsq9qD",
"tkeF1_xrVE",
"ElTdH3tEI-",
"-21jvIvvznq",
"nTSwi9yAsLg",
"eDACWKKfd_n",
"fztnWSDsIo-",
"uOnErQWLAHl",
"9tdBFCJIIkZ",
"mZroW0ZQn5v",
"RNJ9xnr5WX4",
"NYwCySVNSqg",
"bJtAyfRYG6v",
"7q5scBpG48l",
"ta-fBhSopBM",
"l1M2cOLJzII... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public",
"author",
"author",
"public",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"offici... | [
"This paper proposes a new training approach, the temporal efficient training (TET). This algorithm utilizes a new loss function to improve the generalizability of SNNs. Further, a new training pipeline is presented to reduce the simulation time of SNNs. This work outperforms the SOTA on the static datasets and neu... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"iclr_2022__XNtisL32jv",
"iclr_2022__XNtisL32jv",
"-21jvIvvznq",
"GB_xgaD9KDI",
"eDACWKKfd_n",
"nTSwi9yAsLg",
"9tdBFCJIIkZ",
"qszgp2ZsDtng",
"YD9-fT1ywei",
"9tdBFCJIIkZ",
"iclr_2022__XNtisL32jv",
"RNJ9xnr5WX4",
"NYwCySVNSqg",
"bJtAyfRYG6v",
"7q5scBpG48l",
"uOnErQWLAHl",
"ta-fBhSopBM"... |
iclr_2022_fCG75wd39ze | LORD: Lower-Dimensional Embedding of Log-Signature in Neural Rough Differential Equations | The problem of processing very long time-series data (e.g., a length of more than 10,000) is a long-standing research problem in machine learning. Recently, one breakthrough, called neural rough differential equations (NRDEs), has been proposed and has shown that it is able to process such data. Their main concept is t... | Accept (Poster) | This paper proposes a novel method for training neural rough differential equations, a recent model for processing very long time-series data. The method involves a lower-dimensional embedding of the log-signature, which is obtained via pretrained autoencoder to reduce overhead. The results show significant and consist... | train | [
"yav3RWuXsiY",
"shKGMPOAI1t",
"yxIb3c8kiqO",
"e0uY8nc-Hrw"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to increase the efficiency of neural rough differential equations by encoding the log-signature into a lower dimensional space that is predictive of higher order signatures during a pre-training stage. \nThis enables the authors to work with lower dimensional log-signatures during the main trai... | [
6,
8,
8,
8
] | [
4,
4,
4,
5
] | [
"iclr_2022_fCG75wd39ze",
"iclr_2022_fCG75wd39ze",
"iclr_2022_fCG75wd39ze",
"iclr_2022_fCG75wd39ze"
] |
iclr_2022_TySnJ-0RdKI | Backdoor Defense via Decoupling the Training Process | Recent studies have revealed that deep neural networks (DNNs) are vulnerable to backdoor attacks, where attackers embed hidden backdoors in the DNN model by poisoning a few training samples. The attacked model behaves normally on benign samples, whereas its prediction will be maliciously changed when the backdoor is ac... | Accept (Poster) | Inspired by the observation that the poisoned samples tend to cluster together in the feature space of the attacked DNN model, which is mostly due to the end-to-end supervised training paradigm, the authors propose a novel defense method based on contrastive learning and decoupled end-to-end training to defend against ba... | train | [
"dRLhLDqPed",
"KuglqKVAzC",
"8j4hnob-ciY",
"fZJe169x2sU",
"r4Y19WjQI67",
"WNi_vribrJ",
"lF-yPl-oaig",
"Nt8sL9L0Ad",
"jJxjR_V9t4x",
"HOHVVxZ6EnW",
"abFobeXtDi",
"jq_i5DOWpo",
"Yc352LAz8Lo",
"dq1ybsCKji6",
"KYhcIBJqJBw",
"wYq2mkUoW8",
"DDFrKsIHy6",
"n7lnfnE5TyB"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We greatly appreciate your positive feedback and further insightful comments and questions. We totally understand that you may be extremely busy at this time. But we still hope that you could have a quick look at our responses to your concerns. We appreciate any feedback you could give to us. We also hope that yo... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"fZJe169x2sU",
"DDFrKsIHy6",
"fZJe169x2sU",
"jJxjR_V9t4x",
"KYhcIBJqJBw",
"iclr_2022_TySnJ-0RdKI",
"n7lnfnE5TyB",
"WNi_vribrJ",
"wYq2mkUoW8",
"DDFrKsIHy6",
"DDFrKsIHy6",
"n7lnfnE5TyB",
"wYq2mkUoW8",
"WNi_vribrJ",
"WNi_vribrJ",
"iclr_2022_TySnJ-0RdKI",
"iclr_2022_TySnJ-0RdKI",
"iclr... |
iclr_2022_KLaDXLAzzFT | Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism | Offline reinforcement learning, which seeks to utilize offline/historical data to optimize sequential decision-making strategies, has gained surging prominence in recent studies. Due to the advantage that appropriate function approximators can help mitigate the sample complexity burden in modern reinforcement learning ... | Accept (Poster) | In this paper, the authors motivate the paper well by the gap between the upper bound of the popular offline RL algorithm and the lower bound of the offline RL. By exploiting the special linear structure, the authors designed a variance-aware pessimistic value iteration, in which the variance estimation is used for rew... | train | [
"_Bp2k9u-9oB",
"mrLG-Ikm4lB",
"dHgAPywSos2",
"xfXlhFqkRp3",
"_z8NmShKbh3",
"oxRj1COrLSW",
"Uw4PcecaDz",
"Z8guUxjZUVr",
"ImaV3rSrbX5",
"OzlBKOlnTA6"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nMay I ask if we successfully address your concern? If there are some issues with it, please feel free to let us know about it.\n\nThank you,\nAuthors\n\n",
" Dear reviewer,\n\nTo further respond to the broader considerations you mentioned in the initial review, we wish to mention that, as a by... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
4
] | [
"oxRj1COrLSW",
"_z8NmShKbh3",
"Z8guUxjZUVr",
"ImaV3rSrbX5",
"OzlBKOlnTA6",
"Uw4PcecaDz",
"iclr_2022_KLaDXLAzzFT",
"iclr_2022_KLaDXLAzzFT",
"iclr_2022_KLaDXLAzzFT",
"iclr_2022_KLaDXLAzzFT"
] |
iclr_2022_ufGMqIM0a4b | InfinityGAN: Towards Infinite-Pixel Image Synthesis | We present InfinityGAN, a method to generate arbitrary-sized images. The problem is associated with several key challenges. First, scaling existing models to an arbitrarily large image size is resource-constrained, both in terms of computation and availability of large-field-of-view training data. InfinityGAN trains an... | Accept (Poster) | 3 reviewers recommend accept, 1 rates the paper marginally above acceptance. The authors provided satisfactory answers to criticism -- all in all this is a paper worth accepting at ICLR. Please make sure that criticism in the reviews is adequately addressed in the final version, e.g. include various experimental result... | train | [
"29KKIoFrpIj",
"33NuSNS1nKQ",
"UN1rmmhpiAK",
"Lln1Ev569A",
"cx0KhIylgH_",
"N8hTFU4Tqto",
"VAVmqm6pGiz",
"b3UhD6fFA_C",
"3y_kwMxhUog",
"ctX0e-NXOLN",
"_1u1NoYQxXF",
"H-fw9wFAmHl",
"S96z-nNhq_T",
"sRRp8Wo10b5o",
"vM4iatWajak",
"BcsimUP83Y",
"5akhY-zmxJbQ",
"8Udj0eD96FHT",
"suHw4yDM... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"a... | [
" I thank the authors for their exhaustive response to my questions. I think the new additions to the manuscript improve its scope and quality, I am now more positive about this manuscript and have updated my review and rating accordingly. ",
" Thank you for your response and the additional details provided, I ha... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"cx0KhIylgH_",
"suHw4yDMghR",
"nqB4K_KMOnG7",
"lCgUnRfsTIki",
"iclr_2022_ufGMqIM0a4b",
"cx0KhIylgH_",
"cx0KhIylgH_",
"3y_kwMxhUog",
"BcsimUP83Y",
"sRRp8Wo10b5o",
"5akhY-zmxJbQ",
"8Udj0eD96FHT",
"vM4iatWajak",
"cx0KhIylgH_",
"cx0KhIylgH_",
"cx0KhIylgH_",
"cx0KhIylgH_",
"cx0KhIylgH_"... |
iclr_2022_nKWjE4QF1hB | AlphaZero-based Proof Cost Network to Aid Game Solving | The AlphaZero algorithm learns and plays games without hand-crafted expert knowledge. However, since its objective is to play well, we hypothesize that a better objective can be defined for the related but separate task of solving games. This paper proposes a novel approach to solving problems by modifying the training... | Accept (Poster) | This paper modifies the AlphaZero algorithm to generate proof tree size heuristics and shows empirical improvements over standard search algorithms. This is an interesting distinction that might lead to algorithms with distinct play styles and a deeper understanding of the games that we apply our agents to. The two po... | val | [
"aney4L8nlgj",
"9jzzAJgpADE",
"lC4XjQpp5x",
"S28ljeR2-R",
"-GHKLVFy7De",
"q3pWusA3GE8",
"0c7viAQhad1",
"PMMOZ1klns",
"hK7oxLGsYkI",
"LI1ogbKbeXW",
"ZgY57ZdqcvJ",
"tmMGdqosM_H",
"fQWlo-gSC7N",
"Bbf02-L3oA_",
"I_G6elJhyP",
"JUbyT7Jp0Hh",
"aZy6o2J26Wm",
"9JcPoz5cV7H",
"LgsffSQ87nh",... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"... | [
" > I suggest the authors clearly state the solution concept by explaining what properties an optimal strategy should have.\n- We have already added the definition for a game's solution in footnote 1 as reviewer EgiY suggested. Section 2.1 illustrates more details about the properties of an optimal strategy, includ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"-GHKLVFy7De",
"-GHKLVFy7De",
"tmMGdqosM_H",
"-GHKLVFy7De",
"q3pWusA3GE8",
"hK7oxLGsYkI",
"Bbf02-L3oA_",
"LI1ogbKbeXW",
"zK4MyNdvkYU",
"tU2HAtFJaAZ",
"iclr_2022_nKWjE4QF1hB",
"JUbyT7Jp0Hh",
"iclr_2022_nKWjE4QF1hB",
"I_G6elJhyP",
"9JcPoz5cV7H",
"LgsffSQ87nh",
"iclr_2022_nKWjE4QF1hB",
... |
iclr_2022_xENf4QUL4LW | Sample Selection with Uncertainty of Losses for Learning with Noisy Labels | In learning with noisy labels, the sample selection approach is very popular, which regards small-loss data as correctly labeled data during training. However, losses are generated on-the-fly based on the model being trained with noisy labels, and thus large-loss data are likely but not certain to be incorrect. There ar... | Accept (Poster) | The manuscript discusses weaknesses in previous sample selection criteria in learning with noisy labels, and proposes a new selection criterion by incorporating the uncertainty of losses, together with theoretical justification. To select samples, the manuscript uses the lower bounds of the confidence intervals derived... | train | [
"Qa5TBF_0KGK",
"9mUC1azkCt2",
"7iwNXBWBVq",
"eRa1CaF4Eb",
"bq5NIsf6E3r",
"3uEMZX2-DWo",
"rPhzndOBIf3",
"p8eC91MX79r",
"x9740Ze-7z",
"Ncy7uoTa8LZ",
"SCf1Yw-1Lr",
"nPxDZH6LrUF",
"JCdrntPWNd8",
"4OU9ZIMYz27",
"AkSDp6buVPy",
"G7dJJNUlxXB"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper discusses the potential weaknesses in previous sample selection criteria in learning with noisy labels. And then propose a new selection criterion by incorporating the uncertainty of losses, together with theoretical justification. Experiments on both synthetic noisy balanced/imbalanced datasets and rea... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4,
4
] | [
"iclr_2022_xENf4QUL4LW",
"p8eC91MX79r",
"x9740Ze-7z",
"4OU9ZIMYz27",
"3uEMZX2-DWo",
"G7dJJNUlxXB",
"Qa5TBF_0KGK",
"AkSDp6buVPy",
"JCdrntPWNd8",
"4OU9ZIMYz27",
"4OU9ZIMYz27",
"iclr_2022_xENf4QUL4LW",
"iclr_2022_xENf4QUL4LW",
"iclr_2022_xENf4QUL4LW",
"iclr_2022_xENf4QUL4LW",
"iclr_2022_x... |
iclr_2022_nbC8iTTXIrk | Optimization inspired Multi-Branch Equilibrium Models | Works have shown the strong connections between some implicit models and optimization problems. However, explorations on such relationships are limited. Most works pay attention to some common mathematical properties, such as sparsity. In this work, we propose a new type of implicit model inspired by the designing of t... | Accept (Poster) | The paper proposes a multi-scale network that uses DEQ models to incorporate samples at multiple resolutions. The authors also propose a training strategy to improve the performance of the model. The authors investigate the merits of the approach through ablation and explainability, weighing the value of hierarchical... | train | [
"hywX6uxUbiZ",
"8mc2pkDKWYb",
"uDsJq1WHNxY",
"dm2GYk0RCp",
"paAwL0Xa4D5",
"T3ho1Pszjt",
"BciEqQOiDWC",
"XDHPq43GAEN",
"KOxisoOERs",
"aM8UGeRdVZS",
"g18hgKrWxp9",
"2kt3rZQrBDs"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comments. We have tried our best to address the concerns. Since our work only proposes a new equilibrium model, we can choose the root-finding algorithm or other methods for the forward and backward propagation. But the choice of the algorithms may influence the results as your concern. For this a... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"g18hgKrWxp9",
"iclr_2022_nbC8iTTXIrk",
"BciEqQOiDWC",
"XDHPq43GAEN",
"iclr_2022_nbC8iTTXIrk",
"g18hgKrWxp9",
"8mc2pkDKWYb",
"aM8UGeRdVZS",
"2kt3rZQrBDs",
"iclr_2022_nbC8iTTXIrk",
"iclr_2022_nbC8iTTXIrk",
"iclr_2022_nbC8iTTXIrk"
] |
iclr_2022_wMpS-Z_AI_E | A Theoretical Analysis on Feature Learning in Neural Networks: Emergence from Inputs and Advantage over Fixed Features | An important characteristic of neural networks is their ability to learn representations of the input data with effective features for prediction, which is believed to be a key factor to their superior empirical performance. To better understand the source and benefit of feature learning in neural networks, we consider... | Accept (Poster) | The authors theoretically analyze learning of two-layer neural networks by gradient descent with respect to a data distribution that exposes how useful features are learned during training. Overall, the reviewers felt that the analysis yielded useful insight, and was original. During the discussion period, a reviewer... | train | [
"brSLdkd_-3u",
"tD-2lkYOWUB",
"1BscAXhGsV",
"o4ZKOldpkRa",
"GUNAF4vGDz",
"Z70OlU4RmZ8",
"sMBPEmMq94W",
"bVIKWtMckSW",
"E1jdzI8oUga",
"IJJwQyltpt",
"aczudtI-axD",
"2QhRCkgYkMM",
"zHGdEphAf6r",
"22dECjxuxs0",
"Nj3_E57fQ_L",
"AslcHB-J76E",
"cdsf79q_s0J",
"uEXUY_AT7-a",
"pc2Ze1iRTC2"... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_... | [
" Thanks for the discussion! We will incorporate these intuitions in the future revision. ",
" **Why only mention this corollary**\n\nThe main reason why we didn't have a theorem for infinite-dimensional feature maps is that we don't have a unified formal statement for different kernels. As mentioned above, in ge... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
5
] | [
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"o4ZKOldpkRa",
"GUNAF4vGDz",
"iclr_2022_wMpS-Z_AI_E",
"tD-2lkYOWUB",
"2QhRCkgYkMM",
"ue4QYUSGKFH",
"cdsf79q_s0J",
"E5PKmkS4kQb",
"ydnIC7YwhL",
"ydnIC7YwhL",
"22dECjxuxs0",
"zHGdEphAf6r",
"uEXUY_AT7-a",
"iclr_2022_wMpS-Z_AI_E",
"1BscAXhGsV",
"Z70OlU4RmZ8",
"pc2Ze1iRTC2",
"1BscAXhGsV... |
iclr_2022_5xEgrl_5FAJ | BiBERT: Accurate Fully Binarized BERT | The large pre-trained BERT has achieved remarkable performance on Natural Language Processing (NLP) tasks but is also computation and memory expensive. As one of the powerful compression approaches, binarization extremely reduces the computation and memory consumption by utilizing 1-bit parameters and bitwise operation... | Accept (Poster) | This paper presents a way to fully binarize a BERT model. The authors convincingly demonstrate that a naive binarization results in large quality losses and then propose amendments. It is pretty impressive that it is possible to get a fully binarized model to work at all. At the same time, the quality losses are still s... | train | [
"4inq0vvKBiQ",
"VP46-3WcB8h",
"_2__Z_P_4P0",
"fLrdwBKDro9",
"kgPn2F8ae_2",
"o1BbEHHCSji",
"trdXcsV4UHP",
"qCuLKlVdoIV",
"aVHJol3mtT",
"GRR_bF78goe",
"zTzjK1kg6nd",
"BgXEByc1uDd",
"rBQKC5vvc2",
"LKnvJu9OhJ",
"4AW1DBmC7bh",
"Fg771hU713",
"EgDFY-3cX5L",
"1rlEtM6kZZv",
"olM7maLYjrs",... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
... | [
"This paper addresses the problem of fully binarizing BERT model including the network weights, embeddings and activations. Through theoretical and empirical analysis, the authors find: 1) the direct binarization of softmax-ed attention matrix is problematic; 2) distillation of the attention scores from the full-pr... | [
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
4,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"iclr_2022_5xEgrl_5FAJ",
"Fg771hU713",
"fLrdwBKDro9",
"trdXcsV4UHP",
"o1BbEHHCSji",
"iclr_2022_5xEgrl_5FAJ",
"42PNG1CKjZd",
"iclr_2022_5xEgrl_5FAJ",
"BgXEByc1uDd",
"aVHJol3mtT",
"GRR_bF78goe",
"-6PDgKnfHB",
"LKnvJu9OhJ",
"olM7maLYjrs",
"iclr_2022_5xEgrl_5FAJ",
"EgDFY-3cX5L",
"1rlEtM6... |
iclr_2022_5-2mX9_U5i | Sqrt(d) Dimension Dependence of Langevin Monte Carlo | This article considers the popular MCMC method of unadjusted Langevin Monte Carlo (LMC) and provides a non-asymptotic analysis of its sampling error in 2-Wasserstein distance. The proof is based on a refinement of mean-square analysis in Li et al. (2019), and this refined framework automates the analysis of a large cla... | Accept (Poster) | This paper provides a near-optimal analysis of the unadjusted Langevin Monte Carlo (LMC) algorithm with respect to the W2 distance. The main statement is that the mixing time is ~ d^{1/2}/eps under standard assumptions. The authors also give a nearly matching lower bound under these assumptions. The reviewers agreed th... | train | [
"b1TestQ1EZy",
"qrktFEqYo4Q",
"4ZY62XXkIgt",
"kEBVt4ONWUz",
"9fUjQUflAK-",
"5fMhaaWEMV",
"lrlqzYXHkmY",
"ryFRIp4LHS"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewer for carefully reading the paper and providing so many valuable comments which greatly improve this work. The confirmation of the significance of our work is also deeply appreciated. Below is an itemized list of responses.\n\n> numerical verification on how $\\mathbb{E}\\|\\bar{\\bo... | [
-1,
-1,
-1,
-1,
6,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"ryFRIp4LHS",
"lrlqzYXHkmY",
"5fMhaaWEMV",
"9fUjQUflAK-",
"iclr_2022_5-2mX9_U5i",
"iclr_2022_5-2mX9_U5i",
"iclr_2022_5-2mX9_U5i",
"iclr_2022_5-2mX9_U5i"
] |
iclr_2022_uPv9Y3gmAI5 | Language model compression with weighted low-rank factorization | Factorizing a large matrix into small matrices is a popular strategy for model compression. Singular value decomposition (SVD) plays a vital role in this compression strategy, approximating a learned matrix with fewer parameters. However, SVD minimizes the squared error toward reconstructing the original matrix without... | Accept (Poster) | The paper studies the problem of task-specific compression of models obtained from fine-tuning large pre-trained language models. The work follows the line of research in which model size is reduced by decomposing the matrices in the model into smaller factors. Two-step approaches apply SVD and then fine-tune the model on... | train | [
"NUvoltr_vqw",
"-onpufoQbH",
"h-TBbwqe4cF",
"nwG4sbZhdv",
"QA6ghgz78ej",
"uzO9PJakvbw",
"BthllhLTkxD",
"SSkZIHSsxN"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The proposed paper focuses on a new technique to compress weight matrices in Machine Learning, e.g., weight matrices of layers in DNN. \nOne idea is to simply exploit truncated SVD. While this minimizes the Euclidean norm of the error, it doesn't necessarily lead to a lower task error since the truncated part migh... | [
6,
6,
-1,
6,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2022_uPv9Y3gmAI5",
"iclr_2022_uPv9Y3gmAI5",
"NUvoltr_vqw",
"iclr_2022_uPv9Y3gmAI5",
"nwG4sbZhdv",
"BthllhLTkxD",
"NUvoltr_vqw",
"-onpufoQbH"
] |
iclr_2022_tT9t_ZctZRL | Towards Deepening Graph Neural Networks: A GNTK-based Optimization Perspective | Graph convolutional networks (GCNs) and their variants have achieved great success in dealing with graph-structured data. Nevertheless, it is well known that deep GCNs suffer from the over-smoothing problem, where node representations tend to be indistinguishable as more layers are stacked up. The theoretical research ... | Accept (Poster) | In this paper, the authors establish interesting theoretical results regarding the behavior of the Graph Neural Tangent Kernel (GNTK). They also provide sufficient evidence (some of which during rebuttal) that their approach is valid. We have had many discussions and I suggest that the authors apply reviewers' comments to t... | train | [
"9bN-6yIkJI",
"f2YAtVATkID",
"QbZ6lA3KJ9l",
"Z2EKctgitS6",
"5l-TJW5ag8",
"LHoR_-bJbz",
"uReEIMEaWev",
"LEOBvr4E2ww",
"p07l8kuHmr",
"yYY9Ec8uIna",
"Vn4bFlM7Suw",
"GXQ_otQf31",
"83pxwuYA7Uv",
"5b9gXvz_hmp",
"qT5GJnY8Z4F",
"Ichkq1rg4t",
"_lGIyC1OEDk",
"sj61XGRFUiW",
"t61EHiZItoD",
... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official... | [
" Thank you for constructive comments! We hope our answers below address all your concerns.\n\n### 1. The proposed solution edge drop has been widely applied in many models. The optimal dropout rate is not a significant breakthrough. It could be easily found by the hyperparameters search.\n\nThe comparison between ... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
3
] | [
"ENbMRWPOoXd",
"rEaORI_kRju",
"LHoR_-bJbz",
"iclr_2022_tT9t_ZctZRL",
"ki4GYV2gC2O",
"Z2EKctgitS6",
"rEaORI_kRju",
"iclr_2022_tT9t_ZctZRL",
"iclr_2022_tT9t_ZctZRL",
"iclr_2022_tT9t_ZctZRL",
"ENbMRWPOoXd",
"Z2EKctgitS6",
"rEaORI_kRju",
"yYY9Ec8uIna",
"rEaORI_kRju",
"spm33NRidi",
"yYY9E... |
iclr_2022_LI2bhrE_2A | Iterative Refinement Graph Neural Network for Antibody Sequence-Structure Co-design | Antibodies are versatile proteins that bind to pathogens like viruses and stimulate the adaptive immune system. The specificity of antibody binding is determined by complementarity-determining regions (CDRs) at the tips of these Y-shaped proteins. In this paper, we propose a generative model to automatically design the... | Accept (Spotlight) | This paper proposes use of a novel generative modelling approach, over both sequences and structure of proteins, to co-design the CDR region of antibodies so achieve good binding/neutralization. The reviewers are in agreement that the problem is one of importance, and that the technical and empirical contributions are ... | train | [
"S5jx5C3ANjE",
"WFIRV-tvwTc",
"C7lljDutn1S",
"ynPV9oD49Dl",
"EKF57rZZivx",
"wb38rxdj6Ga",
"dvZyS6Bu-zk"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. I look forward to seeing future in-vitro and in-vivo results from these models. ",
" Dear reviewer,\n\nThank you for your insightful comments and positive review!\n\nQ1: The experimental evaluation may be problematic, it is not convincing to use machine learning methods to predict the ... | [
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"ynPV9oD49Dl",
"EKF57rZZivx",
"dvZyS6Bu-zk",
"wb38rxdj6Ga",
"iclr_2022_LI2bhrE_2A",
"iclr_2022_LI2bhrE_2A",
"iclr_2022_LI2bhrE_2A"
] |
iclr_2022_uqBOne3LUKy | Is Importance Weighting Incompatible with Interpolating Classifiers? | Importance weighting is a classic technique to handle distribution shifts. However, prior work has presented strong empirical and theoretical evidence demonstrating that importance weights can have little to no effect on overparameterized neural networks. \emph{Is importance weighting truly incompatible with the traini... | Accept (Poster) | The paper revisits importance weighting as an approach for combating distribution shift when training over-parameterized neural networks. Contrary to recent results that suggest that importance weighting is perhaps incompatible with over-parameterization, the authors find that the exponential tail of losses such as the l... | train | [
"qQKzsWQGzJm",
"qGW6SPz7hSn",
"KsW1C6lYwr",
"Vsir-xRW3Kj",
"XRpQl4MhuCy",
"C3zgeDro1rx",
"ZoZrSfZrueL",
"SP0gDGmim60",
"7yG3mfqxJ9",
"3418NcPQ_1a",
"hMwyzz18peu",
"XdIMdm9b2dm",
"Y4bNnr9a_LR",
"3mKkswhHQR",
"18izF6aYe9r",
"rKZXby1L_r"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We wanted to reach out once again and ask whether our rebuttal addressed your concerns. It felt to us that your main criticism of the main paper was the lack of detailed empirical evaluation. We have thoroughly addressed this in our rebuttal (see our top-level comment in OpenReview) and we hope that you can raise... | [
-1,
-1,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"XdIMdm9b2dm",
"3418NcPQ_1a",
"iclr_2022_uqBOne3LUKy",
"iclr_2022_uqBOne3LUKy",
"C3zgeDro1rx",
"ZoZrSfZrueL",
"3mKkswhHQR",
"rKZXby1L_r",
"XRpQl4MhuCy",
"KsW1C6lYwr",
"Vsir-xRW3Kj",
"18izF6aYe9r",
"7yG3mfqxJ9",
"iclr_2022_uqBOne3LUKy",
"iclr_2022_uqBOne3LUKy",
"iclr_2022_uqBOne3LUKy"
] |
iclr_2022_T4-65DNlDij | Deep Attentive Variational Inference | Stochastic Variational Inference is a powerful framework for learning large-scale probabilistic latent variable models. However, typical assumptions on the factorization or independence of the latent variables can substantially restrict its capacity for inference and generative modeling. A major line of active researc... | Accept (Poster) | This paper adds an attention mechanism to deep variational autoencoders. The authors develop a global + local attention method and achieve better log likelihoods than a variety of recent methods on MNIST and OMNIGLOT. Overall the reviewers found this paper strong (8, 8, 8, 6), particularly after the author rebuttal. ... | train | [
"5mxuL8mGLUL",
"j56vL679dKY",
"_DZu3in4oM",
"JbGr1kZAjhG",
"Vm1AV6X-454",
"vAOxfeCZ_zd",
"9RB5dXp8A75",
"-u5TZTbYEWA",
"u-IVcrphBdU",
"Gv2Szf4TZ9z",
"NDVMPjp75T",
"qg-nmFeMX3q",
"6RcHS7vyuY",
"ZfKs64kIb3n"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper identifies a common problem in previous VAE related models: adding more stochastic layers to an already very deep model yields small predictive improvement while substantially increasing the inference and training time. Therefore, a new model that proposes to use attention mechanisms to build more expre... | [
8,
-1,
-1,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_T4-65DNlDij",
"5mxuL8mGLUL",
"qg-nmFeMX3q",
"iclr_2022_T4-65DNlDij",
"iclr_2022_T4-65DNlDij",
"u-IVcrphBdU",
"iclr_2022_T4-65DNlDij",
"Vm1AV6X-454",
"Vm1AV6X-454",
"5mxuL8mGLUL",
"ZfKs64kIb3n",
"JbGr1kZAjhG",
"JbGr1kZAjhG",
"iclr_2022_T4-65DNlDij"
] |
iclr_2022_gbe1zHyA73 | Constrained Physical-Statistics Models for Dynamical System Identification and Prediction | Modeling dynamical systems combining prior physical knowledge and machine learning (ML) is promising in scientific problems when the underlying processes are not fully understood, e.g. when the dynamics is partially known. A common practice to identify the respective parameters of the physical and ML components is to f... | Accept (Poster) | The paper proposes a method for hybrid model-based/ML learning, where a model is decomposed into an interpretable parametric prior and a neural net residual. In this case, the prediction error minimization does not identify the parametric component, and an alternating optimization method is proposed to augment predict... | val | [
"KFai8sOYW8-",
"OnjXFZGpjA",
"BfkEidC3CGA",
"H7eJJB5ge2c",
"qWcp0Bjapj",
"Ih1ujnhTPOn",
"B2ALvfPWGLm",
"azly3VeHV6x",
"Prc2bBuSPo-",
"dO2QIo_CTq3",
"RIGnAUWMEF_",
"MCNzADzNK8q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I'd like to thank the authors for addressing my questions. I will keep the current score.",
"Authors propose a general framework for learning and ensuring identifiability with hybrid models. They derive strong theoretical guarantees as well as a proof of convergence in a simple affine setting. They validate the... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
3
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"Ih1ujnhTPOn",
"iclr_2022_gbe1zHyA73",
"iclr_2022_gbe1zHyA73",
"BfkEidC3CGA",
"MCNzADzNK8q",
"RIGnAUWMEF_",
"azly3VeHV6x",
"OnjXFZGpjA",
"dO2QIo_CTq3",
"iclr_2022_gbe1zHyA73",
"iclr_2022_gbe1zHyA73",
"iclr_2022_gbe1zHyA73"
] |
iclr_2022_c-4HSDAWua5 | SketchODE: Learning neural sketch representation in continuous time | Learning meaningful representations for chirographic drawing data such as sketches, handwriting, and flowcharts is a gateway for understanding and emulating human creative expression. Despite being inherently continuous-time data, existing works have treated these as discrete-time sequences, disregarding their true nat... | Accept (Poster) | The SketchODE submission is a continuously-valued model for chirographic drawing data such as handwritten digits or sketches. It relies on variational sequence-to-sequence model where the latent code z is a global encoding of the drawing dynamical, and contains a neural controlled differential equation encoder to encod... | train | [
"rLK8OrgTssZ",
"wrQBsI_rRSr",
"gHZiuSCSpK8",
"H5GG-6iEpv",
"-fRFXe1lb9R",
"Qlv9Ofvpvb8",
"lye7u_Fr3mp",
"qdQMX8Nf8hJ",
"u-k9eEY-Civ",
"AMvcXzNXtk",
"rjlYSv2tCjM",
"PZPqv7tIT97",
"nsKDN0sGzs-"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for appreciating our response, and for increasing the score.\n\nWe would like to shed light on the one remaining doubt mentioned in your latest comment, i.e. that the multi-stroke SketchODE will behave similarly to [1,2] in terms of latent space interpolation. Actually, the multi-stroke Sketch... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"-fRFXe1lb9R",
"gHZiuSCSpK8",
"qdQMX8Nf8hJ",
"iclr_2022_c-4HSDAWua5",
"lye7u_Fr3mp",
"H5GG-6iEpv",
"H5GG-6iEpv",
"nsKDN0sGzs-",
"PZPqv7tIT97",
"rjlYSv2tCjM",
"iclr_2022_c-4HSDAWua5",
"iclr_2022_c-4HSDAWua5",
"iclr_2022_c-4HSDAWua5"
] |
iclr_2022_hgKtwSb4S2 | A generalization of the randomized singular value decomposition | The randomized singular value decomposition (SVD) is a popular and effective algorithm for computing a near-best rank $k$ approximation of a matrix $A$ using matrix-vector products with standard Gaussian vectors. Here, we generalize the theory of randomized SVD to multivariate Gaussian vectors, allowing one to incorpor... | Accept (Poster) | This paper studies a generalization of the randomized SVD algorithm with non-standard Gaussian vectors, which is then used to incorporate any covariance matrix and to Hilbert-Schmidt operators. It uses a new kernel related to products of weighted Jacobi polynomials; and extensive numerical experiments further strength... | train | [
"o10u7_sv5fU",
"BMWlwO0XlB",
"xrYnu2RB3OG",
"fsUWRwlxxF",
"BY2E7ARVwWt",
"DDE3FrVDwQf",
"yMwNk4JFb8",
"jOAhojbZcom",
"WPoXthTdjZA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the randomised SVD algorithm with non-standard Gaussian vectors, which is then used to approximate Hilbert-Schmidt operators using a new kernel based on outer products of weighted Jacobi polynomials. \n0) In abstract: \"Here, we generalize the theory of randomized SVD to to multivariate Gaussian ve...
8,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_hgKtwSb4S2",
"yMwNk4JFb8",
"DDE3FrVDwQf",
"BY2E7ARVwWt",
"WPoXthTdjZA",
"jOAhojbZcom",
"o10u7_sv5fU",
"iclr_2022_hgKtwSb4S2",
"iclr_2022_hgKtwSb4S2"
] |
iclr_2022_Dl4LetuLdyK | A Fine-Grained Analysis on Distribution Shift | Robustness to distribution shifts is critical for deploying machine learning models in the real world. Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts. To this end,... | Accept (Oral) | The paper proposes a general framework to reason about fine-grained distribution shifts, evaluating a large set of different approaches in a variety of settings. All reviewers recommend acceptance. While concerns were raised, including questions about the generality of the framework, unsurprising “tips”, and unclear ta... | train | [
"t3a01KWkr1Z",
"d4k6B-0OJa1",
"CJo5yt-n2cU",
"HEAmu7OW1mZ",
"i_PhTmu2TTY",
"STGiego4Ikz",
"sPnMNcDwMX",
"inzLu1asSZ",
"XxS2W6tTxyr",
"OogjCtI0XVl",
"T9RsTfXmqqR",
"6wv-MmHoO_n",
"ynd_KotBWCL",
"uCPu97lYB88",
"wrtYmu0LQG",
"c0bEgrQS8ZD",
"FLYcuIu22U2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
" Cool! I think all my questions were answered. Although I think using unsupervised domain adaptation algorithms for domain generalization is odd, when dedicated algorithms do exist towards domain generalization [1,2,3]. And keeping in mind that this is purely an empirical paper with reasonable technical novelty and...
-1,
8,
-1,
10,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"inzLu1asSZ",
"iclr_2022_Dl4LetuLdyK",
"i_PhTmu2TTY",
"iclr_2022_Dl4LetuLdyK",
"STGiego4Ikz",
"sPnMNcDwMX",
"ynd_KotBWCL",
"T9RsTfXmqqR",
"FLYcuIu22U2",
"iclr_2022_Dl4LetuLdyK",
"uCPu97lYB88",
"HEAmu7OW1mZ",
"HEAmu7OW1mZ",
"d4k6B-0OJa1",
"d4k6B-0OJa1",
"d4k6B-0OJa1",
"OogjCtI0XVl"
] |
iclr_2022_RQ428ZptQfU | A Deep Variational Approach to Clustering Survival Data | In this work, we study the problem of clustering survival data — a challenging and so far under-explored task. We introduce a novel semi-supervised probabilistic approach to cluster survival data by leveraging recent advances in stochastic gradient variational inference. In contrast to previous work, our proposed metho... | Accept (Poster) | Four knowledgeable referees recommend Accept. I also think the paper provides a unique contribution to the field of deep survival models and I, therefore, recommend Accept | train | [
"u2iUSo3JUwS",
"zdqKCx2KMi9",
"jqZapPX6ma",
"CG6TsOzA-O",
"SPyS8n-0-9B",
"NFVmo2tcaEi",
"ksn9pqryw9A",
"p5foeoKW1_",
"cWAPWfTAHv",
"k3C0bjaPnhc",
"XBBB79fXlpU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work tackles the problem of clustering in the context of survival data using a generative model. A variational autoencoder is used for modelling the data, while the latent representation is leveraged to model the survival outcome conditioned on the assigned cluster following a Weibull distribution. This appr...
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"iclr_2022_RQ428ZptQfU",
"ksn9pqryw9A",
"CG6TsOzA-O",
"XBBB79fXlpU",
"u2iUSo3JUwS",
"k3C0bjaPnhc",
"cWAPWfTAHv",
"iclr_2022_RQ428ZptQfU",
"iclr_2022_RQ428ZptQfU",
"iclr_2022_RQ428ZptQfU",
"iclr_2022_RQ428ZptQfU"
] |
iclr_2022_JJxiD-kg-oK | Blaschke Product Neural Networks (BPNN): A Physics-Infused Neural Network for Phase Retrieval of Meromorphic Functions | Numerous physical systems are described by ordinary or partial differential equations whose solutions are given by holomorphic or meromorphic functions in the complex domain. In many cases, only the magnitude of these functions is observed on various points on the purely imaginary $j\omega$-axis since coherent measure... | Accept (Poster) | The Authors propose a neural-network based approach for the phase retrieval problem. Solving the phase retrieval problem is key for important application areas such as crystallography or radioastronomy.
After adding more baselines and other changes, 3 out of 4 reviewers recommended acceptance. Reviewer kQWk recommende... | train | [
"iceyYmh4fzV",
"jw32-0_E7KP",
"Vxq8hNF9M6k",
"T2AyT1uaz6T",
"NSdq6eYDTnF",
"K8qGlVoENPI",
"kBaPOq8S4IS",
"vmHWw1zZiPA",
"QmzsfndvXw4",
"-mtJIL1PTdI",
"Vv2-lPqqkb",
"pXg5Y3C1Fh6"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I wish to thank the authors for their response to my review. I welcome the comparisons to the other methods that I mentioned, including MUSIC and AAA. I am happy to update my score. ",
"In complex analysis, the Blaschke product is an important family of bounded analytic functions in the open unit disk that is c... | [
-1,
6,
5,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"jw32-0_E7KP",
"iclr_2022_JJxiD-kg-oK",
"iclr_2022_JJxiD-kg-oK",
"Vv2-lPqqkb",
"QmzsfndvXw4",
"iclr_2022_JJxiD-kg-oK",
"iclr_2022_JJxiD-kg-oK",
"pXg5Y3C1Fh6",
"K8qGlVoENPI",
"jw32-0_E7KP",
"Vxq8hNF9M6k",
"iclr_2022_JJxiD-kg-oK"
] |
iclr_2022_J_PHjw4gvXJ | Improving the Accuracy of Learning Example Weights for Imbalance Classification | To solve the imbalance classification problem, methods of weighting examples have been proposed. Recent work has studied how to assign adaptive weights to training examples through learning mechanisms, that is, the weights, similar to classification models, are regarded as parameters that need to be learned. However, the algorithm... | Accept (Poster) | To solve the imbalance classification problem, this paper proposes a method to learn example weights together with the parameters of a neural network. The authors proposed a novel mechanism of learning with a constraint, which allows accurate training of the weights and model at the same time. Then they combined this new l... | train | [
"RqAEXE_TgAq",
"7Cj4X8wcSCO",
"glF1N6EGgxL",
"2S8kX07NUie",
"xUFMDP29LWO",
"fot8Ztye6Jn",
"gJX_bv6_tpF",
"pma1_TOL0Bc",
"93p6WxwXEb"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"They propose a novel mechanism of learning with a constraint, which can accurately train the weights and model. Then, they propose a combined method of our learning mechanism and the work by Hu et al., which can promote each other to perform better. The proposed method provides some useful results, but there are ... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2022_J_PHjw4gvXJ",
"glF1N6EGgxL",
"93p6WxwXEb",
"pma1_TOL0Bc",
"RqAEXE_TgAq",
"gJX_bv6_tpF",
"iclr_2022_J_PHjw4gvXJ",
"iclr_2022_J_PHjw4gvXJ",
"iclr_2022_J_PHjw4gvXJ"
] |
iclr_2022_u6TRGdzhfip | Reliable Adversarial Distillation with Unreliable Teachers | In ordinary distillation, student networks are trained with soft labels (SLs) given by pretrained teacher networks, and students are expected to improve upon teachers since SLs are stronger supervision than the original hard labels. However, when considering adversarial robustness, teachers may become unreliable and ad... | Accept (Poster) | This paper proposes a new knowledge distillation (KD) method for adversarial training. The key observation is inspiring: soft-labels provided by the teacher gradually become less and less reliable during the adversarial training of the student model. Based on that, they propose to partially trust the soft labels provided... | train | [
"jsXb0djmAj-",
"mJDfftZQHGX",
"a7rltBjAYGF",
"yRHs_L2e2Vr",
"iEYZ-j8delx",
"3da2BJCTgk",
"HyTFVauEo1P",
"57jE_jdSPr",
"Z0WfcYwG9TF",
"13wHuuDTeI9",
"0qv42PQyAK",
"AryDFf4Kudp",
"6DJY1VSUa8f",
"v3gxaSi5Oln",
"d3zC9kP-oEm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" That solves my concerns. I've updated my score to 6. Thank you!",
"This paper proposes a new knowledge distillation (KD) method for adversarial training. The authors first observed that the soft-labels provided by the teacher gradually become less and less reliable during the adversarial training of student mo...
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"iEYZ-j8delx",
"iclr_2022_u6TRGdzhfip",
"d3zC9kP-oEm",
"iclr_2022_u6TRGdzhfip",
"mJDfftZQHGX",
"mJDfftZQHGX",
"mJDfftZQHGX",
"mJDfftZQHGX",
"6DJY1VSUa8f",
"v3gxaSi5Oln",
"v3gxaSi5Oln",
"d3zC9kP-oEm",
"iclr_2022_u6TRGdzhfip",
"iclr_2022_u6TRGdzhfip",
"iclr_2022_u6TRGdzhfip"
] |
iclr_2022_4Ycr8oeCoIh | When, Why, and Which Pretrained GANs Are Useful? | The literature has proposed several methods to finetune pretrained GANs on new datasets, which typically results in higher performance compared to training from scratch, especially in the limited-data regime. However, despite the apparent empirical benefits of GAN pretraining, its inner mechanisms were not analyzed in-... | Accept (Poster) | This paper empirically studies when, why, and which pretrained GANs are useful.
All the reviewers are positive about this work, that they all consider very valuable for practitioners and the community.
First building intuition through toy examples, authors conduct a large-scale study of transfer learning in GANs (w... | train | [
"vVyih5J8zL8",
"Wdt8zxBv77V",
"JIoEUX2QYxO",
"3rB8rpSy2z",
"vu4OyvbEON",
"30bAoq-L-zM",
"szTltjJxXBx",
"3CJ84EFCZGj",
"QJnP1qSHsOT",
"0ndPqhPKvNP"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification!",
" *What was the motivation for not using ADA when the dataset size was more than 50k?*\n\nFor large datasets, using ADA can deteriorate the performance. For instance, see Figure 7c in [1] (LSUN-Cats line). Therefore, we employ ADA for small datasets only.\n\n*Also, why only us... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"Wdt8zxBv77V",
"JIoEUX2QYxO",
"3rB8rpSy2z",
"QJnP1qSHsOT",
"QJnP1qSHsOT",
"3CJ84EFCZGj",
"0ndPqhPKvNP",
"iclr_2022_4Ycr8oeCoIh",
"iclr_2022_4Ycr8oeCoIh",
"iclr_2022_4Ycr8oeCoIh"
] |
iclr_2022_8uz0EWPQIMu | On the Pitfalls of Analyzing Individual Neurons in Language Models | While many studies have shown that linguistic information is encoded in hidden word representations, few have studied individual neurons, to show how and in which neurons it is encoded.
Among these, the common approach is to use an external probe to rank neurons according to their relevance to some linguistic attribute... | Accept (Poster) | This paper analyses interpretation methods that use probes to evaluate the information in individual neurons of a deep network and shows that it confounds probe quality and ranking quality, and encoded information and used information. The paper proposes a new method which does not suffer from the same drawbacks. The r... | train | [
"vW6UVu8MNQN",
"oE-Va9TB0R",
"cESJDUNSZWo",
"O34_sb3W7v",
"U1OKkrSUm0n",
"tdU_BRMvaKB",
"vvC1MLwth_W",
"NCPStRKsuiI",
"PD4nB7ZShO",
"vJBpG3r6zoC",
"E9vP7x10Xd7",
"JPx5MElvZhn",
"_mNvF6Yj64E",
"-g18KLm_3o4",
"mMYppXTyqH",
"a908a9GCRe",
"iYYFOe6JyLO"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi, as the discussion deadline is getting close, we would like to know whether we addressed your concerns, or if there are any more issues we should clarify. \nWe would be happy to hear from you.",
" Thank you for your comment and for improving your score! We're happy you found our extra experiments helpful, an... | [
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"mMYppXTyqH",
"cESJDUNSZWo",
"E9vP7x10Xd7",
"iclr_2022_8uz0EWPQIMu",
"mMYppXTyqH",
"iYYFOe6JyLO",
"O34_sb3W7v",
"O34_sb3W7v",
"vJBpG3r6zoC",
"tdU_BRMvaKB",
"O34_sb3W7v",
"a908a9GCRe",
"mMYppXTyqH",
"mMYppXTyqH",
"iclr_2022_8uz0EWPQIMu",
"iclr_2022_8uz0EWPQIMu",
"iclr_2022_8uz0EWPQIMu... |
iclr_2022_sTNHCrIKDQc | Graphon based Clustering and Testing of Networks: Algorithms and Theory | Network-valued data are encountered in a wide range of applications, and pose challenges in learning due to their complex structure and absence of vertex correspondence. Typical examples of such problems include classification or grouping of protein structures and social networks. Various methods, ranging from graph ke... | Accept (Poster) | This paper presents a new method for clustering multiple graphs, without vertex correspondence, by combing existing approaches on graphon estimation and spectral clustering. All reviewers agree that this is a neat paper with new theoretical and empirical results. The main concerns were also properly addressed during re... | train | [
"V9kozqf1arM",
"R2yJj2APpO8",
"ob5KUjYiZvX",
"-I4Q6P5EibT",
"v9dLOl00x4m",
"xvUGlqcJDle",
"Af2fLrHDy2Z",
"5EyUvpdR-83",
"3vtrIeAncH2",
"iMAE3vHuZR3"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"Motivated by the sort-and-smooth graphon estimator of Chan and Airoldi, 2014, this paper proposes two new clustering algorithms (graph distance based spectral clustering and similarity based semidefinite programming) for multiple graphs observed without vertex correspondence. The idea is to use the graphon approxi... | [
8,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_sTNHCrIKDQc",
"iclr_2022_sTNHCrIKDQc",
"iclr_2022_sTNHCrIKDQc",
"xvUGlqcJDle",
"V9kozqf1arM",
"Af2fLrHDy2Z",
"ob5KUjYiZvX",
"iMAE3vHuZR3",
"v9dLOl00x4m",
"iclr_2022_sTNHCrIKDQc"
] |
iclr_2022_RAW9tCdVxLj | Zero-CL: Instance and Feature decorrelation for negative-free symmetric contrastive learning | For self-supervised contrastive learning, models can easily collapse and generate trivial constant solutions. The issue has been mitigated by recent improvement on objective design, which however often requires square complexity either for the size of instances ($\mathcal{O}(N^{2})$) or feature dimensions ($\mathcal{O}... | Accept (Poster) | The initial reviews for this paper were somewhat diverging, however the paper did not receive any significant negative criticism to push it towards below the acceptance threshold. The reviewers have found some minor issues about the paper. Following the reviewer recommendations, the meta reviewer recommends acceptance. | val | [
"XxP2_zzb4lv",
"4Pa8_vUFKoq",
"Q-49Hhy5KPA",
"TIOOBcKp8ev",
"xGK_LqLrFz7",
"uD1h6btW26",
"v_pp2DasyB",
"g6zxeNCMWJT",
"sNPmkazwuwo",
"r1AfPolc6aS",
"8EZEpy4oC-4",
"GmWY6M4e-Cx",
"-ivbKlyGZsc",
"IOdyh2GAiI3",
"mGzn80xGLCO"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a novel contrastive loss by whitening the embedding vectors in two ways: along the instance dimension and along the feature dimension. The results are comparable to recent works.\n\n This loss function is somewhat new. However, the justification is relatively weak. The work is good, though at t... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"iclr_2022_RAW9tCdVxLj",
"mGzn80xGLCO",
"iclr_2022_RAW9tCdVxLj",
"iclr_2022_RAW9tCdVxLj",
"uD1h6btW26",
"v_pp2DasyB",
"g6zxeNCMWJT",
"r1AfPolc6aS",
"mGzn80xGLCO",
"XxP2_zzb4lv",
"IOdyh2GAiI3",
"-ivbKlyGZsc",
"iclr_2022_RAW9tCdVxLj",
"iclr_2022_RAW9tCdVxLj",
"iclr_2022_RAW9tCdVxLj"
] |
iclr_2022_iMSjopcOn0p | MT3: Multi-Task Multitrack Music Transcription | Automatic Music Transcription (AMT), inferring musical notes from raw audio, is a challenging task at the core of music understanding. Unlike Automatic Speech Recognition (ASR), which typically focuses on the words of a single speaker, AMT often requires transcribing multiple instruments simultaneously, all while prese... | Accept (Spotlight) | This work concerns Automatic Music Transcription (AMT) -- transcribing notes given the audio of the music. The paper demonstrates that a single general-purpose transformer model can perform AMT for many instruments across several different transcription datasets. The method represents the first unified AMT model that c... | train | [
"3gJKLpQUyLz",
"t_twB6U18R",
"5-hCPB-ILV",
"dYLOCLv8Spm",
"FwGeNS8CAwN",
"M3JyQk386Oo",
"j52w6BsNETg",
"bH6D9lijBK",
"XJU-6Oqusz"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their responses and rigor. I would like to congratulate them another time for this work. Hopefully, it will be pivotal for AMT research for the next couple of years.\n\nI went through the corrections and I am happy with the changes and responses. In particular:\n\n- I agree o... | [
-1,
-1,
-1,
-1,
-1,
8,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"FwGeNS8CAwN",
"XJU-6Oqusz",
"bH6D9lijBK",
"j52w6BsNETg",
"M3JyQk386Oo",
"iclr_2022_iMSjopcOn0p",
"iclr_2022_iMSjopcOn0p",
"iclr_2022_iMSjopcOn0p",
"iclr_2022_iMSjopcOn0p"
] |
iclr_2022_xa6otUDdP2W | Effective Model Sparsification by Scheduled Grow-and-Prune Methods | Deep neural networks (DNNs) are effective in solving many real-world problems. Larger DNN models usually exhibit better quality (e.g., accuracy) but their excessive computation results in long inference time. Model sparsification can reduce the computation and memory cost while maintaining model quality. Most existing ... | Accept (Poster) | The paper proposes a methodology for alternatively growing and pruning a subset of layers within a network in order to eventually produce a trained, sparse model. After discussion, all reviewers favor accept. Empirical performance of the sparse models appears strong, but requires significant computational expense dur... | train | [
"eEIK3bKtczw",
"fyNwx-I07ri",
"9Ycr-hPg01u",
"TZcnc34-rc",
"KQ7ZJFsqJ06",
"xdHOLlHj50o",
"iPJ4Yz7ACJ",
"8rnHuOWFlSy",
"Q_lUYzFkip",
"Jku5J1Kmb0t",
"zaT93Xd_drz",
"ISgBhslQMT",
"VqnjmNVANyJ",
"ouigbUSXFNj",
"pYPSiV56PG3",
"Ry55Bw2tibk",
"yn-BXcbh6K6",
"E6oe1l5V1O3",
"FWYHpoUfAP5",... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Dear reviewer xnEk,\n\nWe want to thank you for your appreciation of our work and for raising the score! Your comments are very constructive, e.g., the discussion on methods that guarantee full exploration, partition number and boundaries, etc. We will make sure that all the discussion points are carefully addre... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"9Ycr-hPg01u",
"iclr_2022_xa6otUDdP2W",
"TZcnc34-rc",
"KQ7ZJFsqJ06",
"Q_lUYzFkip",
"iclr_2022_xa6otUDdP2W",
"xdHOLlHj50o",
"C8pBQoL-Mj",
"fyNwx-I07ri",
"ISgBhslQMT",
"iclr_2022_xa6otUDdP2W",
"pYPSiV56PG3",
"ouigbUSXFNj",
"E6oe1l5V1O3",
"zaT93Xd_drz",
"zaT93Xd_drz",
"xdHOLlHj50o",
"... |
iclr_2022_WVX0NNVBBkV | Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? | While additional training data improves the robustness of deep neural networks against adversarial examples, it presents the challenge of curating a large number of specific real-world samples. We circumvent this challenge by using additional data from proxy distributions learned by advanced generative models. We firs... | Accept (Poster) | In this work, authors use proxy distributions learned by advanced generative models to improve adversarial robustness. In the discussion period, authors did a good job in addressing reviewers' questions and comments. All reviewers think the paper is above the accept threshold, so do I. | train | [
"eMxip8XVrd",
"SG0NbkQ7jG",
"LFCV1sbk1cu",
"oFAJ6Vkd-J1",
"4SWYxPhu5XJ",
"ij-XgXQWGOs",
"DoQWjxbSTFV",
"nlKvY5HqqZR",
"5yKutb56kWn",
"iEcJUWhtfQs",
"BqPsrAX5YV7",
"E93eaqCuq17",
"ZQQsSdXRdq-",
"ug6WJPoIR52",
"GFahAskuo9",
"gO2T0JsAatw",
"1JFbyS6bPkg",
"GiT74xYE6Tf"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to clarify that our goal is *not* to develop a metric that solely determines the best generative model. As the reviewer noted, this can be done more efficiently by training robust classifiers on the synthetic data. \n\nHowever, this approach only characterizes *which* generative models are most help... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"SG0NbkQ7jG",
"ZQQsSdXRdq-",
"GFahAskuo9",
"iclr_2022_WVX0NNVBBkV",
"nlKvY5HqqZR",
"iEcJUWhtfQs",
"ug6WJPoIR52",
"5yKutb56kWn",
"ij-XgXQWGOs",
"E93eaqCuq17",
"iclr_2022_WVX0NNVBBkV",
"oFAJ6Vkd-J1",
"GiT74xYE6Tf",
"1JFbyS6bPkg",
"gO2T0JsAatw",
"iclr_2022_WVX0NNVBBkV",
"iclr_2022_WVX0N... |
iclr_2022_AcrlgZ9BKed | A Reduction-Based Framework for Conservative Bandits and Reinforcement Learning | We study bandits and reinforcement learning (RL) subject to a conservative constraint where the agent is asked to perform at least as well as a given baseline policy. This setting is particularly relevant in real-world domains including digital marketing, healthcare, production, finance, etc. In this paper, we present a ... | Accept (Poster) | Summary: The paper studies RL and bandits in the conservative setting where the performance of the new, learnt policy should never be significantly worse than that of a baseline.
Discussions: The main concern of the reviewers was about novelty, and specifically what new techniques and ideas were brought in this work ... | test | [
"CqsmOwqCrrE",
"AuAdtSFSxR",
"IWoVEUrtZ4C",
"rc9RVaiSfwI",
"od0FQndNODZ",
"gsI60UDBDu",
"3SWMcZBx5_h",
"GIJqtKFjoxe",
"o3ZUYGzJy0",
"qpYdqe6ftzn",
"P3r4Eyq8QxF"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I thank the reviewers for the clarifications and raising my score to 6.",
"This paper proposes a reduction-based framework for a large class of reinforcement learning algorithms, including bandits, linear bandits, tabular MDP and linear MDP.\nThe authors notably propose a generic lower bound that holds for all ... | [
-1,
8,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"gsI60UDBDu",
"iclr_2022_AcrlgZ9BKed",
"iclr_2022_AcrlgZ9BKed",
"iclr_2022_AcrlgZ9BKed",
"rc9RVaiSfwI",
"IWoVEUrtZ4C",
"AuAdtSFSxR",
"P3r4Eyq8QxF",
"iclr_2022_AcrlgZ9BKed",
"iclr_2022_AcrlgZ9BKed",
"iclr_2022_AcrlgZ9BKed"
] |
iclr_2022_xy_2w3J3kH | Communication-Efficient Actor-Critic Methods for Homogeneous Markov Games | Recent success in cooperative multi-agent reinforcement learning (MARL) relies on centralized training and policy sharing. Centralized training eliminates the issue of non-stationarity in MARL yet induces large communication costs, and policy sharing is empirically crucial to efficient learning in certain tasks yet lacks ... | Accept (Poster) | This paper makes a contribution to the literature of cooperative multi-agent reinforcement learning by proposing a decentralized and communication-efficient training framework under a fully observable setting. The paper first defines the homogeneous or permutation invariant subclass of Markov games (homogeneous MG), wh... | train | [
"TYQa_DzRBx3",
"rQwxdLOA8Tl",
"ljWBhv-ceP",
"Q6UPeIxP4G-",
"e9iNvx-y7aE",
"MoqwTePIoXJ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a consensus-based actor-critic algorithm for multi-agent RL with limited communication during the training phase. This is in contrast to standard centralized training approaches such as MADDPG or COMA that assume access to the complete data from all agents for training critics. The paper motiva...
6,
6,
6,
-1,
-1,
6
] | [
4,
5,
3,
-1,
-1,
3
] | [
"iclr_2022_xy_2w3J3kH",
"iclr_2022_xy_2w3J3kH",
"iclr_2022_xy_2w3J3kH",
"iclr_2022_xy_2w3J3kH",
"iclr_2022_xy_2w3J3kH",
"iclr_2022_xy_2w3J3kH"
] |
iclr_2022__uCb2ynRu7Y | Path Integral Sampler: A Stochastic Control Approach For Sampling | We present Path Integral Sampler~(PIS), a novel algorithm to draw samples from unnormalized probability density functions. The PIS is built on the Schr\"odinger bridge problem which aims to recover the most likely evolution of a diffusion process given its initial distribution and terminal distribution. The PIS draws s... | Accept (Poster) | This paper introduces a control-based approach to sampling. All of the reviewers found the idea interesting. There were serious concerns by some of the reviewers regarding how the paper positioned itself relative to the literature, how it designed baselines for experiments, and how it compared itself to existing method... | train | [
"2WmXr7jPA4M",
"N4HpHA8Fx8",
"kar_Ri9fGK",
"8bPbY2Tkppu",
"VSNjkKork1T",
"SNINq3da6FI",
"M1yR96XtJps",
"_IsotbU1dng",
"cwwWU4VXTv3",
"3x3NlijRZ9I",
"Dp9GyHaJPe8",
"z0oPr5HF6o",
"2xImvHXV4p",
"2SEGgLSl5s",
"HTyYC8Eiy_g",
"zyOZIKcIPHg"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a control approach to sampling. It is proposed to use a controlled diffusion initialized at time t=0 to sample at a given time t=T from a given unnormalized target distribution. This is achieved by minimizing a forward KL between two suitable diffusions. The resulting expression is simple as it ... | [
6,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
5,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2022__uCb2ynRu7Y",
"SNINq3da6FI",
"zyOZIKcIPHg",
"2WmXr7jPA4M",
"HTyYC8Eiy_g",
"iclr_2022__uCb2ynRu7Y",
"_IsotbU1dng",
"cwwWU4VXTv3",
"2WmXr7jPA4M",
"z0oPr5HF6o",
"2xImvHXV4p",
"SNINq3da6FI",
"HTyYC8Eiy_g",
"zyOZIKcIPHg",
"iclr_2022__uCb2ynRu7Y",
"iclr_2022__uCb2ynRu7Y"
] |
iclr_2022__Wzj0J2xs2D | CURVATURE-GUIDED DYNAMIC SCALE NETWORKS FOR MULTI-VIEW STEREO | Multi-view stereo (MVS) is a crucial task for precise 3D reconstruction. Most recent studies tried to improve the performance of matching cost volume in MVS by introducing a skilled design to cost formulation or cost regularization. In this paper, we focus on learning robust feature extraction to enhance the performanc... | Accept (Poster) | All reviewers recommended accept after discussion. I am happy to accept this paper. | train | [
"h6Wfb0jT9kB",
"nwiw1XKZSeX",
"Qtthrfs0qEN",
"wcHxMEatJM4",
"jAib1ISCnoL",
"Qw_0aBJK49A",
"wnq-iRfkEZ8",
"HOxmU7xavl4",
"nHRMoicCusX",
"nSqby1635G",
"wADt_thkwB",
"JinGen9V3n",
"wOliJuz_p9b",
"okJ9ndJkuW3",
"7jSVHINakCH",
"UFVUihODrzx",
"Zyg432T5y8d",
"ireXg8ZO3W",
"EnFodgZjyso"
... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper adds upon the recent line of MVSnet works, (specifically CasMVSNet) by explicitly treating the scale needed to reconstruct each part of the scene. This is done by estimating the local curvature of the surface prior to the depth, which is then used to select (in an attention sense) a kernel with an appro... | [
6,
-1,
8,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
3,
-1,
5,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022__Wzj0J2xs2D",
"nSqby1635G",
"iclr_2022__Wzj0J2xs2D",
"jAib1ISCnoL",
"Qtthrfs0qEN",
"wADt_thkwB",
"iclr_2022__Wzj0J2xs2D",
"nSqby1635G",
"ireXg8ZO3W",
"h6Wfb0jT9kB",
"wnq-iRfkEZ8",
"iclr_2022__Wzj0J2xs2D",
"h6Wfb0jT9kB",
"h6Wfb0jT9kB",
"Qtthrfs0qEN",
"Qtthrfs0qEN",
"Qtthrfs... |
iclr_2022_CyKHoKyvgnp | Transition to Linearity of Wide Neural Networks is an Emerging Property of Assembling Weak Models | Wide neural networks with linear output layer have been shown to be near-linear, and to have near-constant neural tangent kernel (NTK), in a region containing the optimization path of gradient descent. These findings seem counter-intuitive since in general neural networks are highly complex models. Why does a linear st... | Accept (Spotlight) | The authors provide in this manuscript a theoretical analysis to explain why deep neural networks become linear in the neighbourhood of the initial optimisation point as their width tends to infinity. They approach this question by viewing the network as a multi-level assembly model.
All reviewers agree that this is a... | train | [
"sme3j5S4eVa",
"IAUuTj4NvS",
"5FyqDocX6UX",
"yc3zle1TLZ",
"UGOrOzUCpe",
"f12y6t2t60-",
"3Mg0Ud4mjw-",
"pR56a3f4NLx",
"I1Ia1SKAjoN",
"PYeVkA9H5yO"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the response and the suggestion on these empirical validations.\n\nWe think that these empirical experiments should be an improvement of the article. Now, we are conducting more such experiments in various settings. We plan to add a detailed discussion about these numerical verifications... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"yc3zle1TLZ",
"5FyqDocX6UX",
"3Mg0Ud4mjw-",
"f12y6t2t60-",
"PYeVkA9H5yO",
"I1Ia1SKAjoN",
"pR56a3f4NLx",
"iclr_2022_CyKHoKyvgnp",
"iclr_2022_CyKHoKyvgnp",
"iclr_2022_CyKHoKyvgnp"
] |
iclr_2022_-70L8lpp9DF | Hyperparameter Tuning with Renyi Differential Privacy | For many differentially private algorithms, such as the prominent noisy stochastic gradient descent (DP-SGD), the analysis needed to bound the privacy leakage of a single training run is well understood. However, few studies have reasoned about the privacy leakage resulting from the multiple training runs needed to fin... | Accept (Oral) | This paper tackles a problem at the intersection of AutoML and trustworthiness that has not been studied much before, and provides a first solution, leaving much space for a lot of interesting future research.
All reviewers agree that this is a strong paper and clearly recommend acceptance.
I recommend acceptance as an... | train | [
"MsBnQDk0WFW",
"FaGWKUw1Khv",
"32VU7QdqAfK",
"3U4uf88q16R",
"LzK1PVgXQ7Y",
"rpI_eMPCsW",
"QkG_SvNgN6N",
"udun_vLx--Z",
"VYVTaMmsw9",
"B2xrf3oZSrD",
"_kuqisf8-EO",
"jwPUp7mJ2m8",
"cTN7rY4rDiU",
"0qJRDAo9N8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for the rebuttal, which answers my questions.",
"The paper provides an considerable improvement to the DP analysis of hyperparameter tuning of DP algorithms (such as DP-SGD). The analysis is carried out using Rényi differential privacy (RDP), and the DP bounds are RDP bounds that contain the ... | [
-1,
8,
-1,
-1,
-1,
-1,
-1,
10,
-1,
-1,
-1,
-1,
8,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
3,
4
] | [
"_kuqisf8-EO",
"iclr_2022_-70L8lpp9DF",
"rpI_eMPCsW",
"jwPUp7mJ2m8",
"jwPUp7mJ2m8",
"FaGWKUw1Khv",
"VYVTaMmsw9",
"iclr_2022_-70L8lpp9DF",
"B2xrf3oZSrD",
"udun_vLx--Z",
"0qJRDAo9N8",
"cTN7rY4rDiU",
"iclr_2022_-70L8lpp9DF",
"iclr_2022_-70L8lpp9DF"
] |
iclr_2022_Q5uh1Nvv5dm | AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation | We extend semi-supervised learning to the problem of domain adaptation to learn significantly higher-accuracy models that train on one data distribution and test on a different one. With the goal of generality, we introduce AdaMatch, a unified solution for unsupervised domain adaptation (UDA), semi-supervised learning ... | Accept (Poster) | Thanks for your submission to ICLR!
This paper presents a novel way to combine domain adaptation with semi-supervised learning. The reviewers were, on the whole, quite happy with the paper. On the positive side, the results are very extensive and impressive, it's a clever way to combine domain adaptation and semi-su... | train | [
"BiJ4jAPxsqL",
"z_JH_gCxUY3",
"reU1ninix0T",
"QIqlOStGaUH",
"CB4-n_YS9dP",
"akpWKLnuneg",
"1bOcA2mMnWV",
"Cl30iA3lW0",
"YVs0G7QS3_q",
"CPSgawepDNq",
"6hBXCWB-uA2",
"u2V0GdM3yi"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response and addressing the previous concerns.",
" By logits, we refer to the vector of raw (non-normalized) predictions that the model generates, which is ordinarily then passed to a normalization function. We use the logits as input to the softmax function, which generates a vector of (normaliz... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"akpWKLnuneg",
"reU1ninix0T",
"CB4-n_YS9dP",
"u2V0GdM3yi",
"u2V0GdM3yi",
"6hBXCWB-uA2",
"CPSgawepDNq",
"YVs0G7QS3_q",
"iclr_2022_Q5uh1Nvv5dm",
"iclr_2022_Q5uh1Nvv5dm",
"iclr_2022_Q5uh1Nvv5dm",
"iclr_2022_Q5uh1Nvv5dm"
] |
iclr_2022_O476oWmiNNp | Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice | Vision Transformer (ViT) has recently demonstrated promise in computer vision problems. However, unlike Convolutional Neural Networks (CNN), it is known that the performance of ViT saturates quickly with depth increasing, due to the observed attention collapse or patch uniformity. Despite a couple of empirical solution... | Accept (Poster) | The paper analyses the frequency filtering properties of self-attention in vision architectures, shows that it mainly acts as a low-pass filter, and proposes fixes that allow to better preserve the higher frequencies. These fixes yield moderate classification accuracy gains (~0.5-1%) for several existing attention-base... | train | [
"3phosQrxUi",
"_Wm4lSwD8Nn",
"2PELTbbq8Ik",
"YNjon8AhSKZ",
"Df5e0wneFX4",
"AWlWWZcc0j",
"A6_mW4YFWJs",
"ykvp4nrEYoz",
"xSNkltTbb4u",
"nwQpjX61bv4",
"VzD2P5liu0Z",
"QvumLxEh6BN",
"xtBzxz4qOun",
"c_5Qw9ytIkq",
"DPoK75mZiBF",
"UnPKsmIaqTg"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear reviewer DHWs:\n\nThank you for the precious endorsement and review!\n\nBest,\n\nAuthors of paper 887\n",
"This paper explores the reasons why ViTs cannot go deeper. The article provides clear and solid proofs, clarifying the rank collapse in attention matrix via Fourier analysis. Meanwhile, the authors su... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2
] | [
"YNjon8AhSKZ",
"iclr_2022_O476oWmiNNp",
"YNjon8AhSKZ",
"Df5e0wneFX4",
"_Wm4lSwD8Nn",
"ykvp4nrEYoz",
"iclr_2022_O476oWmiNNp",
"xtBzxz4qOun",
"iclr_2022_O476oWmiNNp",
"xtBzxz4qOun",
"DPoK75mZiBF",
"c_5Qw9ytIkq",
"xSNkltTbb4u",
"UnPKsmIaqTg",
"_Wm4lSwD8Nn",
"iclr_2022_O476oWmiNNp"
] |
iclr_2022_9-Rfew334N | Givens Coordinate Descent Methods for Rotation Matrix Learning in Trainable Embedding Indexes | Product quantization (PQ) coupled with a space rotation, is widely used in modern approximate nearest neighbor (ANN) search systems to significantly compress the disk storage for embeddings and speed up the inner product computation. Existing rotation learning methods, however, minimize quantization distortion for fixe... | Accept (Poster) | The paper introduces a method to learn rotations of a quantized embedding end-to-end. The proposed technique seems novel, although the technical/algorithm novelty seems to be somewhat marginal.
The empirical results are promising, although do not quite match some of the claims by the authors.
Hopefully the reviewer f... | train | [
"cNTjvcJg-A",
"K3X2qnSbT6o",
"xVjYNI1PZ3A",
"8rpuAWBZE_u",
"mH4bzgPpTTk",
"rd01q1x0qtE",
"KytUaXM5Uw",
"FehDPmesBlV",
"_HVn1c5I7Ut",
"JthfRRh7oiX",
"37HUFSwborH",
"JqG0syIFuHr",
"hKdHfkPbV-k",
"qPtplhyoCiI",
"h1k0j7IjV3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I've read the revised version of the paper. Thank you for the updates. While the paper looks better now, the revisions do not significantly alter my rating. \n\nThank you for the paper. It was a good read.",
"This paper proposes a block coordinate descent algorithm for rotation learning. The algorithm is based ... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"8rpuAWBZE_u",
"iclr_2022_9-Rfew334N",
"FehDPmesBlV",
"mH4bzgPpTTk",
"rd01q1x0qtE",
"KytUaXM5Uw",
"_HVn1c5I7Ut",
"37HUFSwborH",
"h1k0j7IjV3",
"qPtplhyoCiI",
"K3X2qnSbT6o",
"hKdHfkPbV-k",
"iclr_2022_9-Rfew334N",
"iclr_2022_9-Rfew334N",
"iclr_2022_9-Rfew334N"
] |
iclr_2022_zNHzqZ9wrRB | Equivariant Transformers for Neural Network based Molecular Potentials | The prediction of quantum mechanical properties is historically plagued by a trade-off between accuracy and speed. Machine learning potentials have previously shown great success in this domain, reaching increasingly better accuracy while maintaining computational efficiency comparable with classical force fields. In t... | Accept (Spotlight) | The paper proposes a rotationally equivariant transformer architecture for predicting molecular properties. The proposed architecture demonstrates good computational efficiency and good results on three benchmarks.
All four reviewers recommend acceptance (two weak, two strong), citing the novelty of the architecture, ... | train | [
"UPawlAhAAYq",
"TjdbneJCYa4",
"8ipMyNIaoOP",
"0JUXdNAmbiE",
"1iuXiEJKosg",
"tz97CSwx-py",
"18PH99BmXaW",
"Dwa3Jbqmshg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors introduce a novel architecture for ML force fields, the Equivariant transformer (ET). It is based on the Transformer approach and can be used to predict energies (and forces) and other molecular properties (e.g., QM targets). The performance on standard benchmarks such as QM9 and MD17 is impressive. Th... | [
8,
6,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
5,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_zNHzqZ9wrRB",
"iclr_2022_zNHzqZ9wrRB",
"Dwa3Jbqmshg",
"TjdbneJCYa4",
"UPawlAhAAYq",
"18PH99BmXaW",
"iclr_2022_zNHzqZ9wrRB",
"iclr_2022_zNHzqZ9wrRB"
] |
iclr_2022_085y6YPaYjP | Zero-Shot Self-Supervised Learning for MRI Reconstruction | Deep learning (DL) has emerged as a powerful tool for accelerated MRI reconstruction, but often necessitates a database of fully-sampled measurements for training. Recent self-supervised and unsupervised learning approaches enable training without fully-sampled data. However, a database of undersampled measurements may... | Accept (Poster) | The paper considers the problem of accelerated magnetic resonance imaging where the goal is to reconstruct an image from undersampled measurements. The paper proposes a zero-shot self-supervised learning approach for accelerated deep learning based magnetic resonance imaging. The approach partitions the measurements fr... | train | [
"nezWG1zXVbh",
"KaqOGt1gwdw",
"G_etlO3mKpt",
"6J5hJWArCIp",
"abq6YAYQtyF",
"V_rFl03FTZ2",
"HQqzyaUWrqS",
"WpHvceHPelC",
"tRRKO9yX5jp",
"TvRVDPpJ4sU",
"dlbG68nlde",
"U8fj35thva9",
"_jsMaQWIkYm",
"9lopGO94IIA",
"g5Ud-KrzIdq"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" #### **pt6r.C4: Figure 2 - Part 1**\n\nWe agree that K=100 converges/stops in fewer epochs, and that the metrics may not always reflect the true reconstruction quality in MRI. For the latter point, this is indeed why we provided visual experiments throughout the study, including the selection of K (Figure 9). We... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"6J5hJWArCIp",
"G_etlO3mKpt",
"TvRVDPpJ4sU",
"tRRKO9yX5jp",
"V_rFl03FTZ2",
"WpHvceHPelC",
"g5Ud-KrzIdq",
"HQqzyaUWrqS",
"TvRVDPpJ4sU",
"_jsMaQWIkYm",
"9lopGO94IIA",
"iclr_2022_085y6YPaYjP",
"iclr_2022_085y6YPaYjP",
"iclr_2022_085y6YPaYjP",
"iclr_2022_085y6YPaYjP"
] |
iclr_2022_swrMQttr6wN | Learning to Map for Active Semantic Goal Navigation | We consider the problem of object goal navigation in unseen environments. Solving this problem requires learning of contextual semantic priors, a challenging endeavour given the spatial and semantic variability of indoor environments. Current methods learn to implicitly encode these priors through goal-oriented navigat... | Accept (Poster) | This paper addresses the problem of goal navigation in unseen environments by learning to build a local, then a registered, global occupancy and semantic map of object categories from reprojected RGB+D observations, while extrapolating (hallucinating) unseen observations from contextual semantic priors (e.g., "tables a... | train | [
"eusy01DJXzJ",
"HwmRLcp5709",
"IQgSOkDwu08",
"t9FlBwzcSnU",
"NDpV7ZAAvPq",
"CXSwis5XNEM",
"VOeJFbpdqER",
"blJWaiiOGjj",
"qKIGN3eBw3J",
"_WpyVNMUhFf",
"_R6Qbb90gqg",
"lOp2Obeymlb",
"USFgnpe96U9",
"CunddNGXe3N"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for agreeing to not block publication of our work. We appreciate the creation of the challenge and are working on our submission.",
" The standard practice in the ML/Vision/NLP communities is that when a dataset does have a test-set held behind an evaluation server, authors submit to this test server.... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"HwmRLcp5709",
"CXSwis5XNEM",
"iclr_2022_swrMQttr6wN",
"NDpV7ZAAvPq",
"_WpyVNMUhFf",
"VOeJFbpdqER",
"lOp2Obeymlb",
"iclr_2022_swrMQttr6wN",
"IQgSOkDwu08",
"IQgSOkDwu08",
"CunddNGXe3N",
"USFgnpe96U9",
"iclr_2022_swrMQttr6wN",
"iclr_2022_swrMQttr6wN"
] |
iclr_2022_3jooF27-0Wy | FlexConv: Continuous Kernel Convolutions With Differentiable Kernel Sizes | When designing Convolutional Neural Networks (CNNs), one must select the size of the convolutional kernels before training. Recent works show CNNs benefit from different kernel sizes at different layers, but exploring all possible combinations is unfeasible in practice. A more efficient approach is to learn the kernel ... | Accept (Poster) | This submission proposes a method for learning convolutional filters with trainable size, that builds on top of multiplicative filter networks. Anti-aliasing is achieved by parametrization with anisotropic Gabor filters. The reviewers were unanimous in their opinion that the paper is suitable for acceptance to ICLR. ... | train | [
"xwHno887QrU",
"STKmHzgqaex",
"PM6KrgvMzCY",
"Tmsf28cYEKY",
"oa-aUvalC4y",
"MVcWg27x9cm",
"-MfOsaDFYhJ",
"wsfMrqIRTpw",
"tVMRutk6asR",
"lHT3Ty4jVKR",
"qfbLmWhA0-5",
"JGiGegPED-A",
"7E3TmwTSkv",
"3GAfjAdSkoM",
"9hU_YeobioM",
"Ij-AXiBWLNu"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper presents a novel convolutional operation named FlexConv, to produce high bandwidth convolutional kernels with learnable kernel size at a fixed parameter cost. It is able to generate kernels with large kernel size and model long-term dependencies among elements in a sequence or an image. State-of-the-art... | [
6,
8,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_3jooF27-0Wy",
"iclr_2022_3jooF27-0Wy",
"iclr_2022_3jooF27-0Wy",
"lHT3Ty4jVKR",
"wsfMrqIRTpw",
"qfbLmWhA0-5",
"iclr_2022_3jooF27-0Wy",
"tVMRutk6asR",
"STKmHzgqaex",
"-MfOsaDFYhJ",
"JGiGegPED-A",
"Ij-AXiBWLNu",
"3GAfjAdSkoM",
"9hU_YeobioM",
"xwHno887QrU",
"iclr_2022_3jooF27-0W... |
iclr_2022_6IYp-35L-xJ | CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals | Data augmentation is a key element of deep learning pipelines, as it informs the network during training about transformations of the input data that keep the label unchanged. Manually finding adequate augmentation methods and parameters for a given pipeline is however rapidly cumbersome. In particular, while intuition... | Accept (Poster) | This paper is close to the borderline, but I think it is good enough that I recommend its acceptance. Although there were some problems raised by the reviewers, the authors managed to successfully address a majority of them. Having said that, I still recommend that the authors carefully analyze the reviews again and ma... | train | [
"uBMlVN-DBu4",
"iMea5K1dCfO",
"mJzxbC-tgSv",
"Mn825jOD1ym",
"H2v4qtLZnf8",
"5H2FpjAJI_",
"GBXc8MUrXT9",
"HqZmAxwNEHb",
"tAujZ5f7FuB",
"wT3cV_t_ZAl",
"6ouZsgmcRvt",
"JVoJvoWqa7d"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes an automatic differentiable data augmentation algorithm for EEG data that outperforms existing methods. They also propose novel augmentations for EEG that help the model to train better in low-labeled data regimes. They also show preliminary results showcasing that class-wise augmentation can be... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"iclr_2022_6IYp-35L-xJ",
"HqZmAxwNEHb",
"iclr_2022_6IYp-35L-xJ",
"iclr_2022_6IYp-35L-xJ",
"JVoJvoWqa7d",
"6ouZsgmcRvt",
"HqZmAxwNEHb",
"uBMlVN-DBu4",
"wT3cV_t_ZAl",
"iclr_2022_6IYp-35L-xJ",
"iclr_2022_6IYp-35L-xJ",
"iclr_2022_6IYp-35L-xJ"
] |
iclr_2022_ahi2XSHpAUZ | WeakM3D: Towards Weakly Supervised Monocular 3D Object Detection | Monocular 3D object detection is one of the most challenging tasks in 3D scene understanding. Due to the ill-posed nature of monocular imagery, existing monocular 3D detection methods highly rely on training with the manually annotated 3D box labels on the LiDAR point clouds. This annotation process is very laborious ... | Accept (Poster) | This paper received 5 quality reviews. The rebuttal and discussion were effective and addressed many concerns from the reviewers, after which most reviewers increase their ratings of this paper. The final rating is 6 from 4 reviewers, and 8 from 1 reviewer. The AC concurs with the positive recommendation from the revie... | val | [
"Ndb_cRgQfUw",
"5p5iHKkUHM",
"ddZ6gZUo_o",
"SQpU1oBwJGm",
"6abrhk_0CS7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work, named WeakM3D, notices that Lidar points can provide weak supervision for monocular 3D Object Detection, dispensing with expensive 3D box annotations. It further identifies some challenges when trying to use Lidar points as supervision, and proposes several losses to mitigate the problems. Quantitative ... | [
8,
6,
6,
6,
6
] | [
4,
4,
4,
4,
5
] | [
"iclr_2022_ahi2XSHpAUZ",
"iclr_2022_ahi2XSHpAUZ",
"iclr_2022_ahi2XSHpAUZ",
"iclr_2022_ahi2XSHpAUZ",
"iclr_2022_ahi2XSHpAUZ"
] |
iclr_2022_dSw0QtRMJkO | High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize | In this paper, we propose a new, simplified high probability analysis of AdaGrad for smooth, non-convex problems.
More specifically, we focus on a particular accelerated gradient (AGD) template (Lan, 2020), through which we recover the original AdaGrad and its variant with averaging, and prove a convergence rate of $\... | Accept (Poster) | The paper provides a high probability analysis for Adagrad for smooth non-convex optimization and shows its rate of convergence to critical points. Both rates for deterministic optimization and for stochastic optimization are provided. The main contribution of the paper is that unlike for SGD they don’t require knowled... | train | [
"QmZxEUjHZ6G",
"FXTVBobuAr",
"owi4XFfkS1I",
"knackQCyrc",
"JbfRPwemKeB",
"BPJyGjEqTolM",
"CYjr18EZkua",
"8OXDkw2YwZ4",
"6fvL7HL8DX",
"m4KKVikC75f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"This paper proposed a new analysis for AdaGrad method in smooth and non-convex optimization, to get high probability convergence toward stationary points.\n\nBased on some assumptions (Eqs. (2), (6) and (7)), i.e., Lipschitz, bounded variance of gradient estimates, and bounded stochastic gradient, the authors anal... | [
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2022_dSw0QtRMJkO",
"BPJyGjEqTolM",
"iclr_2022_dSw0QtRMJkO",
"6fvL7HL8DX",
"CYjr18EZkua",
"QmZxEUjHZ6G",
"8OXDkw2YwZ4",
"m4KKVikC75f",
"owi4XFfkS1I",
"iclr_2022_dSw0QtRMJkO"
] |
iclr_2022_a7H7OucbWaU | Memory Replay with Data Compression for Continual Learning | Continual learning needs to overcome catastrophic forgetting of the past. Memory replay of representative old training samples has been shown as an effective solution, and achieves the state-of-the-art (SOTA) performance. However, existing work is mainly built on a small memory buffer containing a few original data, wh... | Accept (Poster) | This works considers limitations of rehearsal-based methods in the context of continual learning (classification and object detection). Rehearsal-based methods provide a strong baseline, but a loss in predictive performance arises when the memory is limited in size. The authors propose to leverage compression (JPEG) to... | train | [
"PlVgHPFqwew",
"oAQ1OA1fR_W",
"Qd3Mvo7_Fvx",
"X1LhzpxMte1",
"rmZatAVk0Lf",
"KnDhpYUcCg",
"obA8Kd1nRDa",
"Q0t1jLRxnHq",
"bPOJKyv8fS",
"np1Wbh2DXvV",
"q0t9nmSMk-o",
"dtIjEF5h1OX",
"lQb0LtUyQ5k",
"kqArP1mUAZH",
"BAQrSLYo-i9",
"-OJMuPtSma",
"p-N-d65GKKt",
"oHerUklN1N2",
"RnYUXnCJMy",... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thank you very much for the positive feedback! We highly appreciate that.",
" Thank you for your diligent works and for providing additional experiments. And I think the proposed method is a necessary component for practical applications of compressed data replay. I keep my final decision as \"marginally above ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"oAQ1OA1fR_W",
"BAQrSLYo-i9",
"iclr_2022_a7H7OucbWaU",
"rmZatAVk0Lf",
"Q0t1jLRxnHq",
"bPOJKyv8fS",
"iclr_2022_a7H7OucbWaU",
"np1Wbh2DXvV",
"dtIjEF5h1OX",
"-OJMuPtSma",
"RnYUXnCJMy",
"q0t9nmSMk-o",
"S-jYMC8q7as",
"lQb0LtUyQ5k",
"oHerUklN1N2",
"p-N-d65GKKt",
"iclr_2022_a7H7OucbWaU",
... |
iclr_2022_Mng8CQ9eBW | BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models | Pre-trained Natural Language Processing (NLP) models, which can be adapted to a variety of downstream language tasks via fine-tuning, highly accelerate the learning progress of NLP models. However, NLP models have been shown to be vulnerable to backdoor attacks. Previous NLP backdoor attacks mainly focus on one specifi... | Accept (Poster) | The paper presents a backdoor attack approach against pre-trained models that may affect different downstream languages tasks with the same trigger. The paper shows that the downstream models can inherit security holes from upstream pre-trained models.
The paper is on the borderline and disagreement remains after dis... | train | [
"57i_7DpRUx8",
"vd5GQAewjgb",
"VnV8k9h3Vbo",
"Ty3b74nEpy",
"zfQRBXYz3i0",
"PJF8grqIno5",
"e-o4OsSW-R",
"RBuHtXKKOtE",
"5L8OT1upgKD",
"fXohuKRThd",
"Mn49OrHhMTa",
"2fMBvzS3AqU",
"34ANAyBf6no",
"Zxf35nn-wOc",
"aesGQ_yDYxF"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ### The stealthiness of triggers\nThanks a lot for the suggestion. As mentioned in our response to Reviewer Fftf, **“we agree that common words as triggers are more stealthy and powerful, but it is really hard to realize this in our scenario which has higher attack demands. This will be a very promising research ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"VnV8k9h3Vbo",
"VnV8k9h3Vbo",
"Mn49OrHhMTa",
"RBuHtXKKOtE",
"fXohuKRThd",
"e-o4OsSW-R",
"5L8OT1upgKD",
"aesGQ_yDYxF",
"34ANAyBf6no",
"Zxf35nn-wOc",
"2fMBvzS3AqU",
"iclr_2022_Mng8CQ9eBW",
"iclr_2022_Mng8CQ9eBW",
"iclr_2022_Mng8CQ9eBW",
"iclr_2022_Mng8CQ9eBW"
] |
iclr_2022_P1QUVhOtEFP | Topologically Regularized Data Embeddings | Unsupervised feature learning often finds low-dimensional embeddings that capture the structure of complex data. For tasks for which prior expert topological knowledge is available, incorporating this into the learned representation may lead to higher quality embeddings. For example, this may help one to embed the dat... | Accept (Poster) | This paper proposes loss functions to encode topological priors during data embedding, based on persistence diagram constructions from computational topology. The paper initially had some expositional issues and technical questions, but the authors did an exceptional job of addressing them during the rebuttal period--... | train | [
"wEMb1awC9Xh",
"egRfxlU1ccv",
"OpliCpjwJ5d",
"qsMJOj6Ay6N",
"tJ2jJ9E95k4",
"RirQ4P--8F",
"qWUYvnsGXXg",
"8iVhHdkVaqV",
"3EgzwGKZ0c",
"aVtcCFzj9Z1",
"SY-Fd-74av",
"ClNpTmcOP5W",
"wVNqjxoKgSD",
"2znWhv33Mf",
"dDQPgAGrjkg",
"audFr6lj1eZ",
"Kfdr0oz32lz",
"lrM_JcVnfYP",
"ki_YGE-Vjiq",... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_... | [
" Thank you very much for your detailed responses addressing each of my concerns and the great effort in your revisions. I think the paper has substantially improved. It reads much more clearly and the added theoretical details and experimental explorations enhance its contributions. For these reasons, I am willing... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
3
] | [
"qsMJOj6Ay6N",
"3EgzwGKZ0c",
"oSlY4L0RKC",
"ki_YGE-Vjiq",
"lrM_JcVnfYP",
"qWUYvnsGXXg",
"aVtcCFzj9Z1",
"JN-eR2gQ6gP",
"lrM_JcVnfYP",
"JN-eR2gQ6gP",
"JN-eR2gQ6gP",
"oSlY4L0RKC",
"oSlY4L0RKC",
"ki_YGE-Vjiq",
"ki_YGE-Vjiq",
"Kfdr0oz32lz",
"iclr_2022_P1QUVhOtEFP",
"iclr_2022_P1QUVhOtEF... |
iclr_2022_qqdXHUGec9h | Exploiting Class Activation Value for Partial-Label Learning | Partial-label learning (PLL) solves the multi-class classification problem, where each training instance is assigned a set of candidate labels that include the true label. Recent advances showed that PLL can be compatible with deep neural networks, which achieved state-of-the-art performance. However, most of the exist... | Accept (Poster) | This paper considers the so-called partial-label learning problem and proposes a class activation map that is better at making accurate predictions than the model itself on selecting the true label from candidate labels. The authors investigate the approach in experimental results on four benchmark image datasets.
Th... | val | [
"-M0q-hilkvR",
"wdlp8DBaNxd",
"qUh4VQBJksd",
"jV1NSpmpyl0",
"i3n_waicF_C",
"NVuwArfflGw",
"vDB1dueJbmW",
"rkhZnvP4az-",
"jvr1ifDxzLI",
"uo5Pm9Q7wEO",
"VWOJHs5oVj8",
"45aViWQk8i",
"Kz-hq3D-7p",
"LjhHK6g-QXm",
"yjDQdfL42hh",
"Wt_jeBjGMQK",
"CJYKrXLl88N",
"kozVSe_UMBH",
"rB7S1MUX-d"... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" Thanks for your reply and we are glad that all your questions are answered to better understand our contribution according to our rebuttal. Please let us know if you have any other concerns and we will definitely provide detailed explanations.",
" There might be a misunderstanding, all key points of the Q3 are ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"wdlp8DBaNxd",
"NVuwArfflGw",
"VWOJHs5oVj8",
"i3n_waicF_C",
"rB7S1MUX-d",
"qUh4VQBJksd",
"iclr_2022_qqdXHUGec9h",
"iclr_2022_qqdXHUGec9h",
"VWOJHs5oVj8",
"VWOJHs5oVj8",
"rkhZnvP4az-",
"Kz-hq3D-7p",
"LjhHK6g-QXm",
"yjDQdfL42hh",
"Wt_jeBjGMQK",
"kozVSe_UMBH",
"ixxa6YQHo5S",
"zu3x2tEu... |
iclr_2022_73MEhZ0anV | QUERY EFFICIENT DECISION BASED SPARSE ATTACKS AGAINST BLACK-BOX DEEP LEARNING MODELS | Despite our best efforts, deep learning models remain highly vulnerable to even tiny adversarial perturbations applied to the inputs. The ability to extract information from solely the output of a machine learning model to craft adversarial perturbations to black-box models is a practical threat against real-world syst... | Accept (Poster) | This paper introduces a technique to generate L0 adversarial examples in
a black-box manner. The reviews are largely positive, with the reviewers
especially commenting on the paper being well written and clearly explaining
the method. The main drawbacks raised by the reviewers is that the method
is not clearly compared... | train | [
"hR_r3FLAuvS",
"jIzZjMUJkQ",
"1jx_KK5Malg",
"7vBITN4MA82",
"d-X5x2PEi6r",
"JiTJjcV8G-l",
"wLn0IZq1kN",
"px4joEZVlgz",
"z2XcXhIesrK",
"JepwZTIYn3u",
"9K1_kiItlyL",
"ArsMA-4ovX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes the use of an evolutionary algorithm to construct decision-based black-box adversarial examples with L0 or sparsity constraints against image classifiers such as CNNs and Image Transformers. The algorithm uses an L2 distance constraint to check the fitness of a solution, and employs several tri... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"iclr_2022_73MEhZ0anV",
"ArsMA-4ovX",
"ArsMA-4ovX",
"ArsMA-4ovX",
"JepwZTIYn3u",
"hR_r3FLAuvS",
"9K1_kiItlyL",
"ArsMA-4ovX",
"9K1_kiItlyL",
"iclr_2022_73MEhZ0anV",
"iclr_2022_73MEhZ0anV",
"iclr_2022_73MEhZ0anV"
] |
iclr_2022_EhYjZy6e1gJ | Contrastive Label Disambiguation for Partial Label Learning | Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity. Despite the promise, the performance of PLL often lags behind the supervised counterpart. In this work, we b... | Accept (Oral) | This paper presents PiCO, a novel approach for partial label learning, which achieves very strong performance close to that of fully supervised learning and outperforms PPL baselines. The experiments are extensive with very impressive results and the analysis are thorough. | train | [
"-NKtg8CQnTx",
"ZceHv5oMY3o",
"bFa-_0xwClv",
"FTFwE53Lnmo",
"hnimOqlnUH",
"Mne6U5y44-",
"zAcBYCX1O9Y",
"2dYg36GYCyS",
"jI4LoxpfyKK",
"QfIVxQmRd8",
"Prj97uCQlkK",
"CRpXMhxuyZ0",
"lBwdz18uwtF",
"MsJN5PBk1Z_"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the informative response. Although I disagree with the setup using clean dev sets, I understand that this is common in several research areas (and sometimes also in unsupervised learning e.g. unsupervised parsing in NLP). \n\nSatisfied with the response, and the latest draft,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"QfIVxQmRd8",
"bFa-_0xwClv",
"FTFwE53Lnmo",
"Mne6U5y44-",
"Mne6U5y44-",
"zAcBYCX1O9Y",
"2dYg36GYCyS",
"Prj97uCQlkK",
"MsJN5PBk1Z_",
"lBwdz18uwtF",
"CRpXMhxuyZ0",
"iclr_2022_EhYjZy6e1gJ",
"iclr_2022_EhYjZy6e1gJ",
"iclr_2022_EhYjZy6e1gJ"
] |
iclr_2022_YZHES8wIdE | Generative Planning for Temporally Coordinated Exploration in Reinforcement Learning | Standard model-free reinforcement learning algorithms optimize a policy that generates the action to be taken in the current time step in order to maximize expected future return. While flexible, it faces difficulties arising from the inefficient exploration due to its single step nature. In this work, we present Gener... | Accept (Spotlight) | This paper proposes an alternative approach to epsilon-greedy exploration by instead generating multi-step plans from an RNN, and then stochastically determining whether to continue with the plan or re-plan. The reviewers agreed that this idea is novel and interesting, that the paper is well-written, and that the evalu... | train | [
"U-lzVulnBgr",
"lPnIlVgQKeu",
"A09XYJmjEO",
"GGcKihhFNg-",
"lLalxj5QCmk",
"b7usNut79pb",
"eGCqhaRrB7Q",
"nk-pUqOfwM-",
"V2kbZbU0cB",
"jprp2Y5oZC5",
"bDjI6M-mBX",
"2hWpG4kXJMi",
"ytHKnzDVXL",
"Od2dWFMw0l",
"1B6wFkEgYue"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method for exploration called Generative Planning method (GPM), which generates a multi-step action sequence such that the exploration is more temporally consistent and \"intentional\" compared to regular single-step action noise exploration. The multi-step action sequence is output by a gene... | [
8,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_YZHES8wIdE",
"nk-pUqOfwM-",
"jprp2Y5oZC5",
"ytHKnzDVXL",
"iclr_2022_YZHES8wIdE",
"bDjI6M-mBX",
"Od2dWFMw0l",
"U-lzVulnBgr",
"iclr_2022_YZHES8wIdE",
"Od2dWFMw0l",
"lLalxj5QCmk",
"lLalxj5QCmk",
"1B6wFkEgYue",
"iclr_2022_YZHES8wIdE",
"iclr_2022_YZHES8wIdE"
] |
iclr_2022_PtSAD3caaA2 | Maximum Entropy RL (Provably) Solves Some Robust RL Problems | Many potential applications of reinforcement learning (RL) require guarantees that the agent will perform well in the face of disturbances to the dynamics or reward function. In this paper, we prove theoretically that maximum entropy (MaxEnt) RL maximizes a lower bound on a robust RL objective, and thus can be used to ... | Accept (Poster) | The reviewers thought this paper tackles an interesting question around whether MaxEnt RL already provides an important form of robustness. Such work helps us better understand the intersection between generalization, regularization and robustness. The reviewers had a number of comments, questions and clarifications an... | train | [
"lutJARbSGCh",
"NzBDAA1K2U4",
"0xPvIvEa8k",
"WeUx4IYD9q9"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors work on the important problem of robustness guarantees in RL algorithms. They analyze the robustness of max entropy RL w.r.t. dynamics uncertainties, and reward function uncertainties. The paper is generally well written. The motivation of the work is clear, and the results seem technically correct (I ... | [
8,
5,
6,
6
] | [
3,
3,
3,
5
] | [
"iclr_2022_PtSAD3caaA2",
"iclr_2022_PtSAD3caaA2",
"iclr_2022_PtSAD3caaA2",
"iclr_2022_PtSAD3caaA2"
] |
iclr_2022_04pGUg0-pdZ | Finite-Time Convergence and Sample Complexity of Multi-Agent Actor-Critic Reinforcement Learning with Average Reward | In this paper, we establish the first finite-time convergence result of the actor-critic algorithm for fully decentralized multi-agent reinforcement learning (MARL) problems with average reward.
In this problem, a set of $N$ agents work cooperatively to maximize the global average reward through interacting with their... | Accept (Spotlight) | This paper provides actor-critic method for fully decentralized MARL. The results remove some of the restrictions from existing results and have also obtained a sample bound that matches with the bound in single agent RL. The authors also give detailed responses to the reviewers' concerns. The overall opinions from the... | train | [
"J1xSXIEU3f4",
"oAXFeNvCmg7",
"Dhyw9aPnYy4",
"hG-k8hYX_Vj",
"UX7SRWZRPdT",
"AFHdxXwsjrF",
"PwArn1Oz984",
"A1Qo1L36d7s",
"PfdGn8XB_gj",
"cAk-7tZ4L6h",
"ukzfhIy91lM",
"bZvG81QGPng",
"xWhq1TS4SHM",
"ghe7MnO0Ftw",
"CYm0aoIMbgA",
"FUsmeChTJG0",
"QIqSCMx6t1D",
"g2Pv4Pmla-F",
"2IBUtLKeb... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"... | [
" The reviewer would like to thank the authors for the careful and detailed responses. The concerns are well addressed. ",
"This paper establishes the first finite-time convergence result of the actor-critic algorithm for fully decentralized multi-agent reinforcement learning (MARL) problems with average reward. ... | [
-1,
6,
-1,
-1,
8,
6,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
4,
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"bZvG81QGPng",
"iclr_2022_04pGUg0-pdZ",
"oAXFeNvCmg7",
"FUsmeChTJG0",
"iclr_2022_04pGUg0-pdZ",
"iclr_2022_04pGUg0-pdZ",
"W-irvgWN2hr",
"UuLBCpvLtDf",
"iclr_2022_04pGUg0-pdZ",
"ukzfhIy91lM",
"AFHdxXwsjrF",
"xWhq1TS4SHM",
"Bi_gZr3O-EG",
"oAXFeNvCmg7",
"PfdGn8XB_gj",
"QIqSCMx6t1D",
"g2P... |
iclr_2022_C03Ajc-NS5W | An Autoregressive Flow Model for 3D Molecular Geometry Generation from Scratch | We consider the problem of generating 3D molecular geometries from scratch. While multiple methods have been developed for generating molecular graphs, generating 3D molecular geometries from scratch is largely under-explored. In this work, we propose G-SphereNet, a novel autoregressive flow model for generating 3D mol... | Accept (Poster) | This work introduces an autoregressive flow model that generates molecular geometries by placing one atom at the time.
In order to preserve the E(3) invariance of the density, successive atom locations are sampled relative to already placed atoms (in a coordinate system described by distance, angle and torsion).
The p... | val | [
"V0NxRODoH_4",
"o0KSCqP7eZj",
"Y0xhEKy4-w",
"Bl5Ues3m_rd",
"_7UJepNyR7",
"3z2oVs_Pip3",
"xreXpxcOrzC",
"H4gnTN3mYK",
"2zrvLnBg_0z",
"j__ILjEmnt",
"O5s7-Wer1ko",
"WRB1yqmDYDn",
"UQX659jF9pn",
"cI4OOnn_-id",
"HhEW37lVdqe",
"Sg8tL8SU9c1"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your updates here. Having read the reviews and responses, I remain in favor of accepting the paper, given the changes in response to reviewer comments. I think advancing generative models over molecular design remains an important problem.",
" Dear Reviewer UA8J,\n\nSince the discussion period will... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"HhEW37lVdqe",
"Sg8tL8SU9c1",
"Sg8tL8SU9c1",
"HhEW37lVdqe",
"cI4OOnn_-id",
"UQX659jF9pn",
"Sg8tL8SU9c1",
"Sg8tL8SU9c1",
"Sg8tL8SU9c1",
"HhEW37lVdqe",
"cI4OOnn_-id",
"UQX659jF9pn",
"iclr_2022_C03Ajc-NS5W",
"iclr_2022_C03Ajc-NS5W",
"iclr_2022_C03Ajc-NS5W",
"iclr_2022_C03Ajc-NS5W"
] |
iclr_2022_hpBTIv2uy_E | You are AllSet: A Multiset Function Framework for Hypergraph Neural Networks | Hypergraphs are used to model higher-order interactions amongst agents and there exist many practically relevant instances of hypergraph datasets. To enable the efficient processing of hypergraph data, several hypergraph neural network platforms have been proposed for learning hypergraph properties and structure, with ... | Accept (Poster) | This paper proposes a hypergraph representation learning based on multiset encoding, which covers most existing propagation methods for hypergraph neural networks. The authors provide theoretical proofs that both CE-based and tensor-based propagation rules can be represented as a composition of two multiset functions,... | train | [
"pWLo9Ys6Man",
"e7BlyYd9H13",
"1Gp7PomFRTG",
"F9swjgBR_OI",
"zQlMtqeVt3d",
"q-CDO1ue3Xc",
"Nc9Taar_2-4",
"L4-PaPG3LSF",
"8698Kwd34M_",
"z0XtVYi12NL",
"SIle_pLBnFY",
"A8EhmtIxlJt",
"eI_jhkfkGjd",
"-WLMSDaB0A",
"8iyk_8ei5fv"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Transferring standard graph operators to the hypergraph setting is non-trivial and several message passing operators have been considered in the setting of hypergraphs, including clique-expansion and tensor based. In this paper the authors propose a general framework where learnable multiset functions are used in ... | [
8,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
3,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2022_hpBTIv2uy_E",
"8iyk_8ei5fv",
"pWLo9Ys6Man",
"iclr_2022_hpBTIv2uy_E",
"Nc9Taar_2-4",
"8698Kwd34M_",
"q-CDO1ue3Xc",
"pWLo9Ys6Man",
"z0XtVYi12NL",
"SIle_pLBnFY",
"F9swjgBR_OI",
"8iyk_8ei5fv",
"-WLMSDaB0A",
"iclr_2022_hpBTIv2uy_E",
"iclr_2022_hpBTIv2uy_E"
] |
iclr_2022_v8OlxjGn23S | Low-Budget Active Learning via Wasserstein Distance: An Integer Programming Approach | Active learning is the process of training a model with limited labeled data by selecting a core subset of an unlabeled data pool to label. The large scale of data sets used in deep learning forces most sample selection strategies to employ efficient heuristics. This paper introduces an integer optimization problem for... | Accept (Poster) | This is an interesting submission, which was overall well received by the reviewers. I would recommend the authors to discuss further the vast modern litterature on efficient computation of Wasserstein distances and their minimization (see, e.g. Peyré and Cuturi 2019, and references therein) | train | [
"CUaNf43_Zc",
"vxP6g0Wohz5",
"Xbcp9QL48gn",
"whdPK9e2ebF",
"_xyGjMQY7_z",
"vDNmr0aI2Jc",
"aN5iNWVi6hW",
"Us6CoVStH8p",
"WSv28AUu6CT",
"wj4ybFohKZO"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Appreciate the clarity provided in the response. I retain my score-- I think the approach is interesting and novel. I would add that stronger uncertainty-based baselines (e.g. BALD and batchBALD) would be appreciated in the camera-ready (apologies for not raising this in my initial review), should the paper be ac... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"Xbcp9QL48gn",
"iclr_2022_v8OlxjGn23S",
"aN5iNWVi6hW",
"WSv28AUu6CT",
"wj4ybFohKZO",
"Us6CoVStH8p",
"iclr_2022_v8OlxjGn23S",
"iclr_2022_v8OlxjGn23S",
"iclr_2022_v8OlxjGn23S",
"iclr_2022_v8OlxjGn23S"
] |
iclr_2022_DesNW4-5ai9 | Transferable Adversarial Attack based on Integrated Gradients | The vulnerability of deep neural networks to adversarial examples has drawn tremendous attention from the community. Three approaches, optimizing standard objective functions, exploiting attention maps, and smoothing decision surfaces, are commonly used to craft adversarial examples. By tightly integrating the three ap... | Accept (Poster) | This paper proposes integrating three existing approaches to give a simple algorithm called TAIG for generating transferable adversarial examples under blackbox attacks.
In the original reviews, some strengths and weaknesses of the papers were highlighted although some of them have not reached general agreement after ... | train | [
"LrXXRzZZ0yI",
"ww4CrPTzR4e",
"9GF4qEAwRI",
"sxbHJuI4ji",
"20AUjNnsXjG",
"b6F2vncTYS",
"eieyhdZRDme",
"JofThpJolDG",
"Q3CyBmOMVWZ",
"IoUShUzEflT",
"2mAfyh5Wp_f",
"l7EFbOptq6p",
"H4ohc1vpZuC",
"ItdEOy2Rm6-",
"46464o9Ricp",
"TxVTxAWQ0g",
"umhZXYCV236",
"r_imaQKVAr",
"9GYQ3yTe2j-",
... | [
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In practice, adversarial examples are generated in three ways, i) solving a standard optimisation problem, ii) leveraging the salient regions of an image, or iii) smoothing the decision surfaces. The authors propose a simple technique named Transferable Attack based on Integrated Gradients (TAIG) that combines all... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"iclr_2022_DesNW4-5ai9",
"20AUjNnsXjG",
"iclr_2022_DesNW4-5ai9",
"w5HJ_wKAv5",
"r_imaQKVAr",
"w5HJ_wKAv5",
"w5HJ_wKAv5",
"w5HJ_wKAv5",
"w5HJ_wKAv5",
"LrXXRzZZ0yI",
"LrXXRzZZ0yI",
"5dJILT9Clr",
"5dJILT9Clr",
"LrXXRzZZ0yI",
"0l6Kiptn2Wd",
"5dJILT9Clr",
"5dJILT9Clr",
"9GYQ3yTe2j-",
... |
iclr_2022_POxF-LEqnF | You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction | Predicting the future trajectory of a moving agent can be easy when the past trajectory continues smoothly but is challenging when complex interactions with other agents are involved. Recent deep learning approaches for trajectory prediction show promising performance and partially attribute this to successful reasonin... | Accept (Poster) | The manuscript brings up an important issue: that current methods and datasets don't generally highlight interactions when it comes to trajectory prediction. This is despite the fact that it would seem that current methods incorporate agent interactions and that datasets appear to require reasoning about agent interact... | train | [
"nblKKvW4aLR",
"PzUcWFYNrHg",
"MRZhRiP3OiR",
"p2_V2juA-q",
"FJPYTzWQT5S",
"4_tzvRfpNyW",
"NrRUigyalm2",
"SopV0cPO2e8",
"OJksY4TI2A4",
"RT2kiAUtDfO",
"fY0lt02BbAr",
"0MaohpCDw87",
"8BBwAc137x3",
"CT8eODxY-B7",
"ju-AoO02Lp",
"NQYNLCXDl9A",
"uFjniN-MG4P",
"VXzjaWwwqIK",
"RV5DSrdvTiW... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the feedback and glad to know that our rebuttal addressed most of the raised concerns.\n\nWe kindly ask the reviewer to provide a final post-rebuttal comment about our paper and its recommendation.",
" I would like to thank the authors for their response and update in the new version. ... | [
-1,
-1,
-1,
8,
8,
-1,
10,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6
] | [
-1,
-1,
-1,
4,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"PzUcWFYNrHg",
"8BBwAc137x3",
"p2_V2juA-q",
"iclr_2022_POxF-LEqnF",
"iclr_2022_POxF-LEqnF",
"CT8eODxY-B7",
"iclr_2022_POxF-LEqnF",
"RT2kiAUtDfO",
"iclr_2022_POxF-LEqnF",
"fY0lt02BbAr",
"NrRUigyalm2",
"p2_V2juA-q",
"RV5DSrdvTiW",
"FJPYTzWQT5S",
"VXzjaWwwqIK",
"VXzjaWwwqIK",
"iclr_2022... |
iclr_2022_qSV5CuSaK_a | Few-Shot Backdoor Attacks on Visual Object Tracking | Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems. In current practice, third-party resources such as datasets, backbone networks, and training platforms are frequently used to train high-performance VOT models. Whilst ... | Accept (Poster) | This paper proposes a few-shot (untargeted) backdoor attack (FSBA) against siamese network-based visual object tracking. Contributions can be summarized as follows: First, this paper treats the attack task as an instance of multi-task learning and can be regarded as the first backdoor attack against VOT. Besides, a sim... | train | [
"ccuVO1_wePJ",
"GDVyECXo3t",
"w56uJO21PLa",
"5jtwszFvln",
"gHfIRBsAdA5",
"GaqvX-zB1tm",
"XG3qfUTSoo2",
"YLwty0O0XNt",
"ZFggXk4aX2v",
"iD8a_eu1PG",
"eLnw2vwRTjd",
"pQtxhedgdf3",
"-ri8eaVwX15",
"JsTQNZMrz6h",
"LlxiLAEINAz",
"VeYZWjdR0_E",
"OeHt19lB6x",
"VZjMWwPN46I",
"0cuGQ3da9_2",... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for your positive feedback and constructive suggestions. We will add more detailed discussions and analyses in our final version, as you suggested.",
" Thanks for the authors' response. As the first work that relates backdoor attacks against VOT, we still believe this work is vital for the community. Alt... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"GDVyECXo3t",
"1YP82Krgtj4",
"gHfIRBsAdA5",
"iclr_2022_qSV5CuSaK_a",
"XG3qfUTSoo2",
"iclr_2022_qSV5CuSaK_a",
"5jtwszFvln",
"-ri8eaVwX15",
"1YP82Krgtj4",
"5jtwszFvln",
"5jtwszFvln",
"JsTQNZMrz6h",
"iclr_2022_qSV5CuSaK_a",
"YLwty0O0XNt",
"1YP82Krgtj4",
"5jtwszFvln",
"-ri8eaVwX15",
"-... |
iclr_2022_uorVGbWV5sw | Strength of Minibatch Noise in SGD | The noise in stochastic gradient descent (SGD), caused by minibatch sampling, is poorly understood despite its practical importance in deep learning. This work presents the first systematic study of the SGD noise and fluctuations close to a local minimum. We first analyze the SGD noise in linear regression in detail an... | Accept (Spotlight) | All the reviewers think that the work is significant and new. Therefore, they support the paper to be published at ICLR 2022. Given the strong results and the “accept” consensus from the reviewers, I accept the paper as “spotlight”. The authors should implement all the reviewers’ suggestions into the final version. | train | [
"e_qekm2cKmg",
"2guNR5TG2T",
"lI-cGBXWQ6",
"Kv3AxIFhjBf",
"O-AyBttZYLB",
"ObD4w5sFfQ3",
"y5MmnAyuLFN",
"hC66eWOXif",
"wb7AwDCwf_n",
"x0kSNc1phLI",
"VUK0_zZS8iu"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response to my review. I encouraged the authors to incorporate their responses. I keep my recommendation for acceptance.",
" *Q1: A sudden drop/rise can be seen in Figures 3 and 4. Can you explain this phenomenon and what does it mean?*\n\n* Sorry for the ambiguity. It is not a dr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"lI-cGBXWQ6",
"VUK0_zZS8iu",
"x0kSNc1phLI",
"wb7AwDCwf_n",
"hC66eWOXif",
"O-AyBttZYLB",
"iclr_2022_uorVGbWV5sw",
"iclr_2022_uorVGbWV5sw",
"iclr_2022_uorVGbWV5sw",
"iclr_2022_uorVGbWV5sw",
"iclr_2022_uorVGbWV5sw"
] |
iclr_2022_3HJOA-1hb0e | Toward Efficient Low-Precision Training: Data Format Optimization and Hysteresis Quantization | As the complexity and size of deep neural networks continue to increase, low-precision training has been extensively studied in the last few years to reduce hardware overhead. Training performance is largely affected by the numeric formats representing different values in low-precision training, but finding an optimal ... | Accept (Poster) | This paper introduces a method to determine which precision to use for the weights, as well as a quantisation method using hysteresis to improve performance with low-precision weights, including 4-bits.
Reviewers tend to agree that the two points presented are useful and can have a large impact on the field.
Generally,... | val | [
"8zYww_E17iu",
"Hpd5HzdBNRq",
"8BvcEc9Rzga",
"CM0kTEyLa_q",
"mIaMW8UP7za",
"Km6jVfQ_LD8",
"yFL2QYgDn4F",
"s1S6QtHHvc",
"nYv8xxSgr5g",
"oWpk8wHs0hV",
"3VaSxc3PIz",
"19pG5_do_9",
"OgeU5637J9h",
"O4SmmtOt2lw",
"z46N5tz64bL",
"0I9M8wwucuR",
"oBeP2Dt8pFL",
"tbQLMTi-6fX",
"rlpTZ8jaxXo"... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" We thank the reviewer again for carefully reviewing our response and providing constructive feedback. We have addressed the comments and questions below.\n\n$\\textbf{Q1.}$ Figure 1 (or Figure 14) is about inference only. If a paper discusses low-precision training, much more detailed configurations would be requ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"mIaMW8UP7za",
"mIaMW8UP7za",
"mIaMW8UP7za",
"mIaMW8UP7za",
"3VaSxc3PIz",
"nYv8xxSgr5g",
"nYv8xxSgr5g",
"QZfb7VLI_Zo",
"CHxOPSlvd7",
"19pG5_do_9",
"19pG5_do_9",
"O4SmmtOt2lw",
"iclr_2022_3HJOA-1hb0e",
"OgeU5637J9h",
"oBeP2Dt8pFL",
"oBeP2Dt8pFL",
"pwXkFvWUGm",
"_2uFXb0fLV",
"_2uFX... |
iclr_2022_wwDg3bbYBIq | Learning to Remember Patterns: Pattern Matching Memory Networks for Traffic Forecasting | Traffic forecasting is a challenging problem due to complex road networks and sudden speed changes caused by various events on roads. Several models have been proposed to solve this challenging problem, with a focus on learning the spatio-temporal dependencies of roads. In this work, we propose a new perspective for co... | Accept (Poster) | The paper presents a neural architecture based on neural memory modules to model the spatiotemporal traffic data. The reviewers think this is an important application of deep learning and thus fits the topic of ICLR. The writing and the novelty of the proposed method need improvement. | train | [
"hHWOXFqvCoj",
"6Sl_nKCbRqF",
"hthQ3YPnT2n",
"CLcAgwJC6DM",
"nSy0uAUe9db",
"4jx1UJ6HM7X",
"oxIXvlrkR_U",
"PAx_kjTCe9i",
"RHSnNLVSft9",
"TIo4forRL3w",
"DiZEtrQ1_46",
"fRD3xMnCt4G"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work studied the problem of traffic speed forecasting. In particular, the authors proposed a framework that improves forecasting performance by leveraging both the spatio-temporal dependency and extracted traffic patterns. Strengths:\n- The idea of using both extracted traffic patterns and spatial dependenci... | [
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5
] | [
"iclr_2022_wwDg3bbYBIq",
"iclr_2022_wwDg3bbYBIq",
"CLcAgwJC6DM",
"fRD3xMnCt4G",
"6Sl_nKCbRqF",
"hHWOXFqvCoj",
"iclr_2022_wwDg3bbYBIq",
"hHWOXFqvCoj",
"6Sl_nKCbRqF",
"DiZEtrQ1_46",
"iclr_2022_wwDg3bbYBIq",
"iclr_2022_wwDg3bbYBIq"
] |
iclr_2022_zRJu6mU2BaE | ConFeSS: A Framework for Single Source Cross-Domain Few-Shot Learning | Most current few-shot learning methods train a model from abundantly labeled base category data and then transfer and adapt the model to sparsely labeled novel category data. These methods mostly generalize well on novel categories from the same domain as the base categories but perform poorly for distant domain catego... | Accept (Poster) | Summary:
Paper addresses the cross-domain few-shot learning scenario, where meta-learning data is unavailable, and approaches are evaluated directly on novel settings. Authors propose a 3-step approach: 1) self-supervised pretraining, 2) feature selection, 3) fine-tuning, and demonstrate gains over state-of-art.
Pro... | train | [
"sBszkc6NHC2",
"QE6pe758W7H",
"rw_rWnPup1",
"Mag2d7fOrNk",
"pQg93OyRQbd",
"Wk9F1klBDQM",
"dp8bEUXh3gu",
"0fOk7rrmJxG",
"tHfwJoFGnPr",
"RH5ZPeNj99_",
"fyl7sNm1Rm"
] | [
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for bringing your published paper [1] to our notice. It is an impressive work that produces good performance on the BCDFSL benchmark (Guo et al.). We have read your paper and can argue that the method of adversarial task augmentation (ATA) is a plug and play method that needs to be added on top of exist... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"QE6pe758W7H",
"iclr_2022_zRJu6mU2BaE",
"fyl7sNm1Rm",
"RH5ZPeNj99_",
"tHfwJoFGnPr",
"dp8bEUXh3gu",
"0fOk7rrmJxG",
"iclr_2022_zRJu6mU2BaE",
"iclr_2022_zRJu6mU2BaE",
"iclr_2022_zRJu6mU2BaE",
"iclr_2022_zRJu6mU2BaE"
] |
iclr_2022_Vog_3GXsgmb | Discovering Nonlinear PDEs from Scarce Data with Physics-encoded Learning | There have been growing interests in leveraging experimental measurements to discover the underlying partial differential equations (PDEs) that govern complex physical phenomena. Although past research attempts have achieved great success in data-driven PDE discovery, the robustness of the existing methods cannot be gu... | Accept (Poster) | The paper introduces a pipeline to discover PDEs from scarce and noisy data. Reviewers engaged in a very thoughtful discussion with the authors. I read the extensive rebuttal, and I believe the authors have addressed the major concerns claimed by the reviewers. I ask the authors to make sure to include all the changes ... | train | [
"o8ffGOC71o",
"lL6y_donvoU",
"gyrzojUBfVR",
"dvHx48aVxas",
"aX5bojyXqh",
"2LHY2ZA46XI",
"wvXYgNYPBx",
"cYRK-0MxLku",
"UXZzg4M9SiF",
"VOJ2cJ3CNpg",
"p2XRH-VnP2p",
"-JxAJtcO49K",
"mvpOy_jzOtf",
"Yy39vg0jfCL",
"_ehLq8xgBgv",
"gFJ68jemBjX",
"Q41_25WirJa",
"Hzq-VP3YH2S",
"no6L_6_xzp4"... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
... | [
"Problem: data-driven PDE discovery methods are not robus to low-quality measurement data.\n\nSolution: A novel architecture that can encode prior knowledge (known terms, PDE structure, boundary conditions), and sparse regression procedure that is hypothesized to be more robust to scarce and noisy data scenarios.\n... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_Vog_3GXsgmb",
"iclr_2022_Vog_3GXsgmb",
"baNbq9SR5-D",
"2LHY2ZA46XI",
"mvpOy_jzOtf",
"aX5bojyXqh",
"Q41_25WirJa",
"UXZzg4M9SiF",
"gFJ68jemBjX",
"iclr_2022_Vog_3GXsgmb",
"Q41_25WirJa",
"p2XRH-VnP2p",
"Yy39vg0jfCL",
"-JxAJtcO49K",
"5LH1GgNUK60",
"Hzq-VP3YH2S",
"GRfF2uT83YG",
... |
iclr_2022__PHymLIxuI | CrossFormer: A Versatile Vision Transformer Hinging on Cross-scale Attention | Transformers have made great progress in dealing with computer vision tasks. However, existing vision transformers have not yet possessed the ability of building the interactions among features of different scales, which is perceptually important to visual inputs. The reasons are two-fold: (1) Input embeddings of each ... | Accept (Poster) | The paper proposes several modifications to vision transformers: multiscale features, a variant of factorized attention, and "dynamic position bias". The proposed architecture with these modifications achieves strong results on classification, detection, and segmentation.
After considering the authors' responses, all ... | train | [
"qgtKm2tKDX",
"mViUhFuMAnF",
"8L9I8fACxkw",
"XA7MxnIL8W5",
"VtQLoIn7OB8",
"P0Zt8dcbnYU",
"YXhmnacuJbl",
"u8ahcm2zVn",
"GwH8AjqY3u",
"dfTjGrovhS7",
"8x8BZ1MgY9q",
"hJc0ViB3lI",
"I2hDbWLBSPb",
"AgSemm0NAix",
"F7n5ZGSxIJ8",
"nDb-DDcMZuE"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for authors' response. The response has solved some of my concerns. I would keep my rating score as weak accept.",
" Thanks for your reply, and we are glad that our responses solve your concerns. We kindly remind you that you may need to **update the score accordingly**.",
" The response has solved som... | [
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"I2hDbWLBSPb",
"XA7MxnIL8W5",
"8x8BZ1MgY9q",
"nDb-DDcMZuE",
"iclr_2022__PHymLIxuI",
"GwH8AjqY3u",
"AgSemm0NAix",
"nDb-DDcMZuE",
"VtQLoIn7OB8",
"VtQLoIn7OB8",
"AgSemm0NAix",
"iclr_2022__PHymLIxuI",
"F7n5ZGSxIJ8",
"iclr_2022__PHymLIxuI",
"iclr_2022__PHymLIxuI",
"iclr_2022__PHymLIxuI"
] |
iclr_2022_US2rTP5nm_ | EntQA: Entity Linking as Question Answering | A conventional approach to entity linking is to first find mentions in a given document and then infer their underlying entities in the knowledge base. A well-known limitation of this approach is that it requires finding mentions without knowing their entities, which is unnatural and difficult. We present a new model t... | Accept (Spotlight) | This paper casts entity linking in a retrieve-then-read framework by first retrieving entity candidates and then finding their mentions via reading comprehension. All reviewers agree that the proposed approach is novel, well-motivated, and simple yet performant. The authors have done a good job of addressing all the co... | train | [
"uS-eZ8WjQFZ",
"Ae1HdHsK6L",
"inhkfd7i5Wz",
"mHrjTc40AcM",
"KcYQNQ5QV8e",
"2gUKdHT1e7",
"tsi94xwShrJ",
"uBihFii5TAz",
"7UuS46-EtBn",
"6h2rzWTwkHW",
"6BcIANkl6i3"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarifications. We agree that it will be more complete to analyze other models in addition to EntQA. As for training efficiency, our main point of comparison is GENRE which requires pretraining on 64 GPUs for 30 hours (Section 3.2.1). \n\nWe will include more details in discussing EntQA and GENR... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"Ae1HdHsK6L",
"2gUKdHT1e7",
"tsi94xwShrJ",
"uBihFii5TAz",
"6h2rzWTwkHW",
"7UuS46-EtBn",
"6BcIANkl6i3",
"iclr_2022_US2rTP5nm_",
"iclr_2022_US2rTP5nm_",
"iclr_2022_US2rTP5nm_",
"iclr_2022_US2rTP5nm_"
] |
iclr_2022_WvOGCEAQhxl | Assessing Generalization of SGD via Disagreement | We empirically show that the test error of deep networks can be estimated by training the same architecture on the same training set but with two different runs of Stochastic Gradient Descent (SGD), and then measuring the disagreement rate between the two networks on unlabeled test data. This builds on -- and is a stro... | Accept (Spotlight) | This article introduces an interesting variant of the work of Nakkiran & Bansal (2020). It shows empirically that the test error of deep models can be approximated from the disagreement on the unlabelled test data between two different trainings on the same data. The authors then show theoretically that a calibration p... | train | [
"I400SBI7tbh",
"Yk-7VNzC5ak",
"rDdXS4tYkz5",
"faML-aVgWHM",
"F4NJyTDnbn1",
"BNiHfYDSH_y",
"YDzzbgTEPgi",
"2e6bKkrwg4",
"DHRbI-pLbw0",
"5IETjgePLTo",
"5JMs0A7Ojlq",
"wkcCjAgbLV",
"8Ew7TBqCtBh",
"GWajQVZ2kg",
"ZxIENUB3G5"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your effort in replying to reviews and revising the paper. I confirm that I will keep my original score. Cheers.",
" Agreed. Broadly, if there was a hold-out set, in-distribution u.a.e would be moot from a practical point of view. However, we think that in cases where preserving a hold-out dataset i... | [
-1,
-1,
-1,
-1,
-1,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"5JMs0A7Ojlq",
"rDdXS4tYkz5",
"faML-aVgWHM",
"F4NJyTDnbn1",
"5IETjgePLTo",
"iclr_2022_WvOGCEAQhxl",
"iclr_2022_WvOGCEAQhxl",
"iclr_2022_WvOGCEAQhxl",
"ZxIENUB3G5",
"ZxIENUB3G5",
"GWajQVZ2kg",
"BNiHfYDSH_y",
"YDzzbgTEPgi",
"iclr_2022_WvOGCEAQhxl",
"iclr_2022_WvOGCEAQhxl"
] |
iclr_2022_Oh1r2wApbPv | Contextualized Scene Imagination for Generative Commonsense Reasoning | Humans use natural language to compose common concepts from their environment into plausible, day-to-day scene descriptions. However, such generative commonsense reasoning (GCSR) skills are lacking in state-of-the-art text generation methods. Descriptive sentences about arbitrary concepts generated by neural text gener... | Accept (Poster) | Strengths:
* Strong results across two benchmarks
* Ablation study demonstrates importance of components
* Provides improvements especially in low resource settings
* Well-written paper
Weaknesses:
* Novelty of the method may be limited as previous works have explored structured outputs as intermediate plans
* Not cle... | train | [
"IBMf4bWGGf",
"RbKilpoytE",
"KDjet7n_PSR",
"p1SrPwzAtKM",
"N6_m324apSe",
"QPcBBI7BJPe",
"frdVZe4Co-u",
"P_mWfeyLQ8a",
"jLVg3HHjn4q",
"-qj5xY65edd",
"pRMp1DmfdEk",
"yXm3ML-Irqw",
"17gXDTjnhns",
"-4WvWlHRyq5",
"UjaIEC8CvB",
"VdcMaMgS9nx",
"gerGALWcj61",
"If873hj4sE",
"PH8Kb4Uwsy",
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"officia... | [
"The paper introduces an Imagine and Verbalize two-step method for generative commonsense reasoning. The system first imagines a scene in the form of a linearized SKG and uses that as input for a second model that verbalizes it into (more) human readable text. The method is tested on the Concept2Sentence and Concep... | [
6,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2022_Oh1r2wApbPv",
"P_mWfeyLQ8a",
"p1SrPwzAtKM",
"QPcBBI7BJPe",
"iclr_2022_Oh1r2wApbPv",
"If873hj4sE",
"aKO7KF3dtL",
"pRMp1DmfdEk",
"IBMf4bWGGf",
"-4WvWlHRyq5",
"yXm3ML-Irqw",
"17gXDTjnhns",
"jLVg3HHjn4q",
"SMoX1CzFzNR",
"C-N6mStZC56",
"gerGALWcj61",
"C-N6mStZC56",
"PH8Kb4Uws... |
iclr_2022_WuEiafqdy9H | Model-augmented Prioritized Experience Replay | Experience replay is an essential component in off-policy model-free reinforcement learning (MfRL). Due to its effectiveness, various methods for calculating priority scores on experiences have been proposed for sampling. Since critic networks are crucial to policy learning, TD-error, directly correlated to $Q$-values,... | Accept (Poster) | This work proposes a new strategy for prioritized experience replay. It is based on the argument that the TD error itself may not be a good indicator for priority, so we should rely on other factors that are easier and more reliable to learn. The new method is based on two modifications: (1) modifying the critic's obje... | train | [
"6WeFzKtNHC7",
"cfJD4uOfy5x",
"OGOOiavOhHF",
"gGusQRxhZXM",
"y9IP_lqf8Ym",
"0BhNek_wSg7",
"Ohg1XB9WK2M",
"g3vL2gBmQOk",
"g9S9qcsbzaB",
"oFSk54tR3Ig",
"3mCg0zcZi-E",
"5VKyu9MQUPN",
"0PqbbDNR90Y"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers and AC,\n\nWe really appreciate all the reviewers for their constructive comments. We have responded to the common comments as well as individual comments from the reviewers below, and believe that we have successfully responded to all of them. Here we briefly summarize the updates we have made to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"iclr_2022_WuEiafqdy9H",
"iclr_2022_WuEiafqdy9H",
"gGusQRxhZXM",
"y9IP_lqf8Ym",
"3mCg0zcZi-E",
"oFSk54tR3Ig",
"5VKyu9MQUPN",
"0PqbbDNR90Y",
"3mCg0zcZi-E",
"iclr_2022_WuEiafqdy9H",
"iclr_2022_WuEiafqdy9H",
"iclr_2022_WuEiafqdy9H",
"iclr_2022_WuEiafqdy9H"
] |
iclr_2022_84NMXTHYe- | Evidential Turing Processes | A probabilistic classifier with reliable predictive uncertainties i) fits successfully to the target domain data, ii) provides calibrated class probabilities in difficult regions of the target domain (e.g. class overlap), and iii) accurately identifies queries coming out of the target domain and reject them. We introdu... | Accept (Poster) | It seems the reviewers are in an agreement that the work seems interesting, well motivated, and results are meaningful. The main complaints or issues that remain is the amount of rewriting involved, which might be hard for the reviewers to track, and maybe question regarding the results given for e.g. the choice of arc... | train | [
"1vqumuscHpN",
"lZe-kspfZPJ",
"mmty5iQ2Cq",
"t4CrmQ0jrV2",
"KeAaGvvXMd",
"9v9um_S4ueN",
"UdtXr41FnMW",
"YJ1rr1n4tz-",
"zv26Hq9nvCR",
"pRSY3w-4aoY",
"ZJrvd-HFlva",
"e3iClc14ZO1"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for clarifying my misunderstandings about Definition 1 and about the vanilla EDL baseline, and for removing the variational free energies formulas. \n- Concerning model size, making a larger scale experiment on CIFAR100 is a good start, but this does not address the fact that 15% prediction error on CIFAR1... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
3
] | [
"UdtXr41FnMW",
"t4CrmQ0jrV2",
"iclr_2022_84NMXTHYe-",
"KeAaGvvXMd",
"mmty5iQ2Cq",
"ZJrvd-HFlva",
"e3iClc14ZO1",
"pRSY3w-4aoY",
"iclr_2022_84NMXTHYe-",
"iclr_2022_84NMXTHYe-",
"iclr_2022_84NMXTHYe-",
"iclr_2022_84NMXTHYe-"
] |
iclr_2022_2ggNjUisGyr | Partial Wasserstein Adversarial Network for Non-rigid Point Set Registration | Given two point sets, the problem of registration is to recover a transformation that matches one set to the other. This task is challenging due to the presence of large number of outliers, the unknown non-rigid deformations and the large sizes of point sets. To obtain strong robustness against outliers, we formulate t... | Accept (Poster) | In this paper the authors proposes to use partial optimal transport to align point cloud in the presence of noise and partially observed data. To this end they express the partial Wasserstein Kantorovich-Rubinstein duality and use it to adapt the classical WGAN loss to partial OT. The optimal alignment between point cl... | train | [
"v1e2af7sMpM",
"wIbv97RMKE",
"zBw34yTVpYc",
"LZP3Qco1Yaw",
"KowEFraZh8",
"k2UqfXvXdT",
"1v5yk4kure8",
"Zvj6liBDWU-",
"wEeMhuAdV4",
"3X7xdW6-nRo",
"rPO1DG-dn9X",
"8Gx9BrWYyju",
"GdtM9z6Efma",
"QCsZ0eXD1gG",
"Uvu4AzVdwZp",
"71t4byrAft",
"85c0A3zRT04",
"aEj7nqDTwJ1",
"B54eBpRJbu",
... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official... | [
" We thank the reviewer for the valuable suggestions.\n\nAccording to the comments,\nwe have added a new experiments on the challenging human shape registration,\nand we have added a new section Appx.B discussing the connections between our work and other related work.\n\nPlease let us know whether these modificati... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"dF0noYEK8Pa",
"JkeuJ6wGp5n",
"Uvu4AzVdwZp",
"nw9Tjm6fmq",
"k2UqfXvXdT",
"iclr_2022_2ggNjUisGyr",
"Zvj6liBDWU-",
"wEeMhuAdV4",
"aEj7nqDTwJ1",
"iclr_2022_2ggNjUisGyr",
"GdtM9z6Efma",
"GdtM9z6Efma",
"85c0A3zRT04",
"iclr_2022_2ggNjUisGyr",
"yb0BL3lC1Jt",
"JkeuJ6wGp5n",
"k2UqfXvXdT",
"... |
iclr_2022_aBVxf5NaaRt | Unrolling PALM for Sparse Semi-Blind Source Separation | Sparse Blind Source Separation (BSS) has become a well established tool for a wide range of applications – for instance, in astrophysics and remote sensing. Classical sparse BSS methods, such as the Proximal Alternating Linearized Minimization (PALM) algorithm, nevertheless often suffer from a d... | Accept (Poster) | The paper develops an unrolled version of the PALM algorithm for sparse blind (or semi-blind) source separation. The unrolled version includes a soft-thresholding update, in which the thresholding parameter and one of the weight matrices is learned from data, with a least squares dictionary update, in which the step si... | train | [
"90jVXQaRGys",
"GYEaEYFFebc",
"UTZE3FDMZ3v",
"eX0yJMsYaC_",
"kYcdXIcjKnK",
"e6o5U1bxA1j",
"FLvijKmJm4_",
"k1eyHhXrqcQ",
"HzN4P46y4ub",
"QtxEYPdRXEm",
"SNvKt_oSWES",
"PViPlaWEa0Z",
"E6w9rr00zLF",
"k6VgDxhKWSx"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" Thanks to all the reviewers for having evaluated the new version of the paper and updated the global scores.\n\nFor the correctness, technical novelty and significance, empirical novelty and significance scores, we would be grateful if the reviewers update them accordingly, in the case they find that it is releva... | [
-1,
5,
-1,
-1,
-1,
8,
-1,
8,
-1,
6,
-1,
-1,
-1,
-1
] | [
-1,
2,
-1,
-1,
-1,
4,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2022_aBVxf5NaaRt",
"iclr_2022_aBVxf5NaaRt",
"GYEaEYFFebc",
"GYEaEYFFebc",
"eX0yJMsYaC_",
"iclr_2022_aBVxf5NaaRt",
"iclr_2022_aBVxf5NaaRt",
"iclr_2022_aBVxf5NaaRt",
"k6VgDxhKWSx",
"iclr_2022_aBVxf5NaaRt",
"k1eyHhXrqcQ",
"e6o5U1bxA1j",
"QtxEYPdRXEm",
"E6w9rr00zLF"
] |
iclr_2022_BS49l-B5Bql | GNN-LM: Language Modeling based on Global Contexts via GNN | Inspired by the notion that ``{\it to copy is easier than to memorize}``, in this work, we introduce GNN-LM, which extends vanilla neural language model (LM) by allowing to reference similar contexts in the entire training corpus. We build a directed heterogeneous graph between an input context and its semantically rel... | Accept (Spotlight) | This paper introduces a new type of language model, the GNN-LM, which uses a graph neural network to allow a language model to reference similar contexts in the training corpus in addition to the input context. The empirical results are good, and the model sets a new SOTA on the benchmark Wikitext-103 corpus, as well ... | train | [
"P5baAzR3F_",
"vHz80Zsy5Qa",
"ta5REV2DEIg",
"SGWGecdPHpz",
"o7yTPb2Ccha",
"9gde-Q3a1xf",
"_Mq1Bokw4_f"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper presents a GNN based language model where neighbor contexts are retrieved, encoded via a graph neural network, and used to enhance generation. Evaluation on three benchmarks indicates that the proposed approach can outperform a bunch of baseline models. \n\nContributions:\n1. a new retrieval-augmented la... | [
6,
-1,
-1,
-1,
-1,
8,
10
] | [
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"iclr_2022_BS49l-B5Bql",
"ta5REV2DEIg",
"_Mq1Bokw4_f",
"P5baAzR3F_",
"9gde-Q3a1xf",
"iclr_2022_BS49l-B5Bql",
"iclr_2022_BS49l-B5Bql"
] |
iclr_2022_CgIEctmcXx1 | ADAVI: Automatic Dual Amortized Variational Inference Applied To Pyramidal Bayesian Models | Frequently, population studies feature pyramidally-organized data represented using Hierarchical Bayesian Models (HBM) enriched with plates. These models can become prohibitively large in settings such as neuroimaging, where a sample is composed of a functional MRI signal measured on 300 brain locations, across 4 measu... | Accept (Poster) | The paper provides a unique contribution to the scalability of Bayesian inference to Pyramidal Bayesian Models with application to neuroimaging. The major point of concern by the reviewers is around how close is the inference approach to the more classical Mean-Field VI. However, in my opinion, the authors have address... | train | [
"RZ-6_Ansla-",
"zYksxmLpgjo",
"wOSZI28v5vB",
"C27IlcIpeG7",
"3h6_AwDm-c",
"9JP5GN1BISt",
"XWlrdf3FW16",
"Y6XtNtLm3Zf",
"PcmghO1NWt",
"_OtE5-bhoVl",
"PLvZxaYig7x",
"S8sbkxDz4-B",
"UPSTd8NFgDR",
"UTTsw4Uw0rd",
"lEqtnoK9e2",
"gvI-ybgUTi3",
"SENd45OqheV",
"5wmjepphuV3",
"fKDtBF0TvCk"... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ## Detailed Changelog\n\n* **Introduction**:\n * we engage with the general framing of approximate inference, and notably with the notion of the *approximation vs amortization gap*.\n * our main claim has been made clearer: we propose a method to benefit from the expressivity of normalizing flows in the con... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"C27IlcIpeG7",
"wOSZI28v5vB",
"RZ-6_Ansla-",
"iclr_2022_CgIEctmcXx1",
"iclr_2022_CgIEctmcXx1",
"XWlrdf3FW16",
"Y6XtNtLm3Zf",
"SENd45OqheV",
"_OtE5-bhoVl",
"5wmjepphuV3",
"XWlrdf3FW16",
"UPSTd8NFgDR",
"UTTsw4Uw0rd",
"lEqtnoK9e2",
"F9hS8gBL4uf",
"ADsjSoJRNqG",
"3h6_AwDm-c",
"fKDtBF0T... |
iclr_2022_MR7XubKUFB | Adversarial Retriever-Ranker for Dense Text Retrieval | Current dense text retrieval models face two typical challenges. First, it adopts a siamese dual-encoder architecture to encode query and document independently for fast indexing and searching, whereas neglecting the finer-grained term-wise interactions. This results in a sub-optimal recall performance. Second, it high... | Accept (Poster) | This paper introduces a new method for jointly training a dense bi-encoder retriever with a cross-encoder ranker. More precisely, the proposed method is iteratively training the retriever and the ranker, using an objective function inspired by adversarial training. In addition, the authors propose to use a distillation... | train | [
"Ii3aMrAQmiD",
"D4eNnJuQ5B9",
"Uep5hK0oFBu",
"jG2D_zf7cr",
"YL5HnPbl9dQ",
"8NtpOICRaB",
"AMA2JdFCSPv",
"fVuECPAXeK",
"j-7e0GbuUwk",
"I4lNC2Nsom3",
"RVDbedJ9HB1",
"EC1evNKXtmU",
"G0O-VDI9GFY"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a minimax multi-stage iterative method for document retrieval. The model consists of two components: dense retriever and ranker. The dense retriever is modeled using a dual-encoder and the ranker is modeled with a cross-encoder, both of which are fairly standard. In their proposed approach, in ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"iclr_2022_MR7XubKUFB",
"iclr_2022_MR7XubKUFB",
"iclr_2022_MR7XubKUFB",
"j-7e0GbuUwk",
"Uep5hK0oFBu",
"EC1evNKXtmU",
"Ii3aMrAQmiD",
"G0O-VDI9GFY",
"Ii3aMrAQmiD",
"RVDbedJ9HB1",
"iclr_2022_MR7XubKUFB",
"iclr_2022_MR7XubKUFB",
"iclr_2022_MR7XubKUFB"
] |
iclr_2022_YVPBh4k78iZ | Scale Mixtures of Neural Network Gaussian Processes | Recent works have revealed that infinitely-wide feed-forward or recurrent neural networks of any architecture correspond to Gaussian processes referred to as NNGP. While these works have extended the class of neural networks converging to Gaussian processes significantly, however, there has been little focus on broaden... | Accept (Poster) | This paper presents a new formulation for the infinitely wide limiting case of deep networks as Gaussian processes, i.e. NNGPs. The authors extend the existing case to incorporate a scale term at the penultimate layer of the network, which results in a scale mixture of NNGPs or a Student-t process in a specific case. ... | train | [
"dFWkVgsWSx",
"oZRMUyQjYm4",
"upqCykHKGeN",
"rUEBU8B2jEU",
"xfe0tZx2EdQ",
"Lk5GFLTwyWv",
"ansadp7LRds",
"Rnn0ELFXDGC",
"IhJPpnQeZbJ",
"yazdlzz8t_B",
"w9rpXAvvYt",
"SDYWw4d82y",
"C4JqHn8NugU",
"EIvhnJ6_N4g"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the author(s) for the response. Overall it is a good paper but I have the same concern as Reviewer C7Q4 does. Therefore, I am going to keep my score.",
" I would like to thank the authors for the response provided to my review. I agree that the paper has new contributions from the theoreti... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
5
] | [
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
4
] | [
"Rnn0ELFXDGC",
"ansadp7LRds",
"iclr_2022_YVPBh4k78iZ",
"IhJPpnQeZbJ",
"iclr_2022_YVPBh4k78iZ",
"IhJPpnQeZbJ",
"EIvhnJ6_N4g",
"C4JqHn8NugU",
"upqCykHKGeN",
"SDYWw4d82y",
"iclr_2022_YVPBh4k78iZ",
"iclr_2022_YVPBh4k78iZ",
"iclr_2022_YVPBh4k78iZ",
"iclr_2022_YVPBh4k78iZ"
] |
iclr_2022_t8O-4LKFVx | Learning Optimal Conformal Classifiers | Modern deep learning based classifiers show very high accuracy on test data but this does not provide sufficient guarantees for safe deployment, especially in high-stake AI applications such as medical diagnosis. Usually, predictions are obtained without a reliable uncertainty estimate or a formal guarantee. Conformal ... | Accept (Spotlight) | In this paper, a new learning scheme for minimizing the confidence set by conformal prediction is proposed. Most of the reviewers agree that the idea is interesting and novel. This is an important contribution to trustworthy ML, with theoretically sound considerations and thorough experimental validation. | val | [
"Uv3QmHaw2hH",
"x-mFsMQdJK",
"2nOJ-ypM0hJ",
"dmVxiagJsDD",
"r8vV9aZNrZZ",
"kVfcMHjwRf9",
"Fd1Id_DeEx",
"UhyXYk1_oy",
"sJex7bzHjb3",
"-_NYtAd5Zp4",
"YYfBsNqQwHv",
"t8nB278fRtS",
"2H6832n95aZ",
"HD9GrnaZmjK",
"mC0FnGcs4F"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification. I still feel like it would be helpful to move some of the discussions in the appendix about the batch size selection, as this is a hyperparameter that is explicitly introduced by the proposed method, but I am also okay with it happening in the appendix. Overall, I see no reason to... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
8,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"YYfBsNqQwHv",
"sJex7bzHjb3",
"dmVxiagJsDD",
"-_NYtAd5Zp4",
"iclr_2022_t8O-4LKFVx",
"UhyXYk1_oy",
"iclr_2022_t8O-4LKFVx",
"Fd1Id_DeEx",
"mC0FnGcs4F",
"HD9GrnaZmjK",
"2H6832n95aZ",
"iclr_2022_t8O-4LKFVx",
"iclr_2022_t8O-4LKFVx",
"iclr_2022_t8O-4LKFVx",
"iclr_2022_t8O-4LKFVx"
] |
iclr_2022_YX0lrvdPQc | A Johnson-Lindenstrauss Framework for Randomly Initialized CNNs | How does the geometric representation of a dataset change after the application of each randomly initialized layer of a neural network? The celebrated Johnson-Lindenstrauss lemma answers this question for linear fully-connected neural networks (FNNs), stating that the geometry is essentially preserved. For FNNs with th... | Accept (Poster) | This paper focuses on understanding how the angle between two inputs change as they are propagated in a randomly-initialized convolutional neural network layers. They demonstrate very different behavior in different settings and provide rigorous measure concentration results. The reviewers thought the paper is well wri... | train | [
"OER9h99TBte",
"08hIvSHhOQo",
"5wiXIzdQMZX",
"QdqdEFtnwZr",
"P91mMVuzHbd",
"Z456EDFXivT",
"3Rxl9h7VQrp",
"11RKNmqvyz5"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper studies the random initialized CNNs and analyzes the geometry preservation. For linear CNNs, the authors show the JL lemma type results hold. For CNN+ ReLU, the output contracts and the level of contraction depend on the inputs. In numerical experiments, the authors verify the geometry of natural image... | [
6,
8,
-1,
-1,
-1,
-1,
-1,
8
] | [
2,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"iclr_2022_YX0lrvdPQc",
"iclr_2022_YX0lrvdPQc",
"P91mMVuzHbd",
"08hIvSHhOQo",
"iclr_2022_YX0lrvdPQc",
"OER9h99TBte",
"11RKNmqvyz5",
"iclr_2022_YX0lrvdPQc"
] |
iclr_2022_dZPgfwaTaXv | Relational Surrogate Loss Learning | Evaluation metrics in machine learning are often hardly taken as loss functions, as they could be non-differentiable and non-decomposable, e.g., average precision and F1 score. This paper aims to address this problem by revisiting the surrogate loss learning, where a deep neural network is employed to approximate the e... | Accept (Poster) | The paper presents an approach to learn the surrogate loss for complex prediction tasks where the task loss is non-differentiable and non-decomposable. The novelty of the approach is to rely on differentiable sorting, optimizing the spearman correlation between the true loss and the surrogate. This leads to a pipeline ... | train | [
"jNYPHJqNa1l",
"WJ41COkKxlf",
"XHuvzxFQhQD",
"_SPmkioAPXr",
"h1drhaR0meG",
"lTJw5emsuPt",
"bwsuQFdRst3",
"Yl_8R4EbmuH",
"REtqljCr_xf"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. The listed minor issue has been clarified, so I will keep my rating",
" **Q3: The necessity of taking a neural network as the surrogate loss.**\n\n**A3:** We thank the reviewer for raising this interesting point. We have implemented the rank-based classification loss in [1], which calc... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"_SPmkioAPXr",
"XHuvzxFQhQD",
"h1drhaR0meG",
"REtqljCr_xf",
"bwsuQFdRst3",
"Yl_8R4EbmuH",
"iclr_2022_dZPgfwaTaXv",
"iclr_2022_dZPgfwaTaXv",
"iclr_2022_dZPgfwaTaXv"
] |
iclr_2022_3rULBvOJ8D2 | Unraveling Model-Agnostic Meta-Learning via The Adaptation Learning Rate | Model-Agnostic Meta-Learning (MAML) aims to find initial weights that allow fast adaptation to new tasks. The adaptation (inner loop) learning rate in MAML plays a central role in enabling such fast adaptation. However, how to choose this value in practice and how this choice affects the adaptation error remains less e... | Accept (Poster) | After carefully reading all reviews and rebuttal, I actually think the paper provides sufficient new insight in understanding MAML that is worth being accepted. I want to thank the authors for actively engaging with the reviewers, and providing sufficient changes to the paper in order to clarify and improve its contrib... | train | [
"22Y0mvrd51l",
"9sw72pG2w1s",
"8DdMMoZy1Dd",
"iwOG67crANeR",
"aASPIxzM_QAK",
"-TKqLdJwGNWP",
"JVLq8Y2hoi",
"4Py3iirhHF",
"ICc5boVwoao6",
"UUFYseIewbZ",
"i76LrZbuHwA",
"t-8hneLXcxe",
"1Iqo-9ejf_q",
"t7VfsXlLc7b"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" On the concerns on the theoretical front, we have some additional comments on the novelty of our results. \n\n- **Motivation:** To find a principled way to estimate $\\alpha^*$, we presented a fine-grained analysis of $\\alpha$ and improved the range result in (Bernacchia, 2020) to a value result (Theorem 1). Bey... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
6
] | [
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
1
] | [
"1Iqo-9ejf_q",
"ICc5boVwoao6",
"iclr_2022_3rULBvOJ8D2",
"8DdMMoZy1Dd",
"t-8hneLXcxe",
"i76LrZbuHwA",
"iclr_2022_3rULBvOJ8D2",
"t7VfsXlLc7b",
"iwOG67crANeR",
"1Iqo-9ejf_q",
"iclr_2022_3rULBvOJ8D2",
"iclr_2022_3rULBvOJ8D2",
"iclr_2022_3rULBvOJ8D2",
"iclr_2022_3rULBvOJ8D2"
] |