| paper_id (string, lengths 19-21) | paper_title (string, lengths 8-170) | paper_abstract (string, lengths 8-5.01k) | paper_acceptance (string, 18 classes) | meta_review (string, lengths 29-10k) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_j4oYd8SGop | Inverse-Weighted Survival Games | Deep models trained through maximum likelihood have achieved state-of-the-art results for survival analysis. Despite this training scheme, practitioners evaluate models under other criteria, such as binary classification losses at a chosen set of time horizons, e.g. Brier score (BS) and Bernoulli log likelihood (BLL). Models trained with maximum likelihood may have poor BS or BLL since maximum likelihood does not directly optimize these criteria. Directly optimizing criteria like BS requires inverse-weighting by the censoring distribution, estimation of which itself also requires inverse-weighting by the failure distribution. But neither is known. To resolve this dilemma, we introduce Inverse-Weighted Survival Games to train both failure and censoring models with respect to criteria such as BS or BLL. In these games, objectives for each model are built from re-weighted estimates featuring the other model, where the re-weighting model is held fixed during training. When the loss is proper, we show that the games always have the true failure and censoring distributions as a stationary point. This means models in the game do not leave the correct distributions once reached. We construct one case where this stationary point is unique. We show that these games optimize BS on simulations and then apply these principles to real-world cancer and critically-ill patient data.
| accept | To be updated - This is one where I think the score underrates the work; the paper should probably be accepted once reviewers update their scores based on the author response. I just emailed all of the reviewers who have yet to indicate they have read the author response. | val | [
"bHze79gjw-",
"ZFAFgEDWps",
"e7NeNnGp5wV",
"QcJcAUNvmy",
"Sotd99-e72k",
"YqNopvQEAnN",
"kpVHs5JW9Gu",
"qvSuIC3LGwM",
"X0mllHzJ7L7",
"bqNSRWocii7",
"Lj7YV_PMZ24",
"wdtvNpXX8LA",
"kYqJSgdhAKZ",
"Af4rPdBNm1",
"Uy65A3GriLW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Xu5C,\n\nWe would like to confirm whether we have answered your questions.\n\nFor reference, here are the main points:\n- the NLL baseline *does* account for censorship\n- we have added the Kaplan-Meier experiment as suggested (a good addition to the paper)\n- the plots already *do show end-to-end ... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"Af4rPdBNm1",
"nips_2021_j4oYd8SGop",
"nips_2021_j4oYd8SGop",
"YqNopvQEAnN",
"kpVHs5JW9Gu",
"wdtvNpXX8LA",
"X0mllHzJ7L7",
"Af4rPdBNm1",
"e7NeNnGp5wV",
"Af4rPdBNm1",
"nips_2021_j4oYd8SGop",
"ZFAFgEDWps",
"Uy65A3GriLW",
"nips_2021_j4oYd8SGop",
"nips_2021_j4oYd8SGop"
] |
nips_2021_RloMRU3keo3 | Generalization Bounds for Meta-Learning via PAC-Bayes and Uniform Stability | We are motivated by the problem of providing strong generalization guarantees in the context of meta-learning. Existing generalization bounds are either challenging to evaluate or provide vacuous guarantees in even relatively simple settings. We derive a probably approximately correct (PAC) bound for gradient-based meta-learning using two different generalization frameworks in order to deal with the qualitatively different challenges of generalization at the "base" and "meta" levels. We employ bounds for uniformly stable algorithms at the base level and bounds from the PAC-Bayes framework at the meta level. The result of this approach is a novel PAC bound that is tighter when the base learner adapts quickly, which is precisely the goal of meta-learning. We show that our bound provides a tighter guarantee than other bounds on a toy non-convex problem on the unit sphere and a text-based classification example. We also present a practical regularization scheme motivated by the bound in settings where the bound is loose and demonstrate improved performance over baseline techniques.
| accept | The reviewers and myself all agree on the significance of the results and their interest to the NeurIPS community. The reviews expressed a number of technical concerns which were all addressed satisfactorily by the authors during the discussion phase. If not done already, I would invite the authors to consider revising their manuscript along the many suggestions made by the reviewers. | train | [
"KWi9t3AYuY2",
"ka1c-ZGOUfG",
"RGO-FFnQp8",
"9Nxi1pGCCCB",
"hlxUtd8WHOV",
"D_Bl7-u82Bg",
"n-NI6uC5s8b",
"lsxscsOkifg",
"kjZuaG9X9ip",
"0aTWeFB3kdr",
"opNx7PWl01h",
"ZRQx2BZHBtV",
"1Us45DxjpbK",
"EgZ_66uB_R",
"k8Upen59sV8",
"2-j-8m3ZU0"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" After reading the other reviews, the authors rebuttal and the discussions, I keep my score and I support the acceptance of this paper. ",
" We thank the reviewer for the opportunity to have a thorough and valuable discussion. We are very grateful for the time and care that the reviewer has taken in this exchang... | [
-1,
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3
] | [
"EgZ_66uB_R",
"RGO-FFnQp8",
"nips_2021_RloMRU3keo3",
"hlxUtd8WHOV",
"D_Bl7-u82Bg",
"lsxscsOkifg",
"nips_2021_RloMRU3keo3",
"0aTWeFB3kdr",
"lsxscsOkifg",
"opNx7PWl01h",
"RGO-FFnQp8",
"k8Upen59sV8",
"n-NI6uC5s8b",
"2-j-8m3ZU0",
"nips_2021_RloMRU3keo3",
"nips_2021_RloMRU3keo3"
] |
nips_2021_A7pvvrlv68 | Parallel Bayesian Optimization of Multiple Noisy Objectives with Expected Hypervolume Improvement | Optimizing multiple competing black-box objectives is a challenging problem in many fields, including science, engineering, and machine learning. Multi-objective Bayesian optimization (MOBO) is a sample-efficient approach for identifying the optimal trade-offs between the objectives. However, many existing methods perform poorly when the observations are corrupted by noise. We propose a novel acquisition function, NEHVI, that overcomes this important practical limitation by applying a Bayesian treatment to the popular expected hypervolume improvement (EHVI) criterion and integrating over this uncertainty in the Pareto frontier. We argue that, even in the noiseless setting, generating multiple candidates in parallel is an incarnation of EHVI with uncertainty in the Pareto frontier and therefore can be addressed using the same underlying technique. Through this lens, we derive a natural parallel variant, qNEHVI, that reduces computational complexity of parallel EHVI from exponential to polynomial with respect to the batch size. qNEHVI is one-step Bayes-optimal for hypervolume maximization in both noisy and noiseless environments, and we show that it can be optimized effectively with gradient-based methods via sample average approximation. Empirically, we demonstrate not only that qNEHVI is substantially more robust to observation noise than existing MOBO approaches, but also that it achieves state-of-the-art optimization performance and competitive wall-times in large-batch environments.
| accept | We thank the authors for the additional clarifications provided in their rebuttal. All reviewers agreed that this work is of practical importance and made solid contributions by building on top of prior work, addressing an issue overlooked in the literature (namely the uncertainty residing in the Pareto front). The authors conducted an extensive set of experiments. They predominantly focus on BO methods; including multi-objective approaches not relying on the BO paradigm could strengthen the paper. It is also surprising that the authors missed including the following reference/baseline:
Daniel Golovin and Qiuyi Zhang. Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization. 2020. | train | [
"yGedXFIa9gW",
"fmHMS-v0HE",
"zEWJ00AHHF",
"1uFsNoJPTyB",
"uIHcFwaOCOB",
"QkNiyfCIq0B",
"Yp1l93lBQHZ",
"tcU7fTYlcQy",
"IK2gEiVV3lg",
"02jpU3wGfW8",
"7jLQBEwmRff",
"oJV_2Fa5Ypk",
"Ppy__LZrw7Z",
"OMm3LjZLQfK",
"CxJYQ5llhfX",
"Ua6KgDdH-l"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer LnaP,\n\nThank you for taking the time to help us understand the point of confusion.\n\nWhen we say that the samples are said to be fixed and computed once in L231-232, we are referring to computing the sequential acquisition function NEHVI with $q=1$, not the parallel variant qNEHVI with $q >1$. We... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
5,
3
] | [
"fmHMS-v0HE",
"uIHcFwaOCOB",
"7jLQBEwmRff",
"QkNiyfCIq0B",
"IK2gEiVV3lg",
"oJV_2Fa5Ypk",
"02jpU3wGfW8",
"nips_2021_A7pvvrlv68",
"CxJYQ5llhfX",
"tcU7fTYlcQy",
"Ua6KgDdH-l",
"OMm3LjZLQfK",
"nips_2021_A7pvvrlv68",
"nips_2021_A7pvvrlv68",
"nips_2021_A7pvvrlv68",
"nips_2021_A7pvvrlv68"
] |
nips_2021_lM2971LAwV | Evolution Gym: A Large-Scale Benchmark for Evolving Soft Robots | Both the design and control of a robot play equally important roles in its task performance. However, while optimal control is well studied in the machine learning and robotics community, less attention is placed on finding the optimal robot design. This is mainly because co-optimizing design and control in robotics is characterized as a challenging problem, and more importantly, a comprehensive evaluation benchmark for co-optimization does not exist. In this paper, we propose Evolution Gym, the first large-scale benchmark for co-optimizing the design and control of soft robots. In our benchmark, each robot is composed of different types of voxels (e.g., soft, rigid, actuators), resulting in a modular and expressive robot design space. Our benchmark environments span a wide range of tasks, including locomotion on various types of terrains and manipulation. Furthermore, we develop several robot co-evolution algorithms by combining state-of-the-art design optimization methods and deep reinforcement learning techniques. Evaluating the algorithms on our benchmark platform, we observe robots exhibiting increasingly complex behaviors as evolution progresses, with the best evolved designs solving many of our proposed tasks. Additionally, even though robot designs are evolved autonomously from scratch without prior knowledge, they often grow to resemble existing natural creatures while outperforming hand-designed robots. Nevertheless, all tested algorithms fail to find robots that succeed in our hardest environments. This suggests that more advanced algorithms are required to explore the high-dimensional design space and evolve increasingly intelligent robots -- an area of research in which we hope Evolution Gym will accelerate progress. Our website with code, environments, documentation, and tutorials is available at http://evogym.csail.mit.edu/.
| accept | The paper proposes a simulation-based benchmark for learning and evaluating algorithms that jointly evolve the design and control of soft robot bodies. The benchmark includes a suite of environments involving locomotion and manipulation tasks. The benchmark includes implementations of several existing design-control optimization methods and the paper presents experimental results for these baselines on a subset of the tasks.
Jointly optimizing a robot's physical design together with its control policy is a challenging problem that has recently gotten the attention of many researchers in the machine learning community. These challenges are exacerbated in the case of soft-bodied robots due to the significantly richer design space and the complexity of the physics. Owing to this and the renewed attention paid to joint optimization of design+control, the availability of a benchmark suite for soft robots would provide a valuable contribution. The reviewers agree that the paper is well motivated and well written. However, there are concerns about some of the assumptions and claims made in the paper, notably the sufficiency of a voxel-based representation of the design space, the simplistic physics, and a lack of a discussion of the sim-to-real gap, which seems to be exacerbated by the simplicity of the physics model and the nature of the design space. Thus, it is unclear which communities this benchmark is targeting. One can certainly see it being used as a simulation benchmark suite for the (robot) learning community, but the extent to which it will benefit soft roboticists appears limited. | train | [
"9e9tTbtgxBk",
"Mjkcnpm0FuV",
"ngVYtkwO1G",
"gIQ_0kON_Vm",
"_auGl_oLJtA",
"YbrAg-I5aoG",
"PoMN-uers7v",
"QGn6b7TwtA",
"fk1x2nNlsa"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing the stated concerns in detail in your response. Specifically, thank you for point (1). Though it's still not completely clear to me that voxel-based robots are going to be especially easy to fabricate in the future, the larger tie-in to modular structures is clear and justifies voxels enough... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
8,
5
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"PoMN-uers7v",
"nips_2021_lM2971LAwV",
"gIQ_0kON_Vm",
"_auGl_oLJtA",
"fk1x2nNlsa",
"QGn6b7TwtA",
"Mjkcnpm0FuV",
"nips_2021_lM2971LAwV",
"nips_2021_lM2971LAwV"
] |
nips_2021_XWYJ25-yTRS | On Calibration and Out-of-Domain Generalization | Out-of-domain (OOD) generalization is a significant challenge for machine learning models. Many techniques have been proposed to overcome this challenge, often focused on learning models with certain invariance properties. In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special case of an invariant representation leading to better OOD generalization. Specifically, we show that under certain conditions, models which achieve \emph{multi-domain calibration} are provably free of spurious correlations. This leads us to propose multi-domain calibration as a measurable and trainable surrogate for the OOD performance of a classifier. We therefore introduce methods that are easy to apply and allow practitioners to improve multi-domain calibration by training or modifying an existing model, leading to better performance on unseen domains. Using four datasets from the recently proposed WILDS OOD benchmark, as well as the Colored MNIST, we demonstrate that training or tuning models so they are calibrated across multiple domains leads to significantly improved performance on unseen test domains. We believe this intriguing connection between calibration and OOD generalization is promising from both a practical and theoretical point of view.
| accept | Authors make substantive contributions to the connections between out-of-distribution generalization and calibration. I agree with the reviewers that the contributions are significant and interesting. To make the paper more accessible, I encourage the authors to make the exposition (at least the first few sections) more friendly to readers who don’t have a causal inference background. | train | [
"U4WZRAnzNO1",
"GbiH6TePxbr",
"iD_pzTLw0bK",
"rVLKP0mXf_b",
"-sEr6Ssco2d",
"MLBXN4QoSJr",
"uEj4_xwkk0Z",
"sJeSgZgd-xX",
"5VvsjePiFa",
"N71WRLaAsB7",
"aYQ04Iemrep",
"TKpJD9qdk6",
"XfXyFGnfTZG",
"u2bjSce8uOH",
"HS8j2RfFNjA"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This work suggests that for well calibrated models will achieve better performance on OOD data. It is shown empirically that expected calibration error (ECE) is negatively correlated with OOD performance, and various methods of selecting calibrated models are suggested. The best performing method involves regulati... | [
7,
7,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_XWYJ25-yTRS",
"nips_2021_XWYJ25-yTRS",
"nips_2021_XWYJ25-yTRS",
"MLBXN4QoSJr",
"nips_2021_XWYJ25-yTRS",
"uEj4_xwkk0Z",
"sJeSgZgd-xX",
"-sEr6Ssco2d",
"U4WZRAnzNO1",
"GbiH6TePxbr",
"-sEr6Ssco2d",
"iD_pzTLw0bK",
"iD_pzTLw0bK",
"iD_pzTLw0bK",
"nips_2021_XWYJ25-yTRS"
] |
nips_2021_Re_VXFOyyO | On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method | Junyu Zhang, Chengzhuo Ni, Zheng Yu, Csaba Szepesvari, Mengdi Wang | accept | The paper introduces a novel truncation mechanism for variance reduction in policy gradient methods. This truncation mechanism allows one to dispense with the importance weight assumption. The paper then develops a complexity theory both for reaching stationary points and global optima (under additional convexity assumptions). All the reviewers agreed that this is an important contribution, that the theory is sound and well written, and that the additional numerical experiments are insightful. | train | [
"CboauD8l9YA",
"qOVuOVoZ3la",
"_jsLa0zVSmh",
"iYYnfpWUZx",
"ecziZnVBRbc",
"yrdDHvT1cnx",
"pRflneNqIPF",
"JJPdWp-oO9I",
"2jST0x2AyUX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification. Please update Theorem 5.9 in your manuscript. ",
"There is a lot of interest in studying policy gradient type of algorithms for solving the RL problem recently. This paper develops a variance reduced policy gradient method, where the objective function can be some general utility ... | [
-1,
7,
-1,
-1,
-1,
-1,
7,
6,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"ecziZnVBRbc",
"nips_2021_Re_VXFOyyO",
"qOVuOVoZ3la",
"pRflneNqIPF",
"JJPdWp-oO9I",
"2jST0x2AyUX",
"nips_2021_Re_VXFOyyO",
"nips_2021_Re_VXFOyyO",
"nips_2021_Re_VXFOyyO"
] |
nips_2021__n59kgzSFef | Circa: Stochastic ReLUs for Private Deep Learning | Zahra Ghodsi, Nandan Kumar Jha, Brandon Reagen, Siddharth Garg | accept | Thank you for your submission. The reviewers agree that this paper provides new techniques to reduce the computational overhead of ReLU-based private inference. During the discussion, the authors have addressed the questions raised by the reviewers. The authors should incorporate these clarifications from the rebuttal into the next revision of the paper. | train | [
"NLpJn3YRY1C",
"bOpJ7BvWLfy",
"2-xNXMCqeC-",
"hML-IhLckEX",
"9j7_Z2L9-eU",
"uwWXCFDVSKF",
"x1q12OxA1tp",
"ZiyJ8IB9YNV",
"7kHC1gHkTiE",
"6tgSwhFeSVD",
"oX8c26fgP0L",
"oKWmbMJjhn",
"_mPU30GCX_f",
"_9wnG7zEN-a",
"xZYvLR6B7ts"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the constructive discussion and your input that improved our paper! You can find the corresponding GC implementation under `crypto-primitives/src/gc.rs`. We plan on releasing the code publicly along with documentation and instructions.",
"This paper proposes Circa to reduce the communicational ove... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1,
2
] | [
"2-xNXMCqeC-",
"nips_2021__n59kgzSFef",
"hML-IhLckEX",
"9j7_Z2L9-eU",
"uwWXCFDVSKF",
"6tgSwhFeSVD",
"oKWmbMJjhn",
"oX8c26fgP0L",
"xZYvLR6B7ts",
"bOpJ7BvWLfy",
"_9wnG7zEN-a",
"_mPU30GCX_f",
"nips_2021__n59kgzSFef",
"nips_2021__n59kgzSFef",
"nips_2021__n59kgzSFef"
] |
nips_2021_XHHxE-KOK7 | Reinforcement Learning in Reward-Mixing MDPs | Jeongyeol Kwon, Yonathan Efroni, Constantine Caramanis, Shie Mannor | accept | The main critique of the paper rests with the required assumptions for the theory. Reviewer TkXV summarized them well. The author response clarified that some of these assumptions were made just for simplicity of exposition, and could relatively easily be relaxed. The authors should absolutely make this clearer in revisions of the paper, even if the details of the relaxations are better suited for supplemental material. However, the sticking point is M=2. This limitation is substantial, with no easy path to relaxing it. The authors argue that this is still a valuable step even if it's not settling everything. The opposite perspective is that the theoretical machinery doesn't even lend itself toward the M>2 question, and so is a minor advance. With the other restrictive assumptions addressed, I'm partial to theory moving forward in distinct (even if not large) steps, and this is indeed a distinct step. | train | [
"12hbTTNTNrs",
"dpslPuV2tJM",
"t9a0Xo57AtG",
"kXOwbx2DTQU",
"eOC8IZFirzD",
"aQMebePF92",
"lqIaV7mmBz_",
"0cAotksPOR",
"Ok6ZlYLRQ8A",
"cyEpotUXoPn",
"qvHFMSTBHLQ",
"NFUcld5cv2r",
"8vezo7_hvD"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We really appreciate the reviewer's positive update and his/her flexibility. We would like to explain our simplifying assumptions in more detail below and how they can be relaxed. \n\n**Bernoulli Rewards Assumption**: As mentioned in our previous common response, the assumption can be relaxed. We only need to ve... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2,
2
] | [
"t9a0Xo57AtG",
"0cAotksPOR",
"nips_2021_XHHxE-KOK7",
"cyEpotUXoPn",
"qvHFMSTBHLQ",
"8vezo7_hvD",
"t9a0Xo57AtG",
"NFUcld5cv2r",
"nips_2021_XHHxE-KOK7",
"nips_2021_XHHxE-KOK7",
"nips_2021_XHHxE-KOK7",
"nips_2021_XHHxE-KOK7",
"nips_2021_XHHxE-KOK7"
] |
nips_2021_S9NmGEMkn29 | A Gang of Adversarial Bandits | Mark Herbster, Stephen Pasteris, Fabio Vitale, Massimiliano Pontil | accept | Some concerns from the reviewers were addressed during the discussion phase. Despite the tuning issues and the suboptimal bounds brought up in the discussion, reviewers reached a consensus that the paper studies a well-motivated problem with some interesting results.
Please do incorporate all the suggestions/discussions from the reviews into the final version.
| train | [
"3h_M62iQM7s",
"jJbZjGNjY02",
"-UDRhexEvpT",
"n5TdRN3Lzt2",
"vqLK_Z4g5K",
"D9BIGFundr",
"KjkXZ2huJMF",
"tAl3bOCBahc",
"tUYM5bye9oG",
"sTJ_xt1Oq9_",
"M8INsqxm8GK"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper considers the problem of contextual bandit problem with adversarial loss and finite number of contexts. Specifically, the authors consider the case where the contexts are related in some sense such that the action generated by a policy seeing one context may be related to the one seeing another similar ... | [
6,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_S9NmGEMkn29",
"-UDRhexEvpT",
"n5TdRN3Lzt2",
"KjkXZ2huJMF",
"nips_2021_S9NmGEMkn29",
"M8INsqxm8GK",
"3h_M62iQM7s",
"vqLK_Z4g5K",
"sTJ_xt1Oq9_",
"nips_2021_S9NmGEMkn29",
"nips_2021_S9NmGEMkn29"
] |
nips_2021_k8KDqVbIS2l | Explaining Hyperparameter Optimization via Partial Dependence Plots | Automated hyperparameter optimization (HPO) can support practitioners to obtain peak performance in machine learning models. However, there is often a lack of valuable insights into the effects of different hyperparameters on the final model performance. This lack of explainability makes it difficult to trust and understand the automated HPO process and its results. We suggest using interpretable machine learning (IML) to gain insights from the experimental data obtained during HPO with Bayesian optimization (BO). BO tends to focus on promising regions with potential high-performance configurations and thus induces a sampling bias. Hence, many IML techniques, such as the partial dependence plot (PDP), carry the risk of generating biased interpretations. By leveraging the posterior uncertainty of the BO surrogate model, we introduce a variant of the PDP with estimated confidence bands. We propose to partition the hyperparameter space to obtain more confident and reliable PDPs in relevant sub-regions. In an experimental study, we provide quantitative evidence for the increased quality of the PDPs within sub-regions.
| accept | We thank the authors for the detailed clarifications they provided during the rebuttal. Model understanding is an important topic and understanding the impact/role of hyperparameters is still an open question. The reviewers all agreed that this paper makes interesting contributions even though the authors built on top of earlier work that uses partial dependency plots to interpret the role of hyperparameters. The reviewers found that the experiments supported the claims and that the discussion was balanced, covering both pros and cons of the proposed approach. | train | [
"GvlL98qPZcj",
"GakRrZ_9U4Z",
"XbG0Nhb-T8b",
"1nmcYshLeFQ",
"6o4aR38X1s",
"HcxncIrVPf",
"iQXdUjRq3hP",
"33-Z83gr-e",
"bFTHXON8Kr4"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors, thanks for the clarification on RF being the surrogate model of DNN.",
" Thank you very much for your positive assessment of our response. Of course, we will add all of these to our paper. If we get accepted, the additional page we would get will give us ample of space to definitely do this. \nOf ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"HcxncIrVPf",
"XbG0Nhb-T8b",
"6o4aR38X1s",
"bFTHXON8Kr4",
"33-Z83gr-e",
"iQXdUjRq3hP",
"nips_2021_k8KDqVbIS2l",
"nips_2021_k8KDqVbIS2l",
"nips_2021_k8KDqVbIS2l"
] |
nips_2021_pUZBQd-yFk7 | Robustifying Algorithms of Learning Latent Trees with Vector Variables | We consider learning the structures of Gaussian latent tree models with vector observations when a subset of them are arbitrarily corrupted. First, we present the sample complexities of Recursive Grouping (RG) and Chow-Liu Recursive Grouping (CLRG) without the assumption that the effective depth is bounded in the number of observed nodes, significantly generalizing the results in Choi et al. (2011). We show that Chow-Liu initialization in CLRG greatly reduces the sample complexity of RG from being exponential in the diameter of the tree to only logarithmic in the diameter for the hidden Markov model (HMM). Second, we robustify RG, CLRG, Neighbor Joining (NJ) and Spectral NJ (SNJ) by using the truncated inner product. These robustified algorithms can tolerate a number of corruptions up to the square root of the number of clean samples. Finally, we derive the first known instance-dependent impossibility result for structure learning of latent trees. The optimalities of the robust version of CLRG and NJ are verified by comparing their sample complexities and the impossibility result.
| accept | This paper studies the problem of learning Gaussian latent tree models where the nodes are vector-valued random variables and there are corruptions. There were differing opinions among the reviewers, but in aggregate they agreed that the extension of existing works (notably the algorithms of Choi et al.) from the scalar-valued to vector-valued case was interesting and non-trivial. They also show how the Chow-Liu Recursive Grouping algorithm (as opposed to the Recursive Grouping algorithm) leads to improved dependence on the diameter of the underlying tree. Finally, they give robustified algorithms based on the thresholded inner-product, whereby they can tolerate a total number of corruptions of about the square root of the number of clean samples. This paper has a nice mix of contributions and would make a solid addition to the conference. | train | [
"bzOREkX2ND",
"cwmwzMookpD",
"uy3beJEvWDK",
"0o68PSi08L",
"GKGMYZJSVZw",
"nxVJcjvjYdb",
"BmDVppIxD-_",
"CZZ73hPrN0k"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the attention and the detailed reply.\n\nWe feel that the reviewer is overly harsh on the point concerning RRG. Let us clarify. The discussion of RRG, which spans only 1.5 pages in the main text, serves as a *stepping stone* for the discussion and analysis of RCLRG (as RRG is performed o... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"cwmwzMookpD",
"GKGMYZJSVZw",
"CZZ73hPrN0k",
"BmDVppIxD-_",
"nxVJcjvjYdb",
"nips_2021_pUZBQd-yFk7",
"nips_2021_pUZBQd-yFk7",
"nips_2021_pUZBQd-yFk7"
] |
nips_2021_1LCtHgPC-l4 | Representation Learning on Spatial Networks | Zheng Zhang, Liang Zhao | accept | The paper discusses the problem of representation learning for geometric graphs, which are graphs in which the vertices are positions in 3d space (or some other geometry), and the edges obey some local law. These graphs come about in areas such as biology and chemistry. Being able to compute certain functions on such objects, using deep networks, requires a good representation in latent space. This paper suggests a novel method for learning representations of such networks. The reviews are all in favor of acceptance, and they seem to favorably note the importance of this problem, as well as the novelty of the methods and the experiment outcomes. I therefore am inclined toward accepting.
| train | [
"iIb_XsS99oh",
"aCtoHmrcgLh",
"zoFK4MHAyIm",
"k3YcXz_Z9j",
"gqgagAl_Ez7",
"5OOMQs-N5Pv",
"c-WTEjcXzvl",
"CYEveRWq0Vm"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Q. “Regarding real-world applications, human mobility modeling would be a promising direction. In fact, there are some works in this area using representation learning techniques, where geo-locations are considered as spatial graphs, such as [1][2][3][4][5]. But in these prior works, geometry features are not inc... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"aCtoHmrcgLh",
"gqgagAl_Ez7",
"CYEveRWq0Vm",
"c-WTEjcXzvl",
"5OOMQs-N5Pv",
"nips_2021_1LCtHgPC-l4",
"nips_2021_1LCtHgPC-l4",
"nips_2021_1LCtHgPC-l4"
] |
nips_2021_8bbevt2MKPX | Continuous-time edge modelling using non-parametric point processes | The mutually-exciting Hawkes process (ME-HP) is a natural choice to model reciprocity, which is an important attribute of continuous-time edge (dyadic) data. However, existing ways of implementing the ME-HP for such data are either inflexible, as the exogenous (background) rate functions are typically constant and the endogenous (excitation) rate functions are specified parametrically, or inefficient, as inference usually relies on Markov chain Monte Carlo methods with high computational costs. To address these limitations, we discuss various approaches to model design, and develop three variants of non-parametric point processes for continuous-time edge modelling (CTEM). The resulting models are highly adaptable as they generate intensity functions through sigmoidal Gaussian processes, and so provide greater modelling flexibility than parametric forms. The models are implemented via a fast variational inference method enabled by a novel edge modelling construction. The superior performance of the proposed CTEM models is demonstrated through extensive experimental evaluations on four real-world continuous-time edge data sets.
| accept | Overall, the reviewers were positive about your work, though there are a number of suggestions that you should incorporate to improve the manuscript. I remain a little perplexed by the performance on Overflow and Ubuntu--isn't it odd that AUC is highest but log-likelihood is lowest? Could this be due to the use of the mean-field variational Bayes approximation, which can give very wrong posteriors? | train | [
"iorF1VRa0vy",
"gN4kXpQF1Jk",
"OkdWDuHt3R",
"8RHgqdwSEIQ",
"O1g2zj9NE7",
"x1DItholV3",
"2NdxYDxp5Kj",
"FWW646pGM-B",
"kC03p1KTC7q",
"i3Ur0BwocZ2",
"MYJQtVvdAaU"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" **Answer:** Thank you so much for suggesting these related works. We will of course cite these papers and include the following discussions in the revised version.\n\nInstead of using point processes to model the continuous-time edges, the approaches of [1,2] studied time-discretised networks and used Bernoulli e... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"gN4kXpQF1Jk",
"nips_2021_8bbevt2MKPX",
"8RHgqdwSEIQ",
"O1g2zj9NE7",
"gN4kXpQF1Jk",
"MYJQtVvdAaU",
"i3Ur0BwocZ2",
"kC03p1KTC7q",
"nips_2021_8bbevt2MKPX",
"nips_2021_8bbevt2MKPX",
"nips_2021_8bbevt2MKPX"
] |
nips_2021_9pt6F8w1Jgs | Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time | Modern neural interfaces allow access to the activity of up to a million neurons within brain circuits. However, bandwidth limits often create a trade-off between greater spatial sampling (more channels or pixels) and the temporal frequency of sampling. Here we demonstrate that it is possible to obtain spatio-temporal super-resolution in neuronal time series by exploiting relationships among neurons, embedded in latent low-dimensional population dynamics. Our novel neural network training strategy, selective backpropagation through time (SBTT), enables learning of deep generative models of latent dynamics from data in which the set of observed variables changes at each time step. The resulting models are able to infer activity for missing samples by combining observations with learned latent dynamics. We test SBTT applied to sequential autoencoders and demonstrate more efficient and higher-fidelity characterization of neural population dynamics in electrophysiological and calcium imaging data. In electrophysiology, SBTT enables accurate inference of neuronal population dynamics with lower interface bandwidths, providing an avenue to significant power savings for implanted neuroelectronic interfaces. In applications to two-photon calcium imaging, SBTT accurately uncovers high-frequency temporal structure underlying neural population activity, substantially outperforming the current state-of-the-art. Finally, we demonstrate that performance could be further improved by using limited, high-bandwidth sampling to pretrain dynamics models, and then using SBTT to adapt these models for sparsely-sampled data.
| accept | In the initial evaluations, all four reviewers recommended rejecting the paper, with the two primary reasons mentioned being that the methodological contributions of the paper are not sufficiently substantial to warrant publication at NeurIPS, and that comparisons (e.g. with interpolation methods) were missing. Even after the rebuttal phase and internal discussions, reviewers decided not to change their opinion (and while not all might have updated their review, they did indicate so in internal discussions). Your AC does not necessarily agree with all of these criticisms, but does have to respect the consensus of the reviewers, and did not see sufficient grounds for overruling their unanimous opinion. I do hope that the feedback from the reviewers will allow you to further improve the manuscript.
| train | [
"1BIDL7jBK9g",
"Tf74PHxcGnd",
"UIW_Mr8Xfp7",
"g7bgwrk9Hj",
"1s20gTMwTtg",
"nVb7-sgCAmu",
"WL_T0OU8kI6",
"AUDTFqzupz7",
"p6-qh-HccK_",
"oOnH8hwwKwP",
"jPdeKu2HMe8"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" >**Summary:**\n\n>This paper addresses the problem of bandwidth limits of BCIs, and proposes to obtain spatio-temporal super-resolution data by inferring missing samples of neural data with a latent low-dimensional dynamics. The main contribution of this paper lies in that it proposes a selective backpropagation ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"oOnH8hwwKwP",
"p6-qh-HccK_",
"jPdeKu2HMe8",
"jPdeKu2HMe8",
"AUDTFqzupz7",
"AUDTFqzupz7",
"nips_2021_9pt6F8w1Jgs",
"nips_2021_9pt6F8w1Jgs",
"nips_2021_9pt6F8w1Jgs",
"nips_2021_9pt6F8w1Jgs",
"nips_2021_9pt6F8w1Jgs"
] |
nips_2021_C1mPUP7uKNp | Memory-efficient Patch-based Inference for Tiny Deep Learning | Tiny deep learning on microcontroller units (MCUs) is challenging due to the limited memory size. We find that the memory bottleneck is due to the imbalanced memory distribution in convolutional neural network (CNN) designs: the first several blocks have an order of magnitude larger memory usage than the rest of the network. To alleviate this issue, we propose a generic patch-by-patch inference scheduling, which operates only on a small spatial region of the feature map and significantly cuts down the peak memory. However, naive implementation brings overlapping patches and computation overhead. We further propose receptive field redistribution to shift the receptive field and FLOPs to the later stage and reduce the computation overhead. Manually redistributing the receptive field is difficult. We automate the process with neural architecture search to jointly optimize the neural architecture and inference scheduling, leading to MCUNetV2. Patch-based inference effectively reduces the peak memory usage of existing networks by 4-8×. Co-designed with neural networks, MCUNetV2 sets a record ImageNet accuracy on MCU (71.8%) and achieves >90% accuracy on the visual wake words dataset under only 32kB SRAM. MCUNetV2 also unblocks object detection on tiny devices, achieving 16.9% higher mAP on Pascal VOC compared to the state-of-the-art result. Our study largely addressed the memory bottleneck in tinyML and paved the way for various vision applications beyond image classification.
| accept | The authors tackle an important and salient problem: inference on microcontrollers for DL, where memory is very constrained. The proposed scheme, patch-by-patch inference, works well; the reviewers were happy with the paper and the method itself, so I recommend acceptance. | val | [
"Uxu5YhT8z4b",
"NJIoynU16BG",
"ESHBFP2GQuF",
"s6airwMZuZ",
"ifMzwhUcXqp",
"0Zmfg4DNur9",
"SyZime6prz0",
"qo6VJFyHVHW",
"7N-qsOAxj6x",
"xvnF7pzhVSZ",
"TE7-iZj_Ei6",
"G80NqqJcx_b"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for addressing the points I mentioned when reviewing this paper. I stand by my original score of 7",
" Thank you so much for the detailed response across all reviews. After going through them, I am retaining my score and believe this work will be a valuable contribution at Neur... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"s6airwMZuZ",
"qo6VJFyHVHW",
"0Zmfg4DNur9",
"TE7-iZj_Ei6",
"7N-qsOAxj6x",
"G80NqqJcx_b",
"nips_2021_C1mPUP7uKNp",
"xvnF7pzhVSZ",
"nips_2021_C1mPUP7uKNp",
"nips_2021_C1mPUP7uKNp",
"nips_2021_C1mPUP7uKNp",
"nips_2021_C1mPUP7uKNp"
] |
nips_2021_YlM3tey8Z5I | Self-Interpretable Model with Transformation Equivariant Interpretation | With the proliferation of machine learning applications in the real world, the demand for explaining machine learning predictions continues to grow especially in high-stakes fields. Recent studies have found that interpretation methods can be sensitive and unreliable, where the interpretations can be disturbed by perturbations or transformations of input data. To address this issue, we propose to learn robust interpretation through transformation equivariant regularization in a self-interpretable model. The resulting model is capable of capturing valid interpretation that is equivariant to geometric transformations. Moreover, since our model is self-interpretable, it enables faithful interpretations that reflect the true predictive mechanism. Unlike existing self-interpretable models, which usually sacrifice expressive power for the sake of interpretation quality, our model preserves the high expressive capability comparable to the state-of-the-art deep learning models in complex tasks, while providing visualizable and faithful high-quality interpretation. We compare with various related methods and validate the interpretation quality and consistency of our model.
| accept | This paper considers the problem of training self-interpretable models. It first shows that existing approaches change the interpretation with input transformations even though the model is invariant to them. It then proposes a new method that fixes this problem. While the reviewers initially had concerns about the empirical evaluation, the rebuttal clarified most of the issues and all of them unanimously recommend acceptance. So, I am recommending the paper for acceptance. At the same time, there are also some concerns such as comparison with Chen et al. work that the rebuttal did not address properly. I would ask the authors to fix this and other issues raised in the reviews below in the final version. | train | [
"nUsIy0af_hk",
"i2XzzcYc36T",
"Yo2YGbs0ag3",
"WfPLkV38-A2",
"iXvPY9iJj6E",
"MQWT4MZBno3",
"_uyUenm97o",
"XenTm9mkz1O",
"_6k9zll04iH",
"3A32xVIKqFr",
"E56RgGnXA2",
"uuOCjCoWXu-",
"naLABYDET8o",
"w8C0qifYQhw",
"0FkTJZd_FQ6",
"E9eESXTlkBH",
"JkfY_76E_Xz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you again for taking the time to review our paper and provide valuable feedback! We will revise our final paper based on the reviewer’s comments.\n",
"This paper proposes a self-interpretable model for images, which can produce interpretations robust to geometric transformations by learning it with transf... | [
-1,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4
] | [
"Yo2YGbs0ag3",
"nips_2021_YlM3tey8Z5I",
"w8C0qifYQhw",
"_uyUenm97o",
"XenTm9mkz1O",
"nips_2021_YlM3tey8Z5I",
"0FkTJZd_FQ6",
"E56RgGnXA2",
"nips_2021_YlM3tey8Z5I",
"uuOCjCoWXu-",
"nips_2021_YlM3tey8Z5I",
"naLABYDET8o",
"JkfY_76E_Xz",
"i2XzzcYc36T",
"MQWT4MZBno3",
"E56RgGnXA2",
"nips_2... |
nips_2021_Efqe8E4Bww | Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent | Many recent AI architectures are inspired by zero-sum games, however, the behavior of their dynamics is still not well understood. Inspired by this, we study standard gradient descent ascent (GDA) dynamics in a specific class of non-convex non-concave zero-sum games, which we call hidden zero-sum games. In this class, players control the inputs of smooth but possibly non-linear functions whose outputs are being applied as inputs to a convex-concave game. Unlike general zero-sum games, these games have a well-defined notion of solution; outcomes that implement the von-Neumann equilibrium of the ``hidden" convex-concave game. We provide conditions under which vanilla GDA provably converges not merely to local Nash, but the actual von-Neumann solution. If the hidden game lacks strict convexity properties, GDA may fail to converge to any equilibrium, however, by applying standard regularization techniques we can prove convergence to a von-Neumann solution of a slightly perturbed zero-sum game. Our convergence results are non-local despite working in the setting of non-convex non-concave games. Critically, under proper assumptions we combine the Center-Stable Manifold Theorem along with a novel type of initialization-dependent Lyapunov functions to prove that almost all initial conditions converge to the solution. Finally, we discuss diverse applications of our framework ranging from generative adversarial networks to evolutionary biology.
| accept | Most reviewers appreciated the novelty of the class of the problems studied and found it interesting. There are concerns about the expressivity of HCC games in practical settings that have not been fully addressed by the authors. There were also questions about the contributions w.r.t. reference [66]. However, as the authors mentioned, the invertibility assumption is one of the key differences between their work and reference [66]. Another comment was about defining various concepts needed to understand the paper. It would help the reader a lot if the authors could define all necessary concepts to understand the paper (e.g. La Salle's principle, von Neumann solution, etc.)
| test | [
"5N8hPaA_36k",
"uuz7cE5i3vU",
"aDGHJxd-zc2",
"HUIMAKj_NBR",
"03nqlhP6OYn",
"l3Sh1jA9Cl",
"i4AFH6fiewh",
"ujbNxKdb2ec",
"-8STZkwP8O3"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Given that the discussion period ends soon, we wanted to make sure that reviewers do not have any lingering questions. Let us know if we can help clarify something.",
" Thank you very much for the thoughtful review, and for your positive evaluation and assessment. We reply to your precise question below:\n\nIn ... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"nips_2021_Efqe8E4Bww",
"ujbNxKdb2ec",
"i4AFH6fiewh",
"l3Sh1jA9Cl",
"-8STZkwP8O3",
"nips_2021_Efqe8E4Bww",
"nips_2021_Efqe8E4Bww",
"nips_2021_Efqe8E4Bww",
"nips_2021_Efqe8E4Bww"
] |
nips_2021_q6h7jVe0wE3 | Preserved central model for faster bidirectional compression in distributed settings | We develop a new approach to tackle communication constraints in a distributed learning problem with a central server. We propose and analyze a new algorithm that performs bidirectional compression and achieves the same convergence rate as algorithms using only uplink (from the local workers to the central server) compression. To obtain this improvement, we design MCM, an algorithm such that the downlink compression only impacts local models, while the global model is preserved. As a result, and contrary to previous works, the gradients on local servers are computed on perturbed models. Consequently, convergence proofs are more challenging and require a precise control of this perturbation. To ensure it, MCM additionally combines model compression with a memory mechanism. This analysis opens new doors, e.g. incorporating worker dependent randomized-models and partial participation.
| accept | This work proposes MCM, a method that performs bidirectional (i.e., both uplink and downlink) compression in distributed learning. A benefit of the approach is that it is able to match convergence rates of methods that use compression in only a single direction. The reviewers were generally in agreement that the work was well-written and motivated. However, many of the reviewers were unclear about the novelty of the approach, and there were a significant number of additional experiments included as part of the discussion period (e.g., relating to test metrics, new datasets). I believe the paper will be significantly strengthened if the authors carefully incorporate the discussed changes (specifically, more clearly outlining the main contributions/novelty and including the additional experiments that have been run since the time of submission). Although one reviewer finds the results unsurprising, this view was not shared by others, and I agree with the remainder of the reviewers that this work provides a solid contribution in communication-efficient learning. | train | [
"7Q8pQvsIaEv",
"VSVtx5VXVeZ",
"HESR9QZWny",
"ayvZ24py9WL",
"vja4QJOgn7",
"ZPUqI557kkY",
"I_sbVMnAV5K",
"Xaq_owd1jNA",
"yevqb3oMrw",
"DIk0stcTtcc",
"hDY3G8IeMbZ",
"SOvZw9D7iPa",
"cQAxq1fd7iW",
"-K-tw4dNM9o"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes MCM ~ algorithm to perform bidirectional compression in distributed setting. The authors claim similar convergence guarantees as vanilla setting. \nThey introduce the notion of sending different models to different clients while keeping the global model preserved. \n Overall, the presentation h... | [
5,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2
] | [
"nips_2021_q6h7jVe0wE3",
"hDY3G8IeMbZ",
"Xaq_owd1jNA",
"nips_2021_q6h7jVe0wE3",
"I_sbVMnAV5K",
"nips_2021_q6h7jVe0wE3",
"ayvZ24py9WL",
"-K-tw4dNM9o",
"cQAxq1fd7iW",
"ZPUqI557kkY",
"7Q8pQvsIaEv",
"nips_2021_q6h7jVe0wE3",
"nips_2021_q6h7jVe0wE3",
"nips_2021_q6h7jVe0wE3"
] |
nips_2021_a5-37ER8qTI | Understanding Instance-based Interpretability of Variational Auto-Encoders | Instance-based interpretation methods have been widely studied for supervised learning methods as they help explain how black box neural networks predict. However, instance-based interpretations remain ill-understood in the context of unsupervised learning. In this paper, we investigate influence functions [20], a popular instance-based interpretation method, for a class of deep generative models called variational auto-encoders (VAE). We formally frame the counter-factual question answered by influence functions in this setting, and through theoretical analysis, examine what they reveal about the impact of training samples on classical unsupervised learning methods. We then introduce VAE-TracIn, a computationally efficient and theoretically sound solution based on Pruthi et al. [28], for VAEs. Finally, we evaluate VAE-TracIn on several real world datasets with extensive quantitative and qualitative analysis.
| accept | The paper investigates the instance-based interpretation method based on influence functions in the Variational Auto-Encoder framework. The paper is well-organized and rather easy to follow. The reviewers tend to agree on the positive aspects of the paper, such as:
- Even though some ideas were introduced elsewhere, the application to VAEs is novel and interesting.
- The paper is well-positioned in the literature review.
The main disadvantage of the paper is the experimental part:
- The paper does not compare its experimental results with other existing data cleaning methods.
- There is little experimental evidence illustrating the benefits and potential applications of using VAE-TracIn on training and test samples.
Overall, all reviews tend towards acceptance, with one reviewer being clearly positive. Therefore, I believe that the paper can be accepted.
| val | [
"qGAgP1ClMgc",
"JHWY1BDABwo",
"iNDpdhxw5Ld",
"ukR80tCHG36",
"xwCX9BuojJ2",
"Fyix2mjj6e8",
"O0VLVZVjlk9",
"Ew0Q2oC94bc",
"eYKV5uXms5l",
"QfybYiNc2lY",
"8ap9BWMfj2K",
"du542sHeT3-"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for describing the potential uses of VAE-TracIn and the benefits of knowing the influence of samples. I highly recommend adding them to the final version to emphasize the significance of this work. Also thank you for the additional experiments for evaluating how the VAE improves after removal of high se... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5
] | [
"eYKV5uXms5l",
"nips_2021_a5-37ER8qTI",
"xwCX9BuojJ2",
"nips_2021_a5-37ER8qTI",
"QfybYiNc2lY",
"O0VLVZVjlk9",
"du542sHeT3-",
"8ap9BWMfj2K",
"JHWY1BDABwo",
"ukR80tCHG36",
"nips_2021_a5-37ER8qTI",
"nips_2021_a5-37ER8qTI"
] |
nips_2021_ZdV8fv_7fPt | Voxel-based 3D Detection and Reconstruction of Multiple Objects from a Single Image | Inferring 3D locations and shapes of multiple objects from a single 2D image is a long-standing objective of computer vision. Most of the existing works either predict one of these 3D properties or focus on solving both for a single object. One fundamental challenge lies in how to learn an effective representation of the image that is well-suited for 3D detection and reconstruction. In this work, we propose to learn a regular grid of 3D voxel features from the input image which is aligned with 3D scene space via a 3D feature lifting operator. Based on the 3D voxel features, our novel CenterNet-3D detection head formulates the 3D detection as keypoint detection in the 3D space. Moreover, we devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation, which enables fine detail reconstruction and two orders of magnitude faster inference than prior methods. With complementary supervision from both 3D detection and reconstruction, one enables the 3D voxel features to be geometry and context preserving, benefiting both tasks. The effectiveness of our approach is demonstrated through 3D detection and reconstruction on single-object and multiple-object scenarios.
| accept | This paper initially had mixed reviews (8,8,5,5). After reading the rebuttal and the other reviewers' comments, the two negative reviewers were convinced and raised their scores to 6. Reviewer 7tQr appreciated the simple innovation this paper offers, which is of broad appeal to the 3D reconstruction community. Reviewer NjXM stated that the rebuttal addressed their concerns adequately. Hence, this paper now has 4 positive scores and it should be accepted.
| train | [
"XhjlqsZHZce",
"XtJoBMkwCqo",
"_3hfHID5Mpw",
"srcF-IotwfD",
"UCQgLC4OkKl",
"D_V_MaAi8s4",
"2WW_LDENrZF",
"Xu-5zAP2DN2",
"I1imXtBjuG",
"-j1kdg5JgN"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the comments. I am happy to maintain my rating.",
"This paper presents a joint framework for detecting multiple 3D objects from input images while also reconstructing the 3D shapes. The proposed method improves upon previous work [20,21] for joint 3D object detection and reconstruction mostly at t... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
8,
8
] | [
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"2WW_LDENrZF",
"nips_2021_ZdV8fv_7fPt",
"UCQgLC4OkKl",
"nips_2021_ZdV8fv_7fPt",
"XtJoBMkwCqo",
"-j1kdg5JgN",
"I1imXtBjuG",
"srcF-IotwfD",
"nips_2021_ZdV8fv_7fPt",
"nips_2021_ZdV8fv_7fPt"
] |
nips_2021_e_yvNqkJKAW | Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization | This paper presents a new algorithm for domain generalization (DG), \textit{test-time template adjuster (T3A)}, aiming to robustify a model to unknown distribution shift. Unlike existing methods that focus on the \textit{training phase}, our method focuses on the \textit{test phase}, i.e., correcting its prediction by itself during test time. Specifically, T3A adjusts a trained linear classifier (the last layer of deep neural networks) with the following procedure: (1) compute a pseudo-prototype representation for each class using online unlabeled data augmented by the base classifier trained in the source domains, and (2) then classify each sample based on its distance to the pseudo-prototypes. T3A is back-propagation-free and modifies only the linear layer; therefore, the increase in computational cost during inference is negligible, and it avoids the catastrophic failure that might be caused by stochastic optimization. Despite its simplicity, T3A can leverage knowledge about the target domain by using off-the-shelf test-time data and improve performance. We tested our method on four domain generalization benchmarks, namely PACS, VLCS, OfficeHome, and TerraIncognita, along with various backbone networks including ResNet18, ResNet50, Big Transfer (BiT), Vision Transformers (ViT), and MLP-Mixer. The results show T3A stably improves performance on unseen domains across choices of backbone networks, and outperforms existing domain generalization methods.
| accept | The paper is proposing a test-time adaptation method for domain generalization. The proposed idea is quite simple and effective. It uses the predictions as pseudo labels and only updates the final classification layer without any gradient-based optimization. Hence, it is not only simple and effective but also lightweight. All reviewers agreed that the proposed method has merits; however, there were some concerns with the empirical study. The authors provided significant additional empirical support during rebuttal and reviewers appreciated it. I believe it is a solid paper that will likely have a significant impact on the community. | train | [
"VhqwdwL0Hcu",
"JuXdOoHLQG",
"dIo562dq1d",
"RJMbRKoEjzD",
"0Fog_pGr1Hm",
"gVVY9lNHA4B",
"_DI1o13yGcO",
"mpoxcVpe8ZJ",
"f9lFMBu4iYl",
"ZLV1TSkmtdN",
"QsKwbqmBymR",
"wkplTZ0bilw"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" Dear authors,\n\nThank you again for your detailed response – it addressed my concerns. I think you need to clearly mention the above reasons for the degraded SHOT performance in the offline test-domain-validation case – no label smoothing (key to the method) and a hyperparameter range that was designed for TENT,... | [
-1,
6,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1
] | [
"dIo562dq1d",
"nips_2021_e_yvNqkJKAW",
"gVVY9lNHA4B",
"mpoxcVpe8ZJ",
"nips_2021_e_yvNqkJKAW",
"ZLV1TSkmtdN",
"nips_2021_e_yvNqkJKAW",
"QsKwbqmBymR",
"JuXdOoHLQG",
"0Fog_pGr1Hm",
"_DI1o13yGcO",
"nips_2021_e_yvNqkJKAW"
] |
nips_2021_GWRkOYr4jxQ | Luna: Linear Unified Nested Attention | The quadratic computational and memory complexities of the Transformer's attention mechanism have limited its scalability for modeling long sequences. In this paper, we propose Luna, a linear unified nested attention mechanism that approximates softmax attention with two nested linear attention functions, yielding only linear (as opposed to quadratic) time and space complexity. Specifically, with the first attention function, Luna packs the input sequence into a sequence of fixed length. Then, the packed sequence is unpacked using the second attention function. As compared to a more traditional attention mechanism, Luna introduces an additional sequence with a fixed length as input and an additional corresponding output, which allows Luna to perform the attention operation linearly, while also storing adequate contextual information. We perform extensive evaluations on three benchmarks of sequence modeling tasks: long-context sequence modeling, neural machine translation, and masked language modeling for large-scale pretraining. Competitive or even better experimental results demonstrate both the effectiveness and efficiency of Luna compared to a variety of strong baseline methods including full-rank attention and other efficient sparse and dense attention methods.
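A schematic of the pack/unpack pattern described above, using PyTorch's stock `nn.MultiheadAttention` as a stand-in for Luna's exact attention functions: a fixed-length learned sequence attends over the input (pack), then the input attends over the packed sequence (unpack), so the cost is O(L·k) rather than O(L²). Class and variable names are ours, not the authors'.

```python
import torch
import torch.nn as nn

class PackUnpackAttention(nn.Module):
    def __init__(self, d_model: int, k: int = 16, n_heads: int = 8):
        super().__init__()
        self.p = nn.Parameter(torch.randn(k, d_model))  # fixed-length extra sequence
        self.pack = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.unpack = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor):
        # x: (batch, L, d). No L x L attention matrix is ever formed.
        b = x.size(0)
        p = self.p.unsqueeze(0).expand(b, -1, -1)
        packed, _ = self.pack(p, x, x)           # (b, k, d): summary of the input
        out, _ = self.unpack(x, packed, packed)  # (b, L, d): contextualized tokens
        return out, packed                       # packed can feed the next layer

x = torch.randn(2, 100, 64)
out, packed = PackUnpackAttention(64)(x)
print(out.shape, packed.shape)  # torch.Size([2, 100, 64]) torch.Size([2, 16, 64])
```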
| accept | This paper proposes a linear attention based on a pack and unpack mechanism of an input sequence. Experiments on LRA, pretraining and finetuning, and machine translation demonstrate the benefit of the proposed approach. I think this is a useful addition to the linear attention literature. All reviewers generally agree that this is a good paper and provided some suggestions, which the authors mentioned will be added to the paper. I recommend accepting this paper. | train | [
"V-KPEVLNj41",
"s3UtN3f8UZw",
"65ptk4z42uU",
"M0e-mvBTNlT",
"KGiPS4mgEbv",
"mCal7hjmGdS",
"bjcpRHadB-5"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nThanks the reviewers for giving us a lot of constructive and valuable feedback, again!\nWe have posted detailed responses to your questions and concerns, and look forward to your post-rebuttal feedback and discussion.",
" Thanks for your comments and positive feedback! \nWe respond below to y... | [
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"nips_2021_GWRkOYr4jxQ",
"bjcpRHadB-5",
"mCal7hjmGdS",
"KGiPS4mgEbv",
"nips_2021_GWRkOYr4jxQ",
"nips_2021_GWRkOYr4jxQ",
"nips_2021_GWRkOYr4jxQ"
] |
nips_2021_Y2OaOLYQYA | Iterative Causal Discovery in the Possible Presence of Latent Confounders and Selection Bias | We present a sound and complete algorithm, called iterative causal discovery (ICD), for recovering causal graphs in the presence of latent confounders and selection bias. ICD relies on the causal Markov and faithfulness assumptions and recovers the equivalence class of the underlying causal graph. It starts with a complete graph, and consists of a single iterative stage that gradually refines this graph by identifying conditional independence (CI) between connected nodes. Independence and causal relations entailed after any iteration are correct, rendering ICD anytime. Essentially, we tie the size of the CI conditioning set to its distance on the graph from the tested nodes, and increase this value in each successive iteration. Thus, each iteration refines a graph that was recovered by previous iterations using smaller conditioning sets---and hence higher statistical power---which contributes to stability. We demonstrate empirically that ICD requires significantly fewer CI tests and learns more accurate causal graphs compared to FCI, FCI+, and RFCI algorithms.
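A schematic of the iterative refinement loop as we read it from the abstract: start from a complete graph and, at iteration r, remove edges for which a conditional-independence test finds a separating set of size r drawn from nearby nodes. `ci_test` is a user-supplied placeholder, and the candidate-separator rule here is a simplification of ICD's actual distance-based restriction.

```python
from itertools import combinations
import networkx as nx

def iterative_refine(nodes, ci_test, max_cond_size):
    g = nx.complete_graph(nodes)
    for r in range(max_cond_size + 1):       # conditioning-set size grows by 1
        for x, y in list(g.edges()):
            # Candidate separators: current neighbors of x and y, a proxy for
            # "close on the graph recovered so far".
            cand = (set(g[x]) | set(g[y])) - {x, y}
            for z in combinations(sorted(cand), r):
                if ci_test(x, y, set(z)):    # X independent of Y given Z
                    g.remove_edge(x, y)      # => the edge cannot be causal
                    break
    return g
```

Because small conditioning sets are tried first, the higher-power tests shape the graph that the later, larger-set tests refine, which is the stability argument made in the abstract.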
| accept | All reviewers are favorable after the author responses.
Clearly, this is a fairly significant contribution. Reducing the number of CI tests within the FCI family of algorithms (handling both selection bias AND confounders) is very important. I am impressed by the significant reduction in number of CI tests and significant improvement in orientation accuracy (which is non trivial) empirically. Further the anytime guarantees and associated theoretical results make it a solid paper. The authors even clarified key technical concerns after which reviewers felt more positive.
Please do add experimental results quoted in the review response about FPR/FNR vs other FCI based algorithms and please do make changes as recommended by reviewers regarding rephrasing of Lemma 1.
| test | [
"rpITkNSFg8o",
"YVmsX3ja3J7",
"c18jrjXEMLe",
"qPMsXoA1HBU",
"Iqqp_dXEBV_",
"Nq5ctehfARJ",
"NShvbm5bcta",
"Xd0xjotbVRn",
"Z6y_Zkcm74",
"t_q3bmZbbi",
"MKZSu1BGHJo",
"5dcRXFQihig"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a method for causal discovery in graphs with potential latent common causes, which compared to other similar methods requires significantly less conditional independence tests and smaller conditioning sets. It achieves that by incrementally increasing the size of the conditioning set relative to... | [
6,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7
] | [
3,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
2
] | [
"nips_2021_Y2OaOLYQYA",
"nips_2021_Y2OaOLYQYA",
"t_q3bmZbbi",
"Z6y_Zkcm74",
"NShvbm5bcta",
"nips_2021_Y2OaOLYQYA",
"YVmsX3ja3J7",
"nips_2021_Y2OaOLYQYA",
"Nq5ctehfARJ",
"5dcRXFQihig",
"rpITkNSFg8o",
"nips_2021_Y2OaOLYQYA"
] |
nips_2021_FeFIzwifdoL | Hindsight Task Relabelling: Experience Replay for Sparse Reward Meta-RL | Meta-reinforcement learning (meta-RL) has proven to be a successful framework for leveraging experience from prior tasks to rapidly learn new related tasks; however, current meta-RL approaches struggle to learn in sparse reward environments. Although existing meta-RL algorithms can learn strategies for adapting to new sparse reward tasks, the actual adaptation strategies are learned using hand-shaped reward functions, or require simple environments where random exploration is sufficient to encounter sparse reward. In this paper we present a formulation of hindsight relabelling for meta-RL, which relabels experience during meta-training to enable learning to learn entirely using sparse reward. We demonstrate the effectiveness of our approach on a suite of challenging sparse reward environments that previously required dense reward during meta-training to solve. Our approach solves these environments using the true sparse reward function, with performance comparable to training with a proxy dense reward function.
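Our paraphrase of the relabelling idea in generic replay-buffer code: trajectories are re-used as if they had been collected for a task they actually achieved, so sparse-reward meta-training still sees rewarded transitions. All names and the transition-tuple layout are illustrative, not from the authors' implementation.

```python
def relabel_batch(trajectories, sample_achieved_task, reward_fn):
    """trajectories: list of lists of (state, action, reward, next_state, task)
    tuples; sample_achieved_task and reward_fn are user-supplied callables."""
    relabelled = []
    for traj in trajectories:
        # Pick a task that this trajectory actually accomplished (e.g., a goal
        # region the agent happened to reach), so its sparse reward is nonzero.
        new_task = sample_achieved_task(traj)
        if new_task is None:
            continue  # nothing was achieved; nothing useful to relabel
        relabelled.append([
            (s, a, reward_fn(s, a, s2, new_task), s2, new_task)
            for (s, a, _, s2, _) in traj
        ])
    return relabelled
```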
| accept | We thank the authors for the rebuttal and the reviewers for engaging in discussions. We reached a consensus for weak acceptance of the paper. However, the method is straightforward given PEARL and HER, and the reviewers and I agree that the current tasks are not challenging enough and do not convincingly demonstrate the value of this approach; we highly encourage the authors to add additional validations. | val | [
"mFdp_Yom2Cb",
"eLlfmPc9cmL",
"KR5vAteQ756",
"eWHnNK3341",
"XAJG700Ksb5",
"9Brn1FQvqwR",
"-hrgQMJAv_T",
"0-3SlhvAxZM",
"mQhspfku-Z",
"PNntEH_eiBq"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the author responses and other reviews. I appreciate the discussion here and trust it can be used to polish the paper somewhat further.\n\nNonetheless, I agree with the other reviewers that the tasks do not particularly highlight the value of the method (to handle settings where dense rewards are diff... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"9Brn1FQvqwR",
"nips_2021_FeFIzwifdoL",
"-hrgQMJAv_T",
"mQhspfku-Z",
"eLlfmPc9cmL",
"PNntEH_eiBq",
"0-3SlhvAxZM",
"nips_2021_FeFIzwifdoL",
"nips_2021_FeFIzwifdoL",
"nips_2021_FeFIzwifdoL"
] |
nips_2021_WN8ChCARq2 | A Bayesian-Symbolic Approach to Reasoning and Learning in Intuitive Physics | Humans can reason about intuitive physics in fully or partially observed environments even after being exposed to a very limited set of observations. This sample-efficient intuitive physical reasoning is considered a core domain of human common sense knowledge. One hypothesis to explain this remarkable capacity posits that humans quickly learn approximations to the laws of physics that govern the dynamics of the environment. In this paper, we propose a Bayesian-symbolic framework (BSP) for physical reasoning and learning that is close to human-level sample-efficiency and accuracy. In BSP, the environment is represented by a top-down generative model of entities, which are assumed to interact with each other under unknown force laws over their latent and observed properties. BSP models each of these entities as random variables, and uses Bayesian inference to estimate their unknown properties. For learning the unknown forces, BSP leverages symbolic regression on a novel grammar of Newtonian physics in a bilevel optimization setup. These inference and regression steps are performed in an iterative manner using expectation-maximization, allowing BSP to simultaneously learn force laws while maintaining uncertainty over entity properties. We show that BSP is more sample-efficient compared to neural alternatives on controlled synthetic datasets, demonstrate BSP's applicability to real-world common sense scenes, and study BSP's performance on tasks previously used to study human physical reasoning.
| accept | The paper presents a neural-symbolic Bayesian framework for learning physical models in a way similar to humans, by performing abduction from a few observations provided by the environment and domain knowledge given by a grammar. It aims to address an important problem. The proposed solution is reasonable, with promising results.
A few concerns have been raised: (1) evaluation: stronger baselines should be included and some results are not as good as human; (2) how the grammar was learned - ad hoc versus principled ways; (3) how general the models can be - only for some specific physical systems versus general systems. The authors were able to address these concerns to a certain extent. After discussion, it was agreed that while the paper still has some limitations, the work is interesting enough to be presented to the NeurIPS community. | train | [
"IC4rrb3FF8l",
"cSuoXkg4YTu",
"v6d1KZcD9TO",
"g5gqTRblTfB",
"0yECH8eLvDh",
"7GbWR-kN6mN",
"gKWoxsOtWt",
"6niVPKvaK0O",
"vLwxVX0_OTt",
"vLtbmjy7U0",
"U3P45Nd-I0y",
"8kbgw3gZq9t",
"Z7AUlWgyOd7"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose the Bayesian symbolic (BSP) framework for inferring both the symbolic force laws and the values of latent properties of entities.\nThey contribute a grammar of Newtonian physics over which they perform inference with the BSP framework.\nEmpirical results show higher sample efficiency than neura... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_WN8ChCARq2",
"v6d1KZcD9TO",
"g5gqTRblTfB",
"0yECH8eLvDh",
"7GbWR-kN6mN",
"vLtbmjy7U0",
"nips_2021_WN8ChCARq2",
"Z7AUlWgyOd7",
"8kbgw3gZq9t",
"IC4rrb3FF8l",
"gKWoxsOtWt",
"nips_2021_WN8ChCARq2",
"nips_2021_WN8ChCARq2"
] |
nips_2021_hl3v8io3ZYt | Associating Objects with Transformers for Video Object Segmentation | This paper investigates how to realize better and more efficient embedding learning to tackle semi-supervised video object segmentation under challenging multi-object scenarios. The state-of-the-art methods learn to decode features with a single positive object and thus have to match and segment each target separately under multi-object scenarios, consuming multiple times the computing resources. To solve the problem, we propose an Associating Objects with Transformers (AOT) approach to match and decode multiple objects uniformly. In detail, AOT employs an identification mechanism to associate multiple targets into the same high-dimensional embedding space. Thus, we can simultaneously process multiple objects' matching and segmentation decoding as efficiently as processing a single object. To sufficiently model multi-object association, a Long Short-Term Transformer is designed for constructing hierarchical matching and propagation. We conduct extensive experiments on both multi-object and single-object benchmarks to examine AOT variant networks with different complexities. Particularly, our R50-AOT-L outperforms all the state-of-the-art competitors on three popular benchmarks, i.e., YouTube-VOS (84.1% J&F), DAVIS 2017 (84.9%), and DAVIS 2016 (91.1%), while maintaining a more than 3X faster multi-object run-time. Meanwhile, our AOT-T can maintain real-time multi-object speed on the above benchmarks. Based on AOT, we ranked 1st in the 3rd Large-scale VOS Challenge.
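A toy rendering of the identification mechanism mentioned above: each target is mapped to a learned ID vector and the multi-object mask is embedded in one shot, so all objects can be propagated together. In AOT the ID vectors are randomly assigned from a bank per video; this simplified module just indexes them directly, and the class and argument names are ours.

```python
import torch
import torch.nn as nn

class IDEmbedding(nn.Module):
    def __init__(self, max_objects: int = 10, dim: int = 256):
        super().__init__()
        self.id_bank = nn.Embedding(max_objects + 1, dim)  # index 0 = background

    def forward(self, mask: torch.Tensor) -> torch.Tensor:
        # mask: (B, H, W) integer object IDs -> (B, dim, H, W) identification map
        return self.id_bank(mask).permute(0, 3, 1, 2)

emb = IDEmbedding()
m = torch.randint(0, 4, (1, 24, 24))  # a frame with up to 3 objects
print(emb(m).shape)                   # torch.Size([1, 256, 24, 24])
```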
| accept | The rebuttal addressed all of the reviewers' concerns, and all reviewers recommend acceptance. The AC agrees with this recommendation. | train | [
"eTryQxFdn82",
"DdPPx_RI12x",
"emV9MbkZAq",
"3XXZd4uQpBf",
"kVl1asgSuu",
"-y33-BsTNMw",
"6qxJC86k7x9",
"ZxHA0ButVO",
"AmMXFpWlTNm",
"6uSriKZFmOC"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I raised two main concerns during the review (scalability and randomness) and the authors addressed both concerns well. I look forward to seeing the experiments with videos with a lot of objects. I will keep my initial rating as 7. ",
" I thank the authors for providing a detailed rebuttal. The authors answer m... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"kVl1asgSuu",
"3XXZd4uQpBf",
"nips_2021_hl3v8io3ZYt",
"6uSriKZFmOC",
"AmMXFpWlTNm",
"emV9MbkZAq",
"ZxHA0ButVO",
"nips_2021_hl3v8io3ZYt",
"nips_2021_hl3v8io3ZYt",
"nips_2021_hl3v8io3ZYt"
] |
nips_2021_NPOWF_ZLfC5 | Automatic Symmetry Discovery with Lie Algebra Convolutional Network | Existing equivariant neural networks require prior knowledge of the symmetry group and discretization for continuous groups. We propose to work with Lie algebras (infinitesimal generators) instead of Lie groups. Our model, the Lie algebra convolutional network (L-conv), can automatically discover symmetries and does not require discretization of the group. We show that L-conv can serve as a building block to construct any group equivariant feedforward architecture. Both CNNs and Graph Convolutional Networks can be expressed as L-conv with appropriate groups. We discover direct connections between L-conv and physics: (1) the group-invariant loss generalizes field theory, (2) the Euler-Lagrange equation measures the robustness, and (3) equivariance leads to conservation laws and the Noether current. These connections open up new avenues for designing more general equivariant networks and applying them to important problems in physical sciences.
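Our simplified reading of an L-conv-style layer: learned generator matrices act linearly along the spatial axis, and the layer mixes the signal with its first-order "shifts" under those generators. Shapes and the parameterization are our guesses at a minimal version, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class LConv(nn.Module):
    def __init__(self, n_points: int, c_in: int, c_out: int, n_gen: int = 2):
        super().__init__()
        # Learned infinitesimal generators acting on the spatial axis.
        self.gen = nn.Parameter(0.01 * torch.randn(n_gen, n_points, n_points))
        self.w0 = nn.Linear(c_in, c_out)
        self.wa = nn.ModuleList(nn.Linear(c_in, c_out) for _ in range(n_gen))

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (batch, n_points, c_in) -> (batch, n_points, c_out)
        out = self.w0(f)
        for L, w in zip(self.gen, self.wa):
            out = out + w(torch.einsum("pq,bqc->bpc", L, f))  # first-order shift
        return out
```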
| accept | In this work, the authors build group-equivariant neural network layers. These are constructed from convolutions with sufficiently local functions that can be lifted from the Lie group to the Lie algebra. This approach is sufficiently general to encompass previous equivariant architectures, and it also comes with a universal approximation property, which the authors need to highlight.
== Why accept this paper?
The paper offers a clearly creative and novel contribution, with a number of high-value theoretical insights. The community will greatly benefit from having access to these ideas, and I look forward to seeing what and how the present authors, or other researchers, will build on top of those ideas.
== Why not a higher rating?
As mentioned by all reviewers, this paper is particularly hard to read. Part of the reason is that it deals with a difficult problem, but we can only praise the authors for this, not blame them. But another reason for the difficulty to read the paper is its very high density with a lack of focus and clarity. Following the discussion with reviewers, the authors received a number of suggestions to improve their paper, and they should implement as many of those suggestions as possible. | train | [
"9JyVnugPOP5",
"_nFGS9buiO",
"QaTFLiNRdEV",
"8ZYSOQUsKkg",
"ivQ23m7-G-S",
"uqJU9-RIjq",
"GiaHQVDgDKH",
"-KgzrF818ub",
"7LH_x6ssapd",
"aHiO9yN1QH-",
"16XCt6KnuuJ",
"iNEwYNt-GQa",
"X9lUyI5DuHS",
"juMYfUhYRuc",
"5hQWfm1x0f8",
"rjn7hFvrnwP",
"g2-VrCnK_i4",
"NZ42Jk8GMFk",
"MgLJsMQTacd... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > I think my problem, however, is that I have difficulties in thinking about how to construct useful architectures without lifting the data to the group. \nMy issue is that I get stuck when trying to figure out how to build e.g. SE(2) equivariant architectures for 2D images. To me this is most naturally done by l... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"QaTFLiNRdEV",
"8ZYSOQUsKkg",
"ivQ23m7-G-S",
"5hQWfm1x0f8",
"GiaHQVDgDKH",
"7LH_x6ssapd",
"16XCt6KnuuJ",
"nips_2021_NPOWF_ZLfC5",
"rjn7hFvrnwP",
"MgLJsMQTacd",
"MgLJsMQTacd",
"nips_2021_NPOWF_ZLfC5",
"g2-VrCnK_i4",
"g2-VrCnK_i4",
"NZ42Jk8GMFk",
"-KgzrF818ub",
"nips_2021_NPOWF_ZLfC5",... |
nips_2021_14-dXLRn4fE | Zero Time Waste: Recycling Predictions in Early Exit Neural Networks | The problem of reducing processing time of large deep learning models is a fundamental challenge in many real-world applications. Early exit methods strive towards this goal by attaching additional Internal Classifiers (ICs) to intermediate layers of a neural network. ICs can quickly return predictions for easy examples and, as a result, reduce the average inference time of the whole model. However, if a particular IC does not decide to return an answer early, its predictions are discarded, with its computations effectively being wasted. To solve this issue, we introduce Zero Time Waste (ZTW), a novel approach in which each IC reuses predictions returned by its predecessors by (1) adding direct connections between ICs and (2) combining previous outputs in an ensemble-like manner. We conduct extensive experiments across various datasets and architectures to demonstrate that ZTW achieves a significantly better accuracy vs. inference time trade-off than other recently proposed early exit methods.
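A simplified early-exit forward pass illustrating the "recycle earlier predictions" idea from the abstract: each internal classifier's logits are accumulated into a running ensemble, and inference stops once the combined prediction is confident. ZTW's direct IC-to-IC connections and learned ensemble weights are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, blocks, widths, n_classes, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)  # backbone stages (conv feature maps)
        self.ics = nn.ModuleList(nn.Linear(w, n_classes) for w in widths)
        self.threshold = threshold

    def forward(self, x):
        # Assumes batch size 1 for the early-exit decision.
        running = None
        for block, ic in zip(self.blocks, self.ics):
            x = block(x)
            logits = ic(x.mean(dim=(2, 3)))            # global pool -> IC head
            running = logits if running is None else running + logits
            probs = running.softmax(dim=1)
            if probs.max().item() > self.threshold:    # confident enough: exit;
                return probs                            # earlier ICs not wasted
        return running.softmax(dim=1)
```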
| accept | This paper introduces an architecture that uses all the outputs from the internal classifiers in an early-exit network. It is claimed that the proposed method can achieve both more accurate results and faster prediction. Initially this paper received 3 negative and 1 positive scores. The main problem raised in the negative reviews concerns the novelty of the proposed method. The main trick of adding direct connections between ICs is very similar to what has been proposed in the previous work "Improved techniques for training adaptive deep network" by Zhang et al., although the setting is claimed to be different. After discussion, the positive reviewer realized he/she had missed this reference and decreased the original score. A clearer discussion of the differences between the existing technique and the proposed one should be conducted in the revision. I personally also think testing the proposed method on larger-scale datasets such as ImageNet would be beneficial. I suggest the authors revise the paper according to the detailed reviews, which I believe would make it much stronger for future submissions. | val | [
"pYLsUuSemdi",
"Q_SdQz0mc_b",
"AhSHb7C56zl",
"uFM-HhJMh-e",
"xPadR7mVUqN",
"P4MlJe1pHI",
"zICG_p-JnsD",
"9kjxj1b_YmK",
"NkmufGhtNEL"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I like to thank the authors for putting in a lot of hard work to produce this wonderful piece of work. Also thank for authors for carefully response to my reviews. Would be really helpful to see the improvements in the paper. Look forward to that.",
" Thank you for your interest and, in particular, noticing tha... | [
-1,
-1,
-1,
-1,
-1,
6,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"Q_SdQz0mc_b",
"NkmufGhtNEL",
"9kjxj1b_YmK",
"zICG_p-JnsD",
"P4MlJe1pHI",
"nips_2021_14-dXLRn4fE",
"nips_2021_14-dXLRn4fE",
"nips_2021_14-dXLRn4fE",
"nips_2021_14-dXLRn4fE"
] |
nips_2021_t9gKUW9T8fX | On Model Calibration for Long-Tailed Object Detection and Instance Segmentation | Vanilla models for object detection and instance segmentation suffer from the heavy bias toward detecting frequent objects in the long-tailed setting. Existing methods address this issue mostly during training, e.g., by re-sampling or re-weighting. In this paper, we investigate a largely overlooked approach --- post-processing calibration of confidence scores. We propose NorCal, Normalized Calibration for long-tailed object detection and instance segmentation, a simple and straightforward recipe that reweighs the predicted scores of each class by its training sample size. We show that separately handling the background class and normalizing the scores over classes for each proposal are keys to achieving superior performance. On the LVIS dataset, NorCal can effectively improve nearly all the baseline models not only on rare classes but also on common and frequent classes. Finally, we conduct extensive analysis and ablation studies to offer insights into various modeling choices and mechanisms of our approach. Our code is publicly available at https://github.com/tydpan/NorCal.
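The calibration recipe as we read it from the abstract, in a few lines of NumPy: divide each foreground class score by a power of its training-sample count, leave the background score untouched, and re-normalize per proposal. The exponent `gamma` and the exact normalization are illustrative choices, not confirmed details of the authors' implementation.

```python
import numpy as np

def norcal(scores: np.ndarray, class_counts: np.ndarray, gamma: float = 1.0):
    """scores: (N, C+1) per-proposal scores, last column = background.
    class_counts: (C,) training-sample sizes per foreground class."""
    fg = scores[:, :-1] / (class_counts[None, :] ** gamma)  # down-weight frequent classes
    bg = scores[:, -1:]                                     # background handled separately
    calib = np.concatenate([fg, bg], axis=1)
    return calib / calib.sum(axis=1, keepdims=True)         # normalize over classes

s = np.random.dirichlet(np.ones(5), size=3)    # 3 proposals, 4 classes + background
print(norcal(s, np.array([1000, 100, 10, 1])))
```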
| accept | The reviewers were generally positive about this paper, and most of the raised concerns were cleared after the response. Thus I recommend accepting the paper. | train | [
"NWuQFEjp3Se",
"XXz0MvK0jnW",
"yy7FsSN3KoE",
"1YhnXn1407m",
"8EEpxf0o5m",
"UMIeEW69haN",
"YaRYbv3IJyy",
"IN4CfgVE3vl",
"AY-BBAfIbSF",
"LMZv7z2ale5",
"1SB3EOYI78",
"KXAwftgtVQ3"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers and AC,\n\nThank you for reading our paper and rebuttal. We have tried to address all concerns raised by the reviewers. We appreciate that Reviewer PQBv has provided positive feedback to our rebuttal. Reviewer PQBv also said, \"I've looked at the comments of the other reviewers and noticed that we ... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_t9gKUW9T8fX",
"yy7FsSN3KoE",
"AY-BBAfIbSF",
"nips_2021_t9gKUW9T8fX",
"KXAwftgtVQ3",
"1YhnXn1407m",
"LMZv7z2ale5",
"1SB3EOYI78",
"nips_2021_t9gKUW9T8fX",
"nips_2021_t9gKUW9T8fX",
"nips_2021_t9gKUW9T8fX",
"nips_2021_t9gKUW9T8fX"
] |
nips_2021_ErivP29kYnx | ReSSL: Relational Self-Supervised Learning with Weak Augmentation | Self-supervised learning (SSL), including the mainstream contrastive learning, has achieved great success in learning visual representations without data annotations. However, most methods mainly focus on instance-level information (\ie, the different augmented images of the same instance should have the same feature or cluster into the same class), and there is a lack of attention to the relationships between different instances. In this paper, we introduce a novel SSL paradigm, termed the relational self-supervised learning (ReSSL) framework, which learns representations by modeling the relationship between different instances. Specifically, our proposed method employs a sharpened distribution of pairwise similarities among different instances as the \textit{relation} metric, which is then utilized to match the feature embeddings of different augmentations. Moreover, to boost performance, we argue that weak augmentations matter for representing a more reliable relation, and we leverage a momentum strategy for practical efficiency. Experimental results show that our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency.
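A minimal version of the relation-matching loss described above: the similarity distributions of a weakly and a strongly augmented view, computed against a shared set of instance embeddings, are aligned by cross-entropy, with a sharper temperature on the weak-view target side. Temperatures and names are illustrative defaults, not the paper's exact hyperparameters.

```python
import torch
import torch.nn.functional as F

def ressl_loss(z_strong, z_weak, queue, t_s=0.1, t_t=0.04):
    """z_strong/z_weak: (B, d) embeddings of the two views;
    queue: (K, d) embeddings of other instances (e.g., a momentum queue)."""
    z_strong = F.normalize(z_strong, dim=1)
    z_weak = F.normalize(z_weak, dim=1)
    queue = F.normalize(queue, dim=1)
    log_p_student = F.log_softmax(z_strong @ queue.t() / t_s, dim=1)
    p_teacher = F.softmax(z_weak @ queue.t() / t_t, dim=1).detach()  # sharpened target
    return -(p_teacher * log_p_student).sum(dim=1).mean()

loss = ressl_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(1024, 128))
print(loss.item())
```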
| accept | This paper proposes a more nuanced take on contrastive self-supervised learning (SSL). Rather than frame the objective of SSL as a binary similar/not similar target, the authors propose to use multiple sampled augmentations as a similarity distribution. The goal of SSL becomes to embed the training data in a way that agrees with the distribution. The result can be viewed either as a relaxed form of contrastive SSL, in which multiple views of the same instance only need to be similar, not the same, or---as one reviewer pointed out---a kernel-based approach that operates on the similarities among instances.
While the idea is intuitively appealing, the authors show that simply plugging it into existing SSL setups gives very poor performance, so the authors provide a recipe for this alternative approach. In particular, only weak augmentations (as opposed to changing the instances heavily) lead to better results, in contrast with existing contrastive learning approaches where the augmentation must be strong to make the pretext task challenging. Experimental results show that the learned representations lead to significant gains on object classification tasks.
The reviewers generally agree that this paper is well written and makes a significant contribution. During the discussion phase, the authors introduced several follow up experiments, which the reviewers encourage them to include in the final version. | test | [
"QM52gkkBCE8",
"h2Hx7K91nql",
"rgL3bMiADXJ",
"AbilWpnht2B",
"kQoQ5tZ_l0k",
"ZBOEJVICacw",
"tWhuvblDIqL",
"GcowoYTEST",
"WND93_iCMbz",
"cOw3WijRCa",
"M0__eb9i-MJ",
"ZM5BBTrflQ",
"JdKIWawjj9B",
"mnzAOr5N8YU",
"OZ6tgiKO6j2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" First of all: thank you for your detailed response to my review, and apologies for the delay in responding in turn on my end!\n\nThank you for the clarification re: ablations and the connection to kernel-based approaches! I will leave my review as is at this time, which is inclined towards acceptance; I greatly ... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"WND93_iCMbz",
"ZM5BBTrflQ",
"AbilWpnht2B",
"M0__eb9i-MJ",
"nips_2021_ErivP29kYnx",
"tWhuvblDIqL",
"GcowoYTEST",
"cOw3WijRCa",
"JdKIWawjj9B",
"kQoQ5tZ_l0k",
"OZ6tgiKO6j2",
"mnzAOr5N8YU",
"nips_2021_ErivP29kYnx",
"nips_2021_ErivP29kYnx",
"nips_2021_ErivP29kYnx"
] |
nips_2021_RQUl8gZnN7O | Learning to See by Looking at Noise | Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this paper we go a step further and ask if we can do away with real image datasets entirely, instead learning from procedural noise processes. We investigate a suite of image generation models that produce images from simple random processes. These are then used as training data for a visual representation learner with a contrastive loss. In particular, we study statistical image models, randomly initialized deep generative models, and procedural graphics models. Our findings show that it is important for the noise to capture certain structural properties of real data but that good performance can be achieved even with processes that are far from realistic. We also find that diversity is a key property to learn good representations.
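A toy procedural image source in the spirit of the paper: stacks of random colored rectangles (a crude "dead-leaves"-style process) that could be fed to a contrastive learner in place of real photos. This is our toy generator, not one of the paper's exact noise processes.

```python
import numpy as np

def random_shape_image(size=64, n_shapes=12, rng=None):
    """Return a (size, size, 3) float image of random occluding rectangles."""
    rng = rng if rng is not None else np.random.default_rng()
    img = np.zeros((size, size, 3), dtype=np.float32)
    for _ in range(n_shapes):
        x0, y0 = rng.integers(0, size, 2)
        w, h = rng.integers(4, size // 2, 2)
        img[y0:y0 + h, x0:x0 + w] = rng.random(3)  # occluding colored patch
    return img

batch = np.stack([random_shape_image() for _ in range(8)])  # a "noise" minibatch
print(batch.shape)  # (8, 64, 64, 3)
```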
| accept | The paper addresses an under-explored research question related to the necessity of large datasets and the promise of doing representation-learning pretraining using only noisy processes. The reviewers found the topic, as well as the comprehensive study, very interesting and a great area of study. I concur, as I believe the paper could help foster more research in this direction, making it a great candidate for a spotlight. However, there are some lingering requests from the reviewers: I strongly advise the authors to follow up on these. | train | [
"IIAnd6xSJh",
"SlwmGyomk3H",
"6a-8WrVjIy",
"UyqjvbUSHRK",
"taWbtP8lYO-",
"b8b_rvZJZYG",
"pHh-vonxJci",
"yFgUVQr6ZP4",
"mDBUhgYLpAE",
"_5OSmlbTm7g",
"d2nkKcFC_su",
"OBbEulxOyiz",
"33dNqiejULt",
"e3MreaAhqJ",
"MsxhV8qy1S7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you authors for your detailed responses. \n\nI think the paper provides some intriguing ideas and experiments and remain of the opinion that it is a good paper.",
"This paper provides a comprehensive study of using not naturally looking, synthetic images to train image feature extractors. Most of the synt... | [
-1,
6,
-1,
6,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"_5OSmlbTm7g",
"nips_2021_RQUl8gZnN7O",
"b8b_rvZJZYG",
"nips_2021_RQUl8gZnN7O",
"d2nkKcFC_su",
"pHh-vonxJci",
"33dNqiejULt",
"nips_2021_RQUl8gZnN7O",
"OBbEulxOyiz",
"MsxhV8qy1S7",
"UyqjvbUSHRK",
"yFgUVQr6ZP4",
"SlwmGyomk3H",
"nips_2021_RQUl8gZnN7O",
"nips_2021_RQUl8gZnN7O"
] |
nips_2021_EHUsTBGIP17 | Explicit loss asymptotics in the gradient descent training of neural networks | Maksim Velikanov, Dmitry Yarotsky | accept | Four reviewers recommend this paper for acceptance. One reviewer indicated that s/he is not as concerned with the NTK assumption and that this work could initiate some interesting follow-up studies on the benefit of certain power law scalings, and raised her/his initial score. Another reviewer raised her/his score as the authors sufficiently addressed his/her questions and proposed convincing improvements. Another reviewer concludes that the primary limitation of this work is the NTK assumption, but that the clarity of the submission is high compared with other submissions, and that the reviewers agree that the results are sufficiently interesting and could initiate a number of different inquiries, both theoretical and numerical. In summary, the general consensus is for acceptance. I agree with this view and hence I am recommending this submission for acceptance. The reviewers made several suggestions, and several improvements were proposed in the authors’ responses. I request the authors take these carefully into consideration when preparing the final manuscript. | train | [
"t0c1Xp2UJK",
"775TGEdZRY5",
"XSQx8DHPI6",
"gGIPjL97Kc",
"X-koZ-W9Oo1",
"MJP3Q6853C6",
"tvRTicOyL6S",
"GcWK2kY5iHu",
"NWpkpwp-MXW",
"dda_7Fkexky",
"XSmxLeWDqZV",
"Z3QAzaPZt3",
"LX9f4ruWNEx",
"N_jemh-gHXp"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper identifies a polynomial decay rate in the loss function during gradient descent under the condition that both the eigenvalues of the semigroup operator, and the coefficients in the eigenfunction expansion of an initial loss (g), exhibit power law decay. These assumptions are verified for the semigroup op... | [
7,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6
] | [
4,
-1,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_EHUsTBGIP17",
"dda_7Fkexky",
"tvRTicOyL6S",
"nips_2021_EHUsTBGIP17",
"GcWK2kY5iHu",
"GcWK2kY5iHu",
"NWpkpwp-MXW",
"gGIPjL97Kc",
"N_jemh-gHXp",
"t0c1Xp2UJK",
"LX9f4ruWNEx",
"nips_2021_EHUsTBGIP17",
"nips_2021_EHUsTBGIP17",
"nips_2021_EHUsTBGIP17"
] |
nips_2021_cwSkaedP-wz | Test-Time Personalization with a Transformer for Human Pose Estimation | We propose to personalize a 2D human pose estimator given a set of test images of a person without using any manual annotations. While there has been significant advancement in human pose estimation, it is still very challenging for a model to generalize to different unknown environments and unseen persons. Instead of using a fixed model for every test case, we adapt our pose estimator during test time to exploit person-specific information. We first train our model on diverse data with supervised and self-supervised pose estimation objectives jointly. We use a Transformer model to build a transformation between the self-supervised keypoints and the supervised keypoints. During test time, we personalize and adapt our model by fine-tuning with the self-supervised objective. The pose is then improved by transforming the updated self-supervised keypoints. We experiment with multiple datasets and show significant improvements in pose estimation with our self-supervised personalization. Project page with code is available at https://liyz15.github.io/TTP/.
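A schematic of the online test-time loop described above. `self_supervised_loss`, `self_supervised_keypoints`, and `transformer_head` are hypothetical method names standing in for the paper's components; only the control flow (adapt on each unlabeled frame, then map self-supervised keypoints to supervised ones) is meant literally.

```python
import torch

def personalize_online(model, frames, optimizer, steps_per_frame=1):
    preds = []
    for frame in frames:                      # unlabeled test stream of one person
        for _ in range(steps_per_frame):
            loss = model.self_supervised_loss(frame)   # hypothetical method name
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                  # adapt without any annotations
        with torch.no_grad():
            kp_ss = model.self_supervised_keypoints(frame)   # hypothetical
            preds.append(model.transformer_head(kp_ss))      # hypothetical
    return preds
```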
| accept | This work addresses the problem of test time personalization for human keypoint estimation from images. It builds on existing self-supervised keypoint discovery methods (e.g. [26]) by adding a Transformer-based adaptor for learning the transformation between self-supervised and supervised keypoints. The novelty lies in the use of a learned affinity matrix that maps between the two representations. During personalization, the model is adapted to the test set in either an online or offline setting. Quantitative results are presented on three different human pose datasets.
There were concerns in the reviews regarding the choice of datasets used. For instance, in Human3.6M, while there can be changes induced by the different cameras, the overall variation is very small and is highly correlated with the individual cameras. One reviewer was also concerned that the method, as presented, could not be trained on the large number of existing static datasets (e.g., COCO) due to the need for source and target frames featuring the same individual. The choice of datasets directly influences the difficulty of the task, e.g., the range of poses, the appearance variation in the backgrounds, etc.
During the discussion, the authors have provided additional experiments investigating occlusion, more training iterations, and cross dataset transfer. These experiments were valuable for better understanding the strengths and weaknesses of the proposed method. In the end, some of the reviewers increased their scores and the majority leaned towards acceptance. The authors are strongly encouraged to include these additional results and explanations in the revised paper along with adding additional discussion of the limitations mentioned by the reviewers of the experiments (e.g. the challenges associated with large changes in background appearance).
| train | [
"UWVYdR0umnE",
"dKhOvX3jZvQ",
"PAjepV8psdA",
"KveixlbPLOX",
"AbhLJKY8uaF",
"JSpUI11m9ed",
"A4Kr3S4eEK",
"aFpMI38-zgv",
"_3M3Cuq2kp",
"vSiEo2I5qgx",
"08f9pO4i5x",
"JLwDWnUiBno",
"dFBBD-tcvcJ",
"851BemPJIoi"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We answer the question on the COCO dataset in our first reply above. Our answer is more focusing on how our method is applied to diverse images with high background, occlusion variations, but not on the single image aspect. We would like to discuss more here. \n\nSince our work is focusing on “personalization”, t... | [
-1,
5,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
5,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"PAjepV8psdA",
"nips_2021_cwSkaedP-wz",
"JLwDWnUiBno",
"nips_2021_cwSkaedP-wz",
"vSiEo2I5qgx",
"nips_2021_cwSkaedP-wz",
"08f9pO4i5x",
"_3M3Cuq2kp",
"851BemPJIoi",
"KveixlbPLOX",
"JSpUI11m9ed",
"dKhOvX3jZvQ",
"nips_2021_cwSkaedP-wz",
"nips_2021_cwSkaedP-wz"
] |
nips_2021_X8SLExrO2Lp | Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN | Image-based virtual try-on is one of the most promising applications of human-centric image generation due to its tremendous real-world potential. Yet, as most try-on approaches fit in-shop garments onto a target person, they require the laborious and restrictive construction of a paired training dataset, severely limiting their scalability. While a few recent works attempt to transfer garments directly from one person to another, alleviating the need to collect paired datasets, their performance is impacted by the lack of paired (supervised) information. In particular, disentangling style and spatial information of the garment becomes a challenge, which existing methods either address by requiring auxiliary data or extensive online optimization procedures, thereby still inhibiting their scalability. To achieve a scalable virtual try-on system that can transfer arbitrary garments between a source and a target person in an unsupervised manner, we thus propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on. Specifically, to disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module for successfully retaining garment texture and shape characteristics. Guided by the source person's keypoints, the patch-routed disentanglement module first decouples garments into normalized patches, thus eliminating the inherent spatial information of the garment, and then reconstructs the normalized patches to the warped garment complying with the target person pose. Given the warped garment, PASTA-GAN further introduces novel spatially-adaptive residual blocks that guide the generator to synthesize more realistic garment details. Extensive comparisons with paired and unpaired approaches demonstrate the superiority of PASTA-GAN, highlighting its ability to generate high-quality try-on images when faced with a large variety of garments (e.g., vests, shirts, pants), taking a crucial step towards real-world scalable try-on.
| accept | The paper was reviewed by four expert reviewers in the community. Most reviewers appreciate the novelty of unpaired training for virtual try-on (although paired data is not a critical restriction in this problem domain). The conditional extension of StyleGAN2 is somewhat similar to StylePoseGAN. All reviewers are aware that StylePoseGAN is an unpublished work, so the reviewer assessment of this paper is not affected by an arXiv paper. There were extensive discussions between the authors and Reviewer 5Zj3. The AC read the reviews, the authors' rebuttal, and the discussions. While some clarifications of the method exposition are still required and several other limiting factors remain (e.g., low-resolution results only, a similar approach to concurrent work), the AC thinks the paper has sufficient merits and could inspire future research work. The AC thus recommends acceptance. | train | [
"DRSJ8sT1J79",
"y_bIX6h9WtQ",
"CiwXli-wFf",
"Kv8IPD8X2Ix",
"dosdBjS9nAe",
"yp4muKFBCkU",
"LB4Hv-poXY6",
"Q5MIpkK6r3t",
"OvcxxxSI-zl",
"8ZIqh_Miyrc",
"uXN49fL9tA4",
"czvSBKhOTKW",
"GrnjjM-5KX",
"zQicZQ4PqlY",
"6PESR0qPE0",
"7hrDKxylZ4X",
"r8QC4U7mvAL",
"LPqeUu2GH88",
"I4cvUrg9ejD"... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" As the discussion phase is coming to its end, we would like to provide another clarification and reply to the updated review of Reviewer 5Zj3. In particular:\n\n1. Our approach, does not neglect the training test gap. We design a purposeful designed module in the network that reduces this gap and in addition make... | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Kv8IPD8X2Ix",
"CiwXli-wFf",
"dosdBjS9nAe",
"nips_2021_X8SLExrO2Lp",
"yp4muKFBCkU",
"LB4Hv-poXY6",
"8ZIqh_Miyrc",
"OvcxxxSI-zl",
"6PESR0qPE0",
"uXN49fL9tA4",
"czvSBKhOTKW",
"GrnjjM-5KX",
"r8QC4U7mvAL",
"eYwe2R0aGIN",
"I4cvUrg9ejD",
"LPqeUu2GH88",
"Kv8IPD8X2Ix",
"nips_2021_X8SLExrO2... |
nips_2021_DsWYWm6ozxx | Bias Out-of-the-Box: An Empirical Analysis of Intersectional Occupational Biases in Popular Generative Language Models | The capabilities of natural language models trained on large-scale data have increased immensely over the past few years. Open source libraries such as HuggingFace have made these models easily available and accessible. While prior research has identified biases in large language models, this paper considers biases contained in the most popular versions of these models when applied `out-of-the-box' for downstream tasks. We focus on generative language models as they are well-suited for extracting biases inherited from training data. Specifically, we conduct an in-depth analysis of GPT-2, which is the most downloaded text generation model on HuggingFace, with over half a million downloads per month. We assess biases related to occupational associations for different protected categories by intersecting gender with religion, sexuality, ethnicity, political affiliation, and continental name origin. Using a template-based data collection pipeline, we collect 396K sentence completions made by GPT-2 and find: (i) The machine-predicted jobs are less diverse and more stereotypical for women than for men, especially for intersections; (ii) Intersectional interactions are highly relevant for occupational associations, which we quantify by fitting 262 logistic models; (iii) For most occupations, GPT-2 reflects the skewed gender and ethnicity distribution found in US Labor Bureau data, and even pulls the societally-skewed distribution towards gender parity in cases where its predictions deviate from real labor market observations. This raises the normative question of what language models \textit{should} learn - whether they should reflect or correct for existing inequalities.
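A sketch of a template-based probing loop like the one described above, using the public HuggingFace `transformers` API. The template, attribute lists, and completion handling are simplified stand-ins for the paper's 396K-completion protocol.

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)

template = "The {attr} {gender} worked as a"
for attr in ["Black", "White", "Asian"]:
    for gender in ["man", "woman"]:
        prompt = template.format(attr=attr, gender=gender)
        outs = generator(prompt, max_new_tokens=8, do_sample=True,
                         num_return_sequences=5)
        completions = [o["generated_text"][len(prompt):].strip() for o in outs]
        print(prompt, "->", completions)  # map these onto occupation categories
```

In the paper's setup the sampled completions are then parsed into occupation labels and compared across intersectional groups and against US Labor Bureau statistics; that analysis step is omitted here.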
| accept | This paper investigates associations between occupations and demographic attributes in text generated by GPT-2 from a narrow range of templated prompts, and shows that GPT-2's generations indeed tend to mirror observed trends in the US labor market, opening up the potential for it to reinforce existing inequities.
Reviewers raised significant concerns about the novelty and significance of this work, but all agreed that the core claims are sound. During discussion, a consensus formed that it would be helpful to publish this work to help continue conversations about bias in self-supervised models at NeurIPS. | train | [
"l7U2iC9Emy5",
"RwbX0ay-p_",
"rGW881IsvWw",
"Zlehe_2cujw",
"fD5KWQTggQA",
"GR-oJkjtcHV",
"hjkeiHvlXaP",
"iLAZoDp-63N",
"L8WIygWsf4X",
"GmPWmYQFeND",
"j4w06BNy1X",
"A0urr2U1MMa"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my comment.\n\nOverall, I have recommended accepting this paper. Other reviewers have listed venue and generalization issues. I advise the authors to address these issues particularly generalization issues (do you think such intersectional bias issues are prevalent in other models and why... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"fD5KWQTggQA",
"L8WIygWsf4X",
"nips_2021_DsWYWm6ozxx",
"hjkeiHvlXaP",
"j4w06BNy1X",
"hjkeiHvlXaP",
"rGW881IsvWw",
"A0urr2U1MMa",
"GmPWmYQFeND",
"nips_2021_DsWYWm6ozxx",
"nips_2021_DsWYWm6ozxx",
"nips_2021_DsWYWm6ozxx"
] |
nips_2021_uVPZCMVtsSG | Weisfeiler and Lehman Go Cellular: CW Networks | Graph Neural Networks (GNNs) are limited in their expressive power, struggle with long-range interactions and lack a principled way to model higher-order structures. These problems can be attributed to the strong coupling between the computational graph and the input graph structure. The recently proposed Message Passing Simplicial Networks naturally decouple these elements by performing message passing on the clique complex of the graph. Nevertheless, these models can be severely constrained by the rigid combinatorial structure of Simplicial Complexes (SCs). In this work, we extend recent theoretical results on SCs to regular Cell Complexes, topological objects that flexibly subsume SCs and graphs. We show that this generalisation provides a powerful set of graph "lifting" transformations, each leading to a unique hierarchical message passing procedure. The resulting methods, which we collectively call CW Networks (CWNs), are strictly more powerful than the WL test and not less powerful than the 3-WL test. In particular, we demonstrate the effectiveness of one such scheme, based on rings, when applied to molecular graph problems. The proposed architecture benefits from provably larger expressivity than commonly used GNNs, principled modelling of higher-order signals and from compressing the distances between nodes. We demonstrate that our model achieves state-of-the-art results on a variety of molecular datasets.
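A toy version of the ring-based "lifting" that the abstract alludes to: attach a 2-cell to each basis cycle of a molecular graph and record which rings each edge bounds, giving the upper adjacencies used for hierarchical message passing. `nx.cycle_basis` is a simplification of the paper's induced-cycle lifting.

```python
import networkx as nx

def lift_with_rings(g: nx.Graph):
    rings = [tuple(c) for c in nx.cycle_basis(g)]      # candidate 2-cells
    edge_to_rings = {tuple(sorted(e)): [] for e in g.edges()}
    for rid, ring in enumerate(rings):
        for i in range(len(ring)):
            e = tuple(sorted((ring[i], ring[(i + 1) % len(ring)])))
            edge_to_rings[e].append(rid)               # edge is a face of this ring
    return rings, edge_to_rings

g = nx.cycle_graph(6)                                  # benzene-like ring
rings, boundary = lift_with_rings(g)
print(rings)     # one 6-cycle
print(boundary)  # every edge bounds ring 0 (its upper adjacency)
```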
| accept | This paper proposes a new graph neural network framework allowing message passing over higher-order structures. The paper provides a clear theoretical foundation and also delivers convincing empirical evidence. Although it was mentioned that the contribution is a bit incremental over existing work [4,24], overall the paper was received positively, even passionately. One reviewer did raise a strong concern about the presentation of the paper, but based on the paper and all other reviewers' opinions, the presentation concern does not seem to be widely shared. Therefore, the AC recommends that the paper be accepted. | train | [
"BKSOJ5Q83L4",
"GlqABiPJ8TL",
"weV9LNNQfRG",
"_jn85Q5sX5",
"K84EQBNwTpj",
"JmqD7VZ2Qxt",
"NFGYRDi6p1L",
"pypEIa2d0kx",
"NiZ4btGR_1K",
"r2o4d_BXoxk",
"_cr37zEuHdg",
"6mtP-sWH525",
"ST0DW1EKTds",
"FZ6KiNTWzZl",
"I-MVTRnEKo5",
"fzT6BGmqIk3",
"MEOVmyIemz",
"1dT9V8eOT3a"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Recently, there have been several attempts to put higher-order structures on graphs and perform message passing over these structures. One example is the Message Passing Simplicial Networks proposed in e.g, ref[4]. The present paper extends the higher-order structures from simplicial complexes (SCs) to regular cel... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"nips_2021_uVPZCMVtsSG",
"weV9LNNQfRG",
"_cr37zEuHdg",
"K84EQBNwTpj",
"JmqD7VZ2Qxt",
"NFGYRDi6p1L",
"6mtP-sWH525",
"NiZ4btGR_1K",
"ST0DW1EKTds",
"nips_2021_uVPZCMVtsSG",
"1dT9V8eOT3a",
"BKSOJ5Q83L4",
"MEOVmyIemz",
"MEOVmyIemz",
"fzT6BGmqIk3",
"nips_2021_uVPZCMVtsSG",
"nips_2021_uVPZC... |
nips_2021_SMU_hbhhEQ | Learning Conjoint Attentions for Graph Neural Nets | In this paper, we present Conjoint Attentions (CAs), a class of novel learning-to-attend strategies for graph neural networks (GNNs). Besides considering the layer-wise node features propagated within the GNN, CAs can additionally incorporate various structural interventions, such as node cluster embeddings and higher-order structural correlations that can be learned outside of the GNN, when computing attention scores. The node features that are regarded as significant by the conjoint criteria are therefore more likely to be propagated in the GNN. Given the novel Conjoint Attention strategies, we then propose Graph conjoint attention networks (CATs) that can learn representations embedded with significant latent features deemed by the Conjoint Attentions. In addition, we theoretically validate the discriminative capacity of CATs. CATs utilizing the proposed Conjoint Attention strategies have been extensively tested on well-established benchmarking datasets and comprehensively compared with state-of-the-art baselines. The notable performance obtained demonstrates the effectiveness of the proposed Conjoint Attentions.
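A minimal rendering of the "conjoint" scoring idea: the attention logit between two nodes mixes the usual feature term with a structural term learned outside the message passing (here, a low-rank factorization of the adjacency), under a learnable trade-off. This is a schematic, not the paper's exact CAT layer; a softmax over each node's neighborhood would follow these logits.

```python
import torch
import torch.nn as nn

class ConjointAttentionScores(nn.Module):
    def __init__(self, n_nodes: int, d_feat: int, d_struct: int = 16):
        super().__init__()
        self.a = nn.Linear(2 * d_feat, 1)                       # GAT-style feature term
        self.u = nn.Parameter(torch.randn(n_nodes, d_struct))   # structural factors
        self.mix = nn.Parameter(torch.tensor(0.0))              # learnable trade-off

    def forward(self, h, edge_index):
        src, dst = edge_index                                   # LongTensors of endpoints
        feat = self.a(torch.cat([h[src], h[dst]], dim=1)).squeeze(1)
        struct = (self.u[src] * self.u[dst]).sum(dim=1)         # entries of U U^T
        w = torch.sigmoid(self.mix)
        return w * feat + (1.0 - w) * struct                    # pre-softmax logits

h = torch.randn(5, 8)
edges = (torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3]))
print(ConjointAttentionScores(5, 8)(h, edges))
```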
| accept | There is general consensus among the reviewers that the paper should be accepted.
The authors made extensive effort to answer reviewer comments and provide additional empirical results.
We expect the authors to implement all clarity improvements, as requested by the reviewers.
Figure 1 looks pixelized; it should be converted to vector graphics. | test | [
"AQ4Zn8m-ah",
"1t8tbDb-yTP",
"MiLaWjiYDUh",
"JJ20UO1rZnZ",
"IS_FxCNP7F3",
"1ZNEoR5ov9a",
"nSPSR0txIMJ",
"Pbt9jOOk4s",
"G5bQeqZL_2K",
"Umcepjuusl1",
"VOrh387h-1y",
"kSjaY1yJg3K",
"AiLfINWpgbs",
"i-0j4Ful6px",
"8sHN4ybQpD",
"ieWEE7aBe7",
"qsHlAPqskRp",
"SjTsXSpAvC",
"hWLsahNbh8H",
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
... | [
"This paper proposes a new technique for augmenting Graph Attentions with information other than node feature embedding in GNNs. The Conjoint Attentions (CAs) proposed in this paper parametrize the similarity hints between nodes, such as adjacency matrix, in the form of matrix factorization, and modify the attentio... | [
7,
-1,
-1,
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_SMU_hbhhEQ",
"JJ20UO1rZnZ",
"1ZNEoR5ov9a",
"gid2Gii25kr",
"nips_2021_SMU_hbhhEQ",
"kSjaY1yJg3K",
"G5bQeqZL_2K",
"nips_2021_SMU_hbhhEQ",
"qsHlAPqskRp",
"sGDBx7l1SM7",
"nips_2021_SMU_hbhhEQ",
"Umcepjuusl1",
"IS_FxCNP7F3",
"8sHN4ybQpD",
"SjTsXSpAvC",
"nips_2021_SMU_hbhhEQ",
"... |
nips_2021_h3M00I96Ed | Hybrid Regret Bounds for Combinatorial Semi-Bandits and Adversarial Linear Bandits | This study aims to develop bandit algorithms that automatically exploit tendencies of certain environments to improve performance, without any prior knowledge regarding the environments. We first propose an algorithm for combinatorial semi-bandits with a hybrid regret bound that includes two main features: a best-of-three-worlds guarantee and multiple data-dependent regret bounds. The former means that the algorithm will work nearly optimally in all environments in an adversarial setting, a stochastic setting, or a stochastic setting with adversarial corruptions. The latter implies that, even if the environment is far from exhibiting stochastic behavior, the algorithm will perform better as long as the environment is "easy" in terms of certain metrics. The easiness metrics referred to in this paper include the cumulative loss for optimal actions, the total quadratic variation of losses, and the path-length of a loss sequence. We also show hybrid data-dependent regret bounds for adversarial linear bandits, which include the first path-length regret bound that is tight up to logarithmic factors.
| accept | The reviewers agree that this is an interesting and significant contribution. | train | [
"zIR0_UNMBLI",
"E0HcoYq6CC_",
"nfJi0yw_u9Q",
"T_mqvqCHEN7",
"z2z7kvlLINJ",
"sMqaC33DXMW",
"aJ1BlS3JOK",
"4JIUT-wV2nJ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper provides an algorithm for the combinatorial semi-bandits that achieves regret bound with three well known notions of data dependence regret in adversary regime. Besides that the provided algorithms attains best of both worlds result (Zimmert et al., 2019) up to a logarithmic factor in adversary setting.... | [
7,
7,
6,
7,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
4,
-1,
-1,
-1,
-1
] | [
"nips_2021_h3M00I96Ed",
"nips_2021_h3M00I96Ed",
"nips_2021_h3M00I96Ed",
"nips_2021_h3M00I96Ed",
"nfJi0yw_u9Q",
"T_mqvqCHEN7",
"E0HcoYq6CC_",
"zIR0_UNMBLI"
] |
nips_2021_4J_H903nUE | Pay Better Attention to Attention: Head Selection in Multilingual and Multi-Domain Sequence Modeling | Hongyu Gong, Yun Tang, Juan Pino, Xian Li | accept | This paper analyzes multi-head attention in multilingual and multi-domain sequence modeling tasks. The paper claims that non-selective attention sharing is sub-optimal for achieving good generalization across all languages and domains, and proposes new attention sharing strategies for different languages and domains to mitigate interference. Experiments are reported in speech recognition, text-to-text and speech-to-text translation, with consistent gains.
Most reviewers agree that the proposed method is novel and interesting and that the paper is clear and well written, though some experiments need to be clarified (addressed in the rebuttal). The main weaknesses pointed out by the reviewers are a non-standard conditional VAE formulation, which the authors clarified in their rebuttal, and the lack of discussion of and positioning with respect to studies that learn to share components for multilingual and multi-domain settings (including adaptors). In a future iteration, I urge the authors to take into account the detailed comments made by the reviewers when preparing a new version of their paper. | train | [
"ETO8qNdLVLK",
"qAPMJUOWllE",
"VPjOeBOl6vV",
"6Ah50RGd1Iz",
"ceYDTzVYC9a",
"gb11H1l0yij",
"h5C_aQirSJA",
"i8Wvi16nen8",
"Co6AEk2zvC5",
"rc7zqxrBbGZ",
"OkVKgZkXGNv",
"gLMKMueHbon",
"3-2EhOAH_OP",
"WlnLo0oRLEw",
"03esATi-sVs",
"lb5iWlcg4H",
"aw8GTDU9jV",
"piKy9G9mtg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"o... | [
"Multi-head attention is an essential component for popular Transformer models. This paper proposes to learn shared and specialized attention heads for different languages and domains. The authors formulate attention selection as latent variables and adopt Gumbel softmax to select attention heads. Experiments on t... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_4J_H903nUE",
"VPjOeBOl6vV",
"6Ah50RGd1Iz",
"ceYDTzVYC9a",
"gb11H1l0yij",
"h5C_aQirSJA",
"i8Wvi16nen8",
"3-2EhOAH_OP",
"lb5iWlcg4H",
"WlnLo0oRLEw",
"nips_2021_4J_H903nUE",
"03esATi-sVs",
"ETO8qNdLVLK",
"aw8GTDU9jV",
"OkVKgZkXGNv",
"piKy9G9mtg",
"nips_2021_4J_H903nUE",
"ni... |
nips_2021_gkyg2aOE6MU | Cardinality-Regularized Hawkes-Granger Model | We propose a new sparse Granger-causal learning framework for temporal event data. We focus on a specific class of point processes called the Hawkes process. We begin by pointing out that most of the existing sparse causal learning algorithms for the Hawkes process suffer from a singularity in maximum likelihood estimation. As a result, their sparse solutions can appear only as numerical artifacts. In this paper, we propose a mathematically well-defined sparse causal learning framework based on a cardinality-regularized Hawkes process, which remedies the pathological issues of existing approaches. We leverage the proposed algorithm for the task of instance-wise causal event analysis, where sparsity plays a critical role. We validate the proposed framework with two real use-cases, one from the power grid and the other from the cloud data center management domain.
| accept | The paper addresses an important problem, and the reviewers consider the method sound. There were some concerns that the work is too incremental, but I would not overestimate this issue given the relevance of the problem.
| train | [
"RfS8ieuWonF",
"Og8i8C97zDD",
"iKmtZ5FSYv3",
"n5Khdsw4h_e",
"eRl6AnXJwxW",
"IZlg85fqHMT",
"Q0YqAMlzOca",
"4UgvzKZA7td",
"MEChfrGl3Y8",
"oMhVcrIbW8M"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking time to read our response. If allowed, we will definitely update the text to better describe the points you raised. ",
" Thank you for the response.\nI almost seem to understand your responses.\nHowever, it may be difficult to read such things in the current paper. I hope the related presen... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
4,
2
] | [
"Og8i8C97zDD",
"Q0YqAMlzOca",
"n5Khdsw4h_e",
"IZlg85fqHMT",
"nips_2021_gkyg2aOE6MU",
"eRl6AnXJwxW",
"oMhVcrIbW8M",
"MEChfrGl3Y8",
"nips_2021_gkyg2aOE6MU",
"nips_2021_gkyg2aOE6MU"
] |
nips_2021_zAuDbrHC6fq | Aligned Structured Sparsity Learning for Efficient Image Super-Resolution | Yulun Zhang, Huan Wang, Can Qin, Yun Fu | accept | This paper studies the issue of pruning for super-resolution networks. The reviewers agree that the approach to pruning architecture-specific components, like resnet blocks, offers some interesting advances. However the authors seem to agree that the paper could do a better job communicating the novelty of the proposed methods, and that experiments show fairly small improvements over existing methods for pruning networks of this type.
| train | [
"ukXbw0A67jM",
"QylV7DMDgDM",
"mv6cOjDKUA",
"yBXfGIOgzLW",
"NVXPFIB6OKB",
"VmGHZaNaq4i",
"GoAbytLZfet"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a novel aligned structured sparsity learning (ASSL) method to prune the SR network. To tackle the pruned filter location mismatch issue in SR networks, a sparsity structure alignment penalty term is introduced to align the pruned filter indices across different layers. The final pruned network a... | [
5,
-1,
-1,
-1,
-1,
4,
4
] | [
5,
-1,
-1,
-1,
-1,
4,
3
] | [
"nips_2021_zAuDbrHC6fq",
"ukXbw0A67jM",
"ukXbw0A67jM",
"VmGHZaNaq4i",
"GoAbytLZfet",
"nips_2021_zAuDbrHC6fq",
"nips_2021_zAuDbrHC6fq"
] |
nips_2021_UAjh00C0BhT | Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks | The lottery ticket hypothesis (LTH) states that learning on a properly pruned network (the winning ticket) has improved test accuracy over the original unpruned network. Although LTH has been justified empirically in a broad range of applications involving deep neural networks (DNNs), such as computer vision and natural language processing, the theoretical validation of the improved generalization of a winning ticket remains elusive. To the best of our knowledge, our work, for the first time, characterizes the performance of training a pruned neural network by analyzing the geometric structure of the objective function and the sample complexity to achieve zero generalization error. We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned, indicating the structural importance of a winning ticket. Moreover, as the algorithm for training a pruned neural network is specified as an (accelerated) stochastic gradient descent algorithm, we theoretically show that the number of samples required for achieving zero generalization error is proportional to the number of non-pruned weights in the hidden layer. With a fixed number of samples, training a pruned neural network enjoys a faster convergence rate to the desired model than training the original unpruned one, providing a formal justification of the improved generalization of the winning ticket. Our theoretical results are acquired from learning a pruned neural network of one hidden layer, while experimental results are further provided to justify the implications in pruning multi-layer neural networks.
| accept | We thank the authors for this submission. The paper motivates the approach well. The authors have provided extensive responses to the concerns raised, and the AC + reviewers really thank them for their effort. Overall, the new results obtained during the rebuttal definitely improve the quality of the paper. We all believe that including these results (in summarized form) does not heavily change the message of this paper.
There was discussion and consensus that this work is interesting. Keeping in mind the issues/concerns raised by the reviewers and the authors (via private communication), the consensus of the reviewers during further discussion was that this paper deserves publication, given the fixes promised by the authors during the discussion period. | train | [
"SHbcmQeWzzL",
"_H1Ec0IZVrE",
"36EUihamED",
"FkJS8T_Ck5D",
"NOnewKWl7W6",
"2vSf2fkH8jY",
"-oK68bjIBLB",
"FZdha9eb7XW",
"Nle_7HaTD4J",
"mHLi0ES-kYs",
"bSd20phLSmY",
"qrYYbqnBlq",
"gc36RB_TzcA",
"ftU29LQD2gL",
"7TkDQJueRBJ",
"kL-2lrG-vXy",
"ANQUbDQQG-",
"KWeDz0sSi_2"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for checking our response and the other reviewers' comments. In the meantime, we would like to provide further responses to the two concerns that you raised.\n\n$\\textbf{Why assumption is needed: One-hidden-layer.}$\n\nMost existing results in the line of neural network theory are centered on the one-h... | [
-1,
3,
-1,
-1,
-1,
7,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
4,
-1,
-1,
-1,
3,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"_H1Ec0IZVrE",
"nips_2021_UAjh00C0BhT",
"_H1Ec0IZVrE",
"NOnewKWl7W6",
"gc36RB_TzcA",
"nips_2021_UAjh00C0BhT",
"Nle_7HaTD4J",
"bSd20phLSmY",
"kL-2lrG-vXy",
"nips_2021_UAjh00C0BhT",
"qrYYbqnBlq",
"mHLi0ES-kYs",
"2vSf2fkH8jY",
"_H1Ec0IZVrE",
"_H1Ec0IZVrE",
"mHLi0ES-kYs",
"KWeDz0sSi_2",
... |
nips_2021__1HETTYd7Wr | Constrained Robust Submodular Partitioning | Shengjie Wang, Tianyi Zhou, Chandrashekhar Lavania, Jeff A. Bilmes | accept | The reviewers find that the theoretical results in this paper are strong and that there are interesting novel ideas. In particular, the reviewers agree that adding constraints to the robust submodular partitioning problem introduces significant technical challenges. Overall, this paper makes significant progress to the area of robust submodular maximization. | train | [
"3BZxRGNhdH",
"bShiDzcUZ1K",
"Mt_1QfnZYM4",
"li9PFodYGwW",
"R3AefQfzMvy",
"7z_3kR2e28v",
"x5ZlAxA99xl",
"axol7c-iMG",
"sxX02t9-Iz4"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Provides constant approximation algorithms for certain \"robust\" or min-max submodular maximization problems. They consider cardinality and matroid constraints. Also provide good experimental results on partitioning a training dataset. The paper is on a “robust” submodular maximization problem. There is a monot... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
7,
8
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021__1HETTYd7Wr",
"R3AefQfzMvy",
"nips_2021__1HETTYd7Wr",
"3BZxRGNhdH",
"sxX02t9-Iz4",
"Mt_1QfnZYM4",
"axol7c-iMG",
"nips_2021__1HETTYd7Wr",
"nips_2021__1HETTYd7Wr"
] |
nips_2021_rMm9d_aDtOa | Online Knapsack with Frequency Predictions | There has been recent interest in using machine-learned predictions to improve the worst-case guarantees of online algorithms. In this paper we continue this line of work by studying the online knapsack problem, but with very weak predictions: in the form of knowing an upper and lower bound for the number of items of each value. We systematically derive online algorithms that attain the best possible competitive ratio for any fixed prediction; we also extend the results to more general settings such as generalized one-way trading and two-stage online knapsack. Our work shows that even seemingly weak predictions can be utilized effectively to provably improve the performance of online algorithms.
| accept | The paper proposes a new algorithm for the online knapsack problem. The algorithm utilizes the knowledge of upper and lower bounds for the number of items of each value, and its competitive ratio depends on the gap between the bounds.
The reviewers found the problem formulation and the algorithm to be interesting and novel. However, there were concerns about whether the proposed method can be viewed as a “learning-augmented algorithm”, given that the paper does not explicitly address the case where the predictions are inaccurate. Furthermore, some reviewers commented that the experiments demonstrating the applicability of the proposed algorithm were quite limited, and did not illustrate where the appropriate predictions would come from.
| train | [
"g-OVk0V825c",
"BqjojHdbWK9",
"rFOOanf-0gO",
"plAAP0hU4tn",
"bQ2VMerPqBX",
"Xy4aXBjgSzp"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your careful review and comments. \n\n**(1)**\nWe respectfully disagree with this assessment. Most prior works on learning-augmented algorithms assume predictions regarding specific values of input parameters, whereas we consider a setting where the predictions only specify a range in which the par... | [
-1,
-1,
-1,
4,
4,
7
] | [
-1,
-1,
-1,
4,
4,
3
] | [
"plAAP0hU4tn",
"Xy4aXBjgSzp",
"bQ2VMerPqBX",
"nips_2021_rMm9d_aDtOa",
"nips_2021_rMm9d_aDtOa",
"nips_2021_rMm9d_aDtOa"
] |
nips_2021_zO6Q8q2AmbV | On Component Interactions in Two-Stage Recommender Systems | Jiri Hron, Karl Krauth, Michael Jordan, Niki Kilbertus | accept | Two-stage recommenders are critically important in production recommender systems, yet as the authors and reviewers point out, there has been very little theoretical analysis of this problem. While the reviewers do reasonably question whether bandit-based analysis is really the right theoretical framework, the author response rightfully points out that their analysis captures generic characteristics of some deployed two-stage recommenders. Post-rebuttal, the majority of reviewers agreed with a decision to accept and believe this work can inspire follow-on research in this potentially high-impact area. The authors are strongly encouraged to address review concerns (e.g., on power-law and Zipf-like distributions) and integrate their insightful rebuttal discussion into the paper (or Appendix), even when it may be self-critical of the present work. | train | [
"xMnwDfLr6xb",
"hNFFdIl0qdv",
"SJU3koKGPSO",
"l-h9aQKcJ58",
"kvs-HoVirjs",
"cDw_zAHTqs8",
"yIRfYAwuK6t",
"Hcs0lVTkZJK",
"C6w-ZXzG9g"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > “My main concern is what are the differences between RS and others. The mentioned ranker and nominator are also existed in some information systems like Q&A, Web search, sentence matching, … Without clearly addressing those differences, readers may feel that the current version can be applied mechanically to ot... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"C6w-ZXzG9g",
"Hcs0lVTkZJK",
"yIRfYAwuK6t",
"cDw_zAHTqs8",
"nips_2021_zO6Q8q2AmbV",
"nips_2021_zO6Q8q2AmbV",
"nips_2021_zO6Q8q2AmbV",
"nips_2021_zO6Q8q2AmbV",
"nips_2021_zO6Q8q2AmbV"
] |
nips_2021_x6z8J_17LP3 | Lip to Speech Synthesis with Visual Context Attentional GAN | In this paper, we propose a novel lip-to-speech generative adversarial network, Visual Context Attentional GAN (VCA-GAN), which can jointly model local and global lip movements during speech synthesis. Specifically, the proposed VCA-GAN synthesizes the speech from local lip visual features by finding a mapping function of viseme-to-phoneme, while global visual context is embedded into the intermediate layers of the generator to clarify the ambiguity in the mapping induced by homophenes. To achieve this, a visual context attention module is proposed that encodes global representations from the local visual features, and provides the desired global visual context corresponding to the given coarse speech representation to the generator through audio-visual attention. In addition to the explicit modelling of local and global visual representations, synchronization learning is introduced as a form of contrastive learning that guides the generator to synthesize speech in sync with the given input lip movements. Extensive experiments demonstrate that the proposed VCA-GAN outperforms existing state-of-the-art methods and is able to effectively synthesize speech from multiple speakers, which has barely been handled in previous works.
| accept | The authors investigate speech generation from silent videos based on the so-called visual context attentional GAN. The problem is interesting and challenging. The authors propose to use global visual context to reduce the ambiguity when mapping visemes to phonemes and introduce a synchronization mechanism via contrastive learning to make speech and lip movements in sync. Extensive experiments and evaluation are conducted to compare its performance with a variety of existing techniques. The proposed VCA-GAN reports the state-of-the-art performance in the lip-to-speech domain. The paper is well written and easy to follow. The experiments are extensive yet controlled. The rebuttal has clarified most of the concerns in the review. All reviewers are supportive of accepting the paper. The authors should revise the submission to address the concerns (including the ethical concerns) raised by the reviewers. | train | [
"jzmePZX9iLb",
"FxcJI5pAkl",
"3Twut3oN_H3",
"BEK9Hh2cDR0",
"LB68eKTB4_7",
"tToma6PU-t-",
"IHXpJh9qqYs",
"IZBsEnXn9tg",
"s2h2IcFbS89",
"fsqgOjA6dky",
"GsOda9XApO",
"s0amb7oyOxU",
"mKsdjDhUU9",
"_jGRzXgrGq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for this response. I appreciate the effort in addressing the comments raised in my and other reviews.",
" Thanks for responding to the review comments and for clarifying the issues raised in the review. I would like to retain my original score.",
" We would like to thank the reviewer for the valuabl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"IZBsEnXn9tg",
"tToma6PU-t-",
"fsqgOjA6dky",
"s2h2IcFbS89",
"_jGRzXgrGq",
"mKsdjDhUU9",
"GsOda9XApO",
"s0amb7oyOxU",
"nips_2021_x6z8J_17LP3",
"nips_2021_x6z8J_17LP3",
"nips_2021_x6z8J_17LP3",
"nips_2021_x6z8J_17LP3",
"nips_2021_x6z8J_17LP3",
"nips_2021_x6z8J_17LP3"
] |
nips_2021_gZLhHMyxa- | Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis | Jikai Jin, Bohang Zhang, Haiyang Wang, Liwei Wang | accept | The paper provides convergence guarantees for a stochastic method on a family of non-convex DRO problems, getting around the non-trivial challenge of lacking global smoothness and variance bounds. The reviewers appreciated the quality of the writing, relevance of the topic, and the technical novelty of the results, and from my own reading of the paper I received a similar impression. When revising the paper, please take care to thoroughly address the comments provided by the reviewers and myself. In particular, it is crucial to provide the test performance of the model you train, in order to show that the problems and hyperparameters for which you solve DRO are such that it really provides an improvement in robustness. | train | [
"1koie_mG2fC",
"nhsukFtm1b0",
"KXKpQxmhupB",
"PVn6w_evrJA",
"LCp87ZK5YqA",
"peffOjwdBB",
"AaFBFqlJQw",
"9Yk03jazIOu",
"9PHQ0WiajIp",
"jQyw6vLH4BC",
"1GqNqYfEmve"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank AC for the careful reading and valuable comments/questions. Below are our responses to these comments/questions.\n\n**Regarding the comment on scaling.** We have followed your advice by using the criterion $\\Vert \\nabla_x \\mathcal{L}(x,\\eta) \\Vert + G| \\nabla_\\eta \\mathcal{L}(x,\\eta) | \\le \\ep... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
5,
3
] | [
"nhsukFtm1b0",
"nips_2021_gZLhHMyxa-",
"LCp87ZK5YqA",
"1GqNqYfEmve",
"9PHQ0WiajIp",
"jQyw6vLH4BC",
"9Yk03jazIOu",
"nips_2021_gZLhHMyxa-",
"nips_2021_gZLhHMyxa-",
"nips_2021_gZLhHMyxa-",
"nips_2021_gZLhHMyxa-"
] |
nips_2021_bsGr_8zmRos | Goal-Aware Cross-Entropy for Multi-Target Reinforcement Learning | Learning in a multi-target environment without prior knowledge about the targets requires a large number of samples and makes generalization difficult. To solve this problem, it is important to be able to discriminate targets through semantic understanding. In this paper, we propose a goal-aware cross-entropy (GACE) loss, which can be utilized in a self-supervised way using auto-labeled goal states alongside reinforcement learning. Based on the loss, we then devise goal-discriminative attention networks (GDAN), which utilize the goal-relevant information to focus on the given instruction. We evaluate the proposed methods on visual navigation and robot arm manipulation tasks with multi-target environments and show that GDAN outperforms the state-of-the-art methods in terms of task success ratio, sample efficiency, and generalization. Additionally, qualitative analyses demonstrate that our proposed method can help the agent become aware of and focus on the given instruction clearly, promoting goal-directed behavior.
| accept | All reviewers agree that the multi-target RL setting, where the agent is to reach specified and variable goals (called targets) depending on the task specification, is an important setup, and that the proposed extensions of the RL formulation -- a loss to predict goal states and a second to focus on the features relevant for the goal -- are novel, well motivated, and empirically supported. Some of the reviewers recommend slightly improving the writing and further clarifying the self-supervision setup, which we believe can be easily accomplished for the final version. Hence accept. | train | [
"MSOhuffHjsK",
"-ygLOa6aMJp",
"fub7J2uekRi",
"zJgq05Ehxq_",
"gfYPCyrw8RZ",
"yY6h3r2viSN",
"mP6p87Tl7C4",
"iUeTLcy2x6J",
"KtcysmOXS9G",
"aZ137BTTsA7",
"bHrgmgp9aBv",
"2ofwbKKF__r",
"65TMyttduZ0",
"v6TAjyjDx-"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" We are delighted to hear that your concerns have been addressed.\nThank you again for your valuable comments.\n\nSincerely yours, Authors.",
" We are glad that most of your concerns about the comparison and experiments have been addressed.\nAgain, we highly appreciate your suggestions for new experiments, as th... | [
-1,
-1,
-1,
7,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
-1,
4,
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"yY6h3r2viSN",
"iUeTLcy2x6J",
"KtcysmOXS9G",
"nips_2021_bsGr_8zmRos",
"nips_2021_bsGr_8zmRos",
"aZ137BTTsA7",
"nips_2021_bsGr_8zmRos",
"bHrgmgp9aBv",
"2ofwbKKF__r",
"gfYPCyrw8RZ",
"mP6p87Tl7C4",
"v6TAjyjDx-",
"zJgq05Ehxq_",
"nips_2021_bsGr_8zmRos"
] |
nips_2021_yxsak5ND2pA | Smooth Normalizing Flows | Normalizing flows are a promising tool for modeling probability distributions in physical systems. While state-of-the-art flows accurately approximate distributions and energies, applications in physics additionally require smooth energies to compute forces and higher-order derivatives. Furthermore, such densities are often defined on non-trivial topologies. A recent example is Boltzmann Generators for generating 3D structures of peptides and small proteins. These generative models leverage the space of internal coordinates (dihedrals, angles, and bonds), which is a product of hypertori and compact intervals. In this work, we introduce a class of smooth mixture transformations working on both compact intervals and hypertori. Mixture transformations employ root-finding methods to invert them in practice, which has so far prevented bi-directional flow training. To this end, we show that parameter gradients and forces of such inverses can be computed from forward evaluations via the inverse function theorem. We demonstrate two advantages of such smooth flows: they allow training by force matching to simulation data and can be used as potentials in molecular dynamics simulations.
| accept | The paper introduces a flow architecture referred to as smooth normalizing flows. These are C^K-smooth maps that work on compact intervals and hypertori.
One of the main contributions is to propose a smooth transformation on the unit-interval.
The paper is overall well-written and technically sound. Conceptually, there is not much innovation in the training setup. But the experimental results are quite promising with both toy and real data. Testing learned models by using them in MD simulations (Langevin diffusions) is a neat idea, as it requires the model log-likelihood's gradient to be sufficiently accurate.
I am quite surprised to see no connections made between "force matching" and the whole literature of "score matching" [e.g. 1, 2, 3, 4, 5, 6] in section 3. I strongly advise the authors to add further discussions and references in this regard.
[1] Hyvärinen, A., 2005. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6(4).
[2] Song, Y., Sohl-Dickstein, J., Kingma, D.P., Kumar, A., Ermon, S. and Poole, B., 2020. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456.
[3] Song, Y. and Ermon, S., 2019. Generative modeling by estimating gradients of the data distribution. arXiv preprint arXiv:1907.05600.
[4] Bordes, F., Honari, S. and Vincent, P., 2017. Learning to generate samples from noise through infusion training. arXiv preprint arXiv:1703.06975.
[5] Song, Y. and Ermon, S., 2020. Improved techniques for training score-based generative models. arXiv preprint arXiv:2006.09011. | train | [
"qNvW5ITiN_",
"dig_lEoLgex",
"l65zu8oQMSX",
"zLTgqtMsd7l",
"u2Qra85iZzP",
"PiIHa7gJ73O",
"5I4ayuJAVdK",
"9jS6OgU43J",
"MG9AwPll_5s",
"jlQheYILtpo"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper is proposing smooth normalizing flows for modeling probability distribution in physical systems. This is because the existing flow-based model approximates distributions and energies by computing forces and higher-order derivatives based on smoothed energies. This work aims at addressing this challenge... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5
] | [
"nips_2021_yxsak5ND2pA",
"nips_2021_yxsak5ND2pA",
"u2Qra85iZzP",
"PiIHa7gJ73O",
"jlQheYILtpo",
"MG9AwPll_5s",
"dig_lEoLgex",
"qNvW5ITiN_",
"nips_2021_yxsak5ND2pA",
"nips_2021_yxsak5ND2pA"
] |
nips_2021_Q-PA3D1OsDz | MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images | In this paper, we aim to create generalizable and controllable neural signed distance fields (SDFs) that represent clothed humans from monocular depth observations. Recent advances in deep learning, especially neural implicit representations, have enabled human shape reconstruction and controllable avatar generation from different sensor inputs. However, to generate realistic cloth deformations from novel input poses, watertight meshes or dense full-body scans are usually needed as inputs. Furthermore, due to the difficulty of effectively modeling pose-dependent cloth deformations for diverse body shapes and cloth types, existing approaches resort to per-subject/cloth-type optimization from scratch, which is computationally expensive. In contrast, we propose an approach that can quickly generate realistic clothed human avatars, represented as controllable neural SDFs, given only monocular depth images. We achieve this by using meta-learning to learn an initialization of a hypernetwork that predicts the parameters of neural SDFs. The hypernetwork is conditioned on human poses and represents a clothed neural avatar that deforms non-rigidly according to the input poses. Meanwhile, it is meta-learned to effectively incorporate priors of diverse body shapes and cloth types and thus can be much faster to fine-tune, compared to models trained from scratch. We qualitatively and quantitatively show that our approach outperforms state-of-the-art approaches that require complete meshes as inputs while our approach requires only depth frames as inputs and runs orders of magnitude faster. Furthermore, we demonstrate that our meta-learned hypernetwork is very robust, being the first to generate avatars with realistic dynamic cloth deformations given as few as 8 monocular depth frames.
| accept | This submission introduces a method that enables the generation of realistic clothed human avatars from monocular depth images. While the initial reviews are mixed, after rebuttal, all reviewers are positive and recommend acceptance. The AC agrees. The authors should try to address the reviewers' concerns in the camera-ready version. This includes adding the results from the rebuttal period, clarifying the concerns on meta-learning as reviewer XKUG suggested, among others. | train | [
"HD7HEeME2Hi",
"jmTTcue26q7",
"F4KnaIec4oB",
"jgmKAKWmhuQ",
"JT4iZ6PfFyV",
"p1iF0G0BPzs",
"dpEFPthN8EZ",
"MfsUtAvAWN3",
"j3oKBgTnLYE",
"1Zhi4sJ6-L",
"wSv6LJpBcyU"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your responses!\nI think that the experiments makes the submission stronger.",
" Thank you for your suggestion! We followed your suggestion and added additional experiments as requested:\n\n**Setting**: We use the official SCANimate release code, which comes with 16 training raw scans of subject 033... | [
-1,
-1,
8,
6,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
-1,
-1,
4,
2,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"jmTTcue26q7",
"JT4iZ6PfFyV",
"nips_2021_Q-PA3D1OsDz",
"nips_2021_Q-PA3D1OsDz",
"MfsUtAvAWN3",
"F4KnaIec4oB",
"jgmKAKWmhuQ",
"wSv6LJpBcyU",
"1Zhi4sJ6-L",
"nips_2021_Q-PA3D1OsDz",
"nips_2021_Q-PA3D1OsDz"
] |
nips_2021_edCFRvlWqV | Distributed Principal Component Analysis with Limited Communication | We study efficient distributed algorithms for the fundamental problem of principal component analysis and leading eigenvector computation on the sphere, when the data are randomly distributed among a set of computational nodes. We propose a new quantized variant of Riemannian gradient descent to solve this problem, and prove that the algorithm converges with high probability under a set of necessary spherical-convexity properties. We give bounds on the number of bits transmitted by the algorithm under common initialization schemes, and investigate the dependency on the problem dimension in each case.
| accept | All reviewers support acceptance for the contributions on communication-efficient distributed PCA and the theoretical aspects of the proposed approach. I also recommend acceptance. However, please make sure to incorporate experimental results in the final version to validate the theoretical findings. A discussion on how to compute multiple eigenvectors would also strengthen the paper. | train | [
"upCZdAXLFqw",
"CFDQO7J3FMU",
"7v5DTe3lH6S",
"fglIylyJ54",
"G3Clk9_-Yl8",
"X7VNJ-bLXvy",
"Iz35jYWPAh1",
"_yTy1jdfr2",
"1TmPPyG7D2i",
"vB2VXolMsb",
"LpKSPjd9Qx",
"x4tit1a9Qor",
"0LiZyFyqWH"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper considers the problem of distributed computation of the leading eigenvector of a data covariance matrix. To solve this problem, a distributed variant of Riemannian gradient descent is proposed. Convergence is proven and an upper bound is given on the number of bits that need to be transmitted to estimate... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_edCFRvlWqV",
"upCZdAXLFqw",
"G3Clk9_-Yl8",
"nips_2021_edCFRvlWqV",
"X7VNJ-bLXvy",
"Iz35jYWPAh1",
"LpKSPjd9Qx",
"0LiZyFyqWH",
"upCZdAXLFqw",
"nips_2021_edCFRvlWqV",
"fglIylyJ54",
"nips_2021_edCFRvlWqV",
"nips_2021_edCFRvlWqV"
] |
nips_2021_Bl0GlLmNGLV | Newton-LESS: Sparsification without Trade-offs for the Sketched Newton Update | In second-order optimization, a potential bottleneck can be computing the Hessian matrix of the optimized function at every iteration. Randomized sketching has emerged as a powerful technique for constructing estimates of the Hessian which can be used to perform approximate Newton steps. This involves multiplication by a random sketching matrix, which introduces a trade-off between the computational cost of sketching and the convergence rate of the optimization. A theoretically desirable but practically much too expensive choice is to use a dense Gaussian sketching matrix, which produces unbiased estimates of the exact Newton step and offers strong problem-independent convergence guarantees. We show that the Gaussian matrix can be drastically sparsified, substantially reducing the computational cost, without affecting its convergence properties in any way. This approach, called Newton-LESS, is based on a recently introduced sketching technique: LEverage Score Sparsified (LESS) embeddings. We prove that Newton-LESS enjoys nearly the same problem-independent local convergence rate as Gaussian embeddings for a large class of functions. In particular, this leads to a new state-of-the-art convergence result for an iterative least squares solver. Finally, we substantially extend LESS embeddings to include uniformly sparsified random sign matrices which can be implemented efficiently and perform well in numerical experiments.
| accept | The reviewers all appreciated the improved benefits of the new dimensionality reduction scheme over existing methods for achieving subspace embeddings. There was initially some confusion about why existing methods could not achieve similar results, as well as about which second-order optimization methods the approach could be applied to, but these points were clarified well in the rebuttal, and all reviewers were in agreement and enjoyed reading this paper. | train | [
"FpgfLDIy5s2",
"fpxzKuXw-Bn",
"hdz7WoaVg_-",
"5hXY2QwEg5I",
"h-zllaq91T9",
"6GCfqFldDph",
"ZgqAC2jhn3h",
"lGMzm1C7Xcg",
"4F9wNdXmqR",
"VlFgVmGqtzI",
"1oxzFKQgVgv"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies a method to speed up Newton-type optimization methods using randomized sketching techniques (specifically, leverage score sparsification-based sketching). Newton-type optimization methods are hampered by their expensive computational costs (primarily arising due to the need to compute an invers... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_Bl0GlLmNGLV",
"lGMzm1C7Xcg",
"nips_2021_Bl0GlLmNGLV",
"ZgqAC2jhn3h",
"nips_2021_Bl0GlLmNGLV",
"1oxzFKQgVgv",
"hdz7WoaVg_-",
"VlFgVmGqtzI",
"FpgfLDIy5s2",
"nips_2021_Bl0GlLmNGLV",
"nips_2021_Bl0GlLmNGLV"
] |
nips_2021_EAdJEN8xKUl | Confident Anchor-Induced Multi-Source Free Domain Adaptation | Unsupervised domain adaptation has attracted considerable academic attention by transferring knowledge from a labeled source domain to an unlabeled target domain. However, most existing methods assume the source data are drawn from a single domain, which cannot be successfully applied to explore complementarily transferable knowledge from multiple source domains with large distribution discrepancies. Moreover, they require access to source data during training, which is inefficient and impractical due to privacy preservation and memory storage concerns. To address these challenges, we develop a novel Confident-Anchor-induced multi-source-free Domain Adaptation (CAiDA) model, which is a pioneer exploration of knowledge adaptation from multiple source domains to the unlabeled target domain without any source data, but with only pre-trained source models. Specifically, a source-specific transferable perception module is proposed to automatically quantify the contributions of the complementary knowledge transferred from multi-source domains to the target domain. To generate pseudo labels for the target domain without access to the source data, we develop a confident-anchor-induced pseudo label generator by constructing a confident anchor group and assigning each unconfident target sample with a semantic-nearest confident anchor. Furthermore, a class-relationship-aware consistency loss is proposed to preserve consistent inter-class relationships by aligning soft confusion matrices across domains. Theoretical analysis answers why multi-source domains are better than a single source domain, and establishes a novel learning bound to show the effectiveness of exploiting multi-source domains. Experiments on several representative datasets illustrate the superiority of our proposed CAiDA model. The code is available at https://github.com/Learning-group123/CAiDA.
| accept | The theoretical contribution for multi-source domain adaptation without source data is nice. Though some components in the proposed method are borrowed from existing techniques, the overall design of the proposed method is based on the insights of the theoretical analysis. As [1] is (if I am not wrong) the first such work, and it only provides theoretical analysis, the authors should add a section in a revision discussing the differences in theoretical findings between the proposed method and [1].
Overall, I recommend acceptance for this work. | val | [
"uT5x1mPGer",
"dQdIWsxWx3e",
"bwlbRHl0UXG",
"REwCs6qaIa5",
"JylalHAvAdj",
"2G_vlxU52h3",
"28_2pfvcszr",
"0MEnRs6VMo",
"QuDrxvGd0O9",
"MrmoHePSXfk",
"5s2a5_Re9t",
"0vN2-ezgamU",
"xELFf7mpjG0",
"SFz5uzaE1cB",
"oBXsTTpezSJ",
"ejBWhz6MyUa",
"7jUiOp2f6s8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, an algorithm for multi-source domain adaptation is proposed. The algorithm is developed in a source-free regime to address the concern for privacy. A loss function is designed to integrate the source-specific end-to-end classifiers using solely the target domain samples, which preserves privacy. The... | [
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_EAdJEN8xKUl",
"nips_2021_EAdJEN8xKUl",
"REwCs6qaIa5",
"dQdIWsxWx3e",
"ejBWhz6MyUa",
"28_2pfvcszr",
"oBXsTTpezSJ",
"MrmoHePSXfk",
"uT5x1mPGer",
"uT5x1mPGer",
"dQdIWsxWx3e",
"dQdIWsxWx3e",
"ejBWhz6MyUa",
"ejBWhz6MyUa",
"7jUiOp2f6s8",
"nips_2021_EAdJEN8xKUl",
"nips_2021_EAdJE... |
nips_2021_vy9jsg8VyoG | Word2Fun: Modelling Words as Functions for Diachronic Word Representation | Benyou Wang, Emanuele Di Buccio, Massimo Melucci | accept | This paper was thoroughly vetted by reviewers who were able to confirm its novelty, limitations (some experimental comparisons would have better contextualised the results), the general importance of the problem (learning diachronic word representations) in terms of direct applications, and the technical appropriateness of the solution. Like early work championed in ML venues such as LDA that went on to have an important impact in application areas, this is a serious technical contribution that will have a long afterlife in diachronic sociolinguistics. | train | [
"Zsyu7scX8LL",
"Yi1B-D5Wcy1",
"YsG1rbQu-pN",
"Ls7e3SeXBE",
"HdYx0Iqaixg",
"_nyrqItw4aI",
"SXaqbr-Hakw",
"CPawPDzvFJ2",
"I6KrY8SJ9P",
"_s2fPciZItj",
"14L4U-lcdKk",
"FTANny4P54",
"ck5JWJ5gM9e",
"2CgwbkHNUng"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n\nThe following table reports on a case study to show that the proposed Word2Fun could at least to some extent capture word meaning change regarding words with multiple senses. We used the word **gay** for the case study.\n\n |word | 1900s|1920s|1940s|1960s|1980s|2000s |\n|-------|--------------|---------|----... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"Ls7e3SeXBE",
"nips_2021_vy9jsg8VyoG",
"Ls7e3SeXBE",
"nips_2021_vy9jsg8VyoG",
"2CgwbkHNUng",
"FTANny4P54",
"_nyrqItw4aI",
"Ls7e3SeXBE",
"Ls7e3SeXBE",
"ck5JWJ5gM9e",
"Ls7e3SeXBE",
"nips_2021_vy9jsg8VyoG",
"nips_2021_vy9jsg8VyoG",
"nips_2021_vy9jsg8VyoG"
] |
nips_2021_-S1V_oEOE52 | Iteratively Reweighted Least Squares for Basis Pursuit with Global Linear Convergence Rate | Christian Kümmerle, Claudio Mayrink Verdun, Dominik Stöger | accept | All the reviewers stressed that this paper presents strong theoretical and numerical contributions, and it should be accepted. They commented that the strengths of the paper (in terms of quality of the exposition and numerical evaluation) outweigh its weaknesses (in particular the use of the NSP). This being said, I strongly recommend that the authors follow the recommendations of the reviewers to improve the quality of the paper. | train | [
"7SHOqszbet5",
"9o8gyUdjtRS",
"6PfbHvy2hUR",
"xeuBvlMNrKA",
"RxNWZOlzOm5",
"BS5bWS-b2q3",
"uwFUZycODlA",
"k0CnU7TU1TN",
"T3ZzmdKzobx",
"k_Jkhh3MXyT",
"mIoqkx53Ock"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. Overall, I think this is a nice addition to the literature, and there are possibly interesting followups that the authors could do for a journal version.",
"In this paper, the authors study the convergence rate of the Iterative Reweighted Least Square to solve Basis Pursuit. After recal... | [
-1,
7,
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
4,
3
] | [
"uwFUZycODlA",
"nips_2021_-S1V_oEOE52",
"BS5bWS-b2q3",
"T3ZzmdKzobx",
"nips_2021_-S1V_oEOE52",
"9o8gyUdjtRS",
"RxNWZOlzOm5",
"mIoqkx53Ock",
"k_Jkhh3MXyT",
"nips_2021_-S1V_oEOE52",
"nips_2021_-S1V_oEOE52"
] |
nips_2021_Mcldz4OJ6QB | Low-Rank Constraints for Fast Inference in Structured Models | Structured distributions, i.e. distributions over combinatorial spaces, are commonly used to learn latent probabilistic representations from observed data. However, scaling these models is bottlenecked by the high computational and memory complexity with respect to the size of the latent representations. Common models such as Hidden Markov Models (HMMs) and Probabilistic Context-Free Grammars (PCFGs) require time and space quadratic and cubic in the number of hidden states respectively. This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models. We show that by viewing the central inference step as a matrix-vector product and using a low-rank constraint, we can trade off model expressivity and speed via the rank. Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces while providing practical speedups.
| accept | This paper presents an approach to speeding up marginalization in structured unsupervised models by using a low-rank approximation to the corresponding matrix (HMM) and tensor (PCFG) multiplies. Experiments are conducted on a variety of tasks. Reviewers are split -- two in favor and two opposed. Generally, reviewers appreciated the paper's clarity, viewed the underlying algorithmic idea positively, and saw the paper's goal of speeding up structured unsupervised models as important and well-motivated. The main concern shared by many reviewers relates to the empirical results and practicality of the proposed approach. While the proposed approach does reduce asymptotic bounds on computation, it also reduces model capacity. The question of whether the proposed approach offers a practical speedup (or reduction in memory consumption) at useful performance levels is an important one. The draft included very few experiments measuring runtime. Author response provided more complete runtime comparisons with baselines, but unfortunately showed very modest gains (and in some cases, none at all). Thus, taking these points in balance, I lean towards rejection. However, I do encourage the authors to revise experiments and resubmit if more practical gains can be achieved. | train | [
"lI03KiK5m-Y",
"91F8Phiqeh_",
"vfuV5W0-VOM",
"fVKquYth-8d",
"4CkQw_zTD_d",
"ayiBGMUXCsC",
"WlT7JUoU1OG",
"IXklabekgI",
"f1YxZSPQ62d",
"t_MHCW0Z9m",
"d0Puc8_qLVR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Appreciate your response. Decided to modify my score to reflect this.",
"This work proposes to reduce the runtime of the marginalization computation needed for inference in structured (graphical) models. Some examples of such models are: Hidden-Markov Models (HMM), Hidden Semi-Markov Models (HSMM), and Probabil... | [
-1,
6,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7
] | [
-1,
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"f1YxZSPQ62d",
"nips_2021_Mcldz4OJ6QB",
"nips_2021_Mcldz4OJ6QB",
"ayiBGMUXCsC",
"IXklabekgI",
"d0Puc8_qLVR",
"vfuV5W0-VOM",
"t_MHCW0Z9m",
"91F8Phiqeh_",
"nips_2021_Mcldz4OJ6QB",
"nips_2021_Mcldz4OJ6QB"
] |
nips_2021_4CrjylrL9vM | Accumulative Poisoning Attacks on Real-time Data | Collecting training data from untrusted sources exposes machine learning services to poisoning adversaries, who maliciously manipulate training data to degrade the model accuracy. When trained on offline datasets, poisoning adversaries have to inject the poisoned data in advance of training, and the order of feeding these poisoned batches into the model is stochastic. In contrast, practical systems are more often trained/fine-tuned on sequentially captured real-time data, in which case poisoning adversaries could dynamically poison each data batch according to the current model state. In this paper, we focus on the real-time settings and propose a new attacking strategy, which affiliates an accumulative phase with poisoning attacks to secretly (i.e., without affecting accuracy) magnify the destructive effect of a (poisoned) trigger batch. By mimicking online learning and federated learning on MNIST and CIFAR-10, we show that model accuracy significantly drops by a single update step on the trigger batch after the accumulative phase. Our work validates that a well-designed but straightforward attacking strategy can dramatically amplify the poisoning effects, with no need to explore complex techniques.
| accept | The reviewers were overall positive regarding the paper, and they were satisfied by the rebuttal. After the discussion period, the reviewers requested that the following things be included in the paper:
1. More description of the optimization problem in Algorithm 1.
2. A table about the capacity of the attacker in the federated learning setting.
3. Include the additional empirical evaluations mentioned by the authors during the rebuttal period. | test | [
"fjk0A1vfRxq",
"ohDwae3D8ad",
"kWzMffosqq",
"ndNoh7wRkMt",
"5zA25mpATPq",
"4f60PEn3p9U",
"1Yi6uMSn898",
"4OE68Fc2irD",
"F7cjPk8ZUS_",
"YcE5so9mUKL",
"PkDXCEpMU6S",
"puYqEoslLgn",
"w6-4d40Ok5",
"loxlOagNZOM",
"dNb5BmQXvsY",
"vc1zSDBBN9Z",
"OojrAxW0g9J",
"OIVMzVX6kGJ",
"icD0y9atdnm... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
... | [
" Thank you very much for the increase on rating and valuable feedback. We highly appreciate that. We'll try our best to further improve in the final version.",
" Thank you for your thorough response and additional experiments. I have increased my score to reflect my concerns being addressed, and your commitment ... | [
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"ohDwae3D8ad",
"1Yi6uMSn898",
"nips_2021_4CrjylrL9vM",
"4f60PEn3p9U",
"nips_2021_4CrjylrL9vM",
"4OE68Fc2irD",
"loxlOagNZOM",
"icD0y9atdnm",
"PkDXCEpMU6S",
"nips_2021_4CrjylrL9vM",
"UI6vm9rfI1K",
"w6-4d40Ok5",
"OojrAxW0g9J",
"dNb5BmQXvsY",
"OIVMzVX6kGJ",
"nips_2021_4CrjylrL9vM",
"S0AJ... |
nips_2021_Jhp38rtUTV | UCB-based Algorithms for Multinomial Logistic Regression Bandits | Sanae Amani, Christos Thrampoulidis | accept | This paper proposes a UCB-based algorithm for multinomial logit bandits. Its initial scores were 6, 6, 5, and 5; and they did not change during the discussion. The reviewers liked the rebuttal of the authors and agreed that this paper would be a good contribution to NeurIPS. The main factor in this decision was the importance of the application. On the other hand, the paper builds heavily on Faury et al. (2020) and contains no experiments. In my opinion, this paper is borderline plus and I support its acceptance. | val | [
"Vj5Ytga9Gtr",
"q7NfjaPkjyq",
"PkLYcD92h4w",
"jBlO9rIay60",
"5fTFP3uXfQw",
"c7g67QGnTHv",
"XMApBG4of9Y",
"1IKqBy0opy",
"PhGZNG1B1SN",
"zZ1dSaB6rUE",
"milTQ2cdsOz",
"GH1V4RD2NfW",
"eMOEk6OJy1r"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers and AC,\n\nAs the discussion period reaches its end, we wanted to thank you all for your time spent on our submission. \n\nWe appreciate your feedback. If there are more questions, we are always happy to discuss and clarify.\n\nBest regards,\nAuthors\n\n",
" Dear Reviewer,\n\nThank you again for ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
4
] | [
"nips_2021_Jhp38rtUTV",
"PkLYcD92h4w",
"c7g67QGnTHv",
"5fTFP3uXfQw",
"1IKqBy0opy",
"GH1V4RD2NfW",
"eMOEk6OJy1r",
"milTQ2cdsOz",
"zZ1dSaB6rUE",
"nips_2021_Jhp38rtUTV",
"nips_2021_Jhp38rtUTV",
"nips_2021_Jhp38rtUTV",
"nips_2021_Jhp38rtUTV"
] |
nips_2021_nzqoh6FN6sF | Estimating the Long-Term Effects of Novel Treatments | Policy makers often need to estimate the long-term effects of novel treatments, while only having historical data of older treatment options. We propose a surrogate-based approach using a long-term dataset where only past treatments were administered and a short-term dataset where novel treatments have been administered. Our approach generalizes previous surrogate-style methods, allowing for continuous treatments and serially-correlated treatment policies while maintaining consistency and root-n asymptotically normal estimates under a Markovian assumption on the data and the observational policy. Using a semi-synthetic dataset on customer incentives from a major corporation, we evaluate the performance of our method and discuss solutions to practical challenges when deploying our methodology.
| accept | Thanks to the authors for this engaging submission. The reviewers were all quite positive about this work, and I'm happy to recommend acceptance.
One shared concern among the reviewers is clarity of the submission --- as reviewer ZEEt notes, sections 2 and 3 are a bit confusing in their presentation, and should be streamlined in the final version. And as reviewer SfcW discussed, the incorporation of a new synthetic experiment will help understand the relationship between estimation bias and strength of correlation, even if placed in the appendix and referred to in the main text. | train | [
"Ik6dJUXTuZ",
"kkWfHVsT22_",
"K6IP7lpmCdh",
"puNEhoyvgv3",
"dxiBoeVty_2",
"wWXX8dyvlUR",
"hJCxkIUvDsX",
"sYKs2DpNepj",
"0cqQoMVbOi8",
"K-kYdRCKAfb",
"_wD9-GC69pY",
"APkshgvLeEm",
"8nhbCMcRD4H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper develops a method for estimating cumulative effects of treatments when available data is limited to historical data of old treatments and short-term data of new treatments. A key challenge in this setting is that treatment assignments over time are correlated with each other, making it difficult to isol... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"nips_2021_nzqoh6FN6sF",
"K-kYdRCKAfb",
"puNEhoyvgv3",
"dxiBoeVty_2",
"wWXX8dyvlUR",
"hJCxkIUvDsX",
"8nhbCMcRD4H",
"APkshgvLeEm",
"Ik6dJUXTuZ",
"_wD9-GC69pY",
"nips_2021_nzqoh6FN6sF",
"nips_2021_nzqoh6FN6sF",
"nips_2021_nzqoh6FN6sF"
] |
nips_2021_-K4tIyQLaY | Dual Progressive Prototype Network for Generalized Zero-Shot Learning | Generalized Zero-Shot Learning (GZSL) aims to recognize new categories with auxiliary semantic information, e.g., category attributes. In this paper, we handle the critical issue of domain shift, i.e., confusion between seen and unseen categories, by progressively improving cross-domain transferability and category discriminability of visual representations. Our approach, named Dual Progressive Prototype Network (DPPN), constructs two types of prototypes that record prototypical visual patterns for attributes and categories, respectively. With attribute prototypes, DPPN alternately searches attribute-related local regions and updates corresponding attribute prototypes to progressively explore accurate attribute-region correspondence. This enables DPPN to produce visual representations with accurate attribute localization ability, which benefits the semantic-visual alignment and representation transferability. Besides, along with progressive attribute localization, DPPN further projects category prototypes into multiple spaces to progressively repel visual representations from different categories, which boosts category discriminability. Both attribute and category prototypes are collaboratively learned in a unified framework, which makes visual representations of DPPN transferable and distinctive. Experiments on four benchmarks prove that DPPN effectively alleviates the domain shift problem in GZSL.
| accept | This paper addresses generalised ZSL by proposing a new prototype refinement-based scheme to reduce confusion between known and novel categories. Reviewers felt there were various sources of confusion and issues with explanation quality. However, they also agree that the methodology is solid enough and the evaluation is strong. I recommend accept, and encourage the authors to take the reviewers' comments on board in refining the quality and clarity of exposition in the final version. | train | [
"RkJRzZ_7yuq",
"snLpMPTk1j",
"1_3J7YlPU6W",
"kWRSnCRCdxE",
"uURCUMgklVZ",
"U2xtEcMidi",
"khKgKOWSeMg",
"K7DZd_z00jp",
"bA828cBr5qs",
"r8gEZuMIPOH",
"UYxC367Gv6",
"1ZfgWr_Jusx"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your appreciation and further suggestions. We will add the results measured by AUSUC for all experiments in our revision. Here, we provide the important results that you are concerned in terms of AUSUC.\n\n\n\n**Effect of different aggregation mechanisms.**\n\n| Aggregation Mechanism | CUB AUSUC | AWA2... | [
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"1_3J7YlPU6W",
"nips_2021_-K4tIyQLaY",
"r8gEZuMIPOH",
"khKgKOWSeMg",
"nips_2021_-K4tIyQLaY",
"uURCUMgklVZ",
"UYxC367Gv6",
"bA828cBr5qs",
"1ZfgWr_Jusx",
"snLpMPTk1j",
"nips_2021_-K4tIyQLaY",
"nips_2021_-K4tIyQLaY"
] |
nips_2021_NVAOPWZWYlv | Derivative-Free Policy Optimization for Linear Risk-Sensitive and Robust Control Design: Implicit Regularization and Sample Complexity | Policy-based model-free reinforcement learning (RL) methods have shown great promise for continuous control applications. However, their performances on risk-sensitive/robust control tasks have not been fully understood, which has been generally considered to be one important open problem in the seminal work (Fazel et al., 2018). We make a step toward addressing this open problem, by providing the first sample complexity results for policy gradient (PG) methods in two fundamental risk-sensitive/robust control settings: the linear exponential quadratic Gaussian, and the linear-quadratic (LQ) disturbance attenuation problems. The optimization landscapes for these problems are by nature more challenging than that of the LQ regulator problem, due to lack of coercivity of their objective functions. To overcome this challenge, we obtain the first implicit regularization results for model-free PG methods, certifying that the controller remains robust during the learning process, which further lead to the sample complexity guarantees. As a by-product, our results also provide the first sample complexity of PG methods in two-player zero-sum LQ dynamic games, a baseline in multi-agent RL.
| accept | The paper analyzes policy gradient methods in the context of a two-player LQR game (which can be shown to correspond to certain robust control problems) and establishes polynomial sample complexity. The authors show that the optimization landscape is more difficult, as unlike standard LQR, a decrease in the objective does not ensure feasibility of the iterates. They propose a double-loop algorithm that alternates between optimizing controllers for the two players. For the problematic outer loop, they show that NPG and Gauss-Newton with certain step size preserve feasibility. They extend the results to the case where gradients are estimated from samples, and show a polynomial sample complexity.
The established sample complexity seems suboptimal, and the paper could be organized better, especially in terms of positioning its contributions in the context of existing work. However, overall the contributions are strong enough to warrant acceptance.
| train | [
"jM46TPMmriV",
"VYDb9A5S0SE",
"mtpLykgWNxG",
"2QxbH94o-n",
"5jYbJ5ncwm-",
"QheoLYrqTfl",
"-CHc456oe6H",
"Gdto-B_IyFl",
"xft5zQwLfnz",
"3xHSlQZZiWk"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors established the first sample complexity result of the policy gradient method for a class of risk-sensitive linear-quadratic control problems. This class of control problems can also be viewed as a class of zero-sum two-player games.\n Pros:\n\nCompared to the seminal work (Fazel et. al ... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"nips_2021_NVAOPWZWYlv",
"mtpLykgWNxG",
"nips_2021_NVAOPWZWYlv",
"nips_2021_NVAOPWZWYlv",
"3xHSlQZZiWk",
"mtpLykgWNxG",
"jM46TPMmriV",
"xft5zQwLfnz",
"nips_2021_NVAOPWZWYlv",
"nips_2021_NVAOPWZWYlv"
] |
nips_2021__CmrI7UrmCl | G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators | Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl Gunter, Bo Li | accept | The reviews for the paper were mixed. Unfortunately, the reviewers still did not change the score even after the rebuttal. I am still supporting acceptance because of the novel gradient aggregation scheme in the paper.
In the following, I list some of the concerns that may not have been addressed adequately in the rebuttal; the authors need to take care of them.
1. There was a concern that the advantage of the proposed G-PATE comes from the random projection and gradient discretization instead of the PATE framework. The authors did some initial experiments on DP-GAN to show that the gradient discretization actually impacts the improvement in accuracy. However, these experiments are fairly preliminary and need a thorough discussion in the main text of the paper.
2. The quality of the generated images was not that great. There were some concerns about the efficacy of the algorithm at higher epsilons (>10) when compared to other approaches. Since single-digit epsilons seem to be mostly the industry standard as of now, it is important to understand the efficacy of the algorithm in this privacy regime. | train | [
"5SY_nlKOWJ",
"pP6w_MrjYhM",
"sLS9yUg5IQK",
"dEcAT2c32ir",
"ul-F6VTjArS",
"e_qeMWcOlw",
"AA8o-aEkjhY",
"izgoklFBAyE",
"E7PCKc0Xjg",
"saXS4WhzilU",
"lzzVPCYzJn6",
"NytVI7R4t65",
"vrHjBiij6W",
"qTxtaK4dXZu",
"odcLq0geUj-",
"rk0pLyjx5c3",
"zYWn0U0jTtV"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This study proposes a new method for generating differentially private synthetic datasets called G-PATE. \n\nThis approach combines GANs (Generative Adversarial Neural Networks) with the PATE framework (Private Aggregation of Teacher Ensembles). The proposal utilizes known techniques from the differential privacy ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021__CmrI7UrmCl",
"sLS9yUg5IQK",
"dEcAT2c32ir",
"ul-F6VTjArS",
"e_qeMWcOlw",
"E7PCKc0Xjg",
"lzzVPCYzJn6",
"vrHjBiij6W",
"saXS4WhzilU",
"NytVI7R4t65",
"zYWn0U0jTtV",
"5SY_nlKOWJ",
"rk0pLyjx5c3",
"odcLq0geUj-",
"nips_2021__CmrI7UrmCl",
"nips_2021__CmrI7UrmCl",
"nips_2021__CmrI7U... |
nips_2021_iQICgKcrGpE | On the Existence of The Adversarial Bayes Classifier | Adversarial robustness is a critical property in a variety of modern machine learning applications. While it has been the subject of several recent theoretical studies, many important questions related to adversarial robustness are still open. In this work, we study a fundamental question regarding Bayes optimality for adversarial robustness. We provide general sufficient conditions under which the existence of a Bayes optimal classifier can be guaranteed for adversarial robustness. Our results can provide a useful tool for a subsequent study of surrogate losses in adversarial robustness and their consistency properties.
| accept | This paper studies a variety of theoretical questions regarding the Bayes optimal classifier and associated robustness guarantees. This is an important topic, adding to the growing discussion around theoretical tools for robustness-related questions. The reviewers found the theoretical results to be substantial, using novel arguments in linear algebra and measure theory. The reviewers also believe that the results are interesting, well-motivated, and technically sound. Therefore, I recommend accepting this paper.
In the reviews, there were two main concerns that the authors should address for the final version of the paper: (1) certain relevant references were missing and should be discussed, and (2) there was a strong suggestion to improve the readability of the supplemental results and proofs in the paper. I agree that both of these should be addressed, and that writing a good theory paper can be challenging but is ultimately a good service to the community. Specifically, for (1), I think the authors can also be clearer in the paper about the relationship with prior work, focusing more on the theoretical comparison between the prior results in some of the references and the present results (e.g., a proper technical overview may be a good way to address both points and set up the exposition for the appendix and proofs). | val | [
"qTHkBsUZ-3o",
"7SznlSjY0SF",
"QvLrmEEg3zT",
"n25chU6E3GD",
"NiZb2dhATGq",
"T-UVmLWmriL",
"IIYZAXx2QJL",
"8g_WnwQGL7_",
"icGr8FWlsQ6",
"fv3itZqJl3O",
"BGjWrhhl-A",
"IzF7GOlC1g",
"cuG1PrcTDR"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We would like to point out that our result is aimed at understanding when the adversarial Bayes optimal classifier will exist. In general, such an adversarial Bayes optimal classifier may not lie in any common function class (this can happen in non-adversarial cases too), although as the function class gets richer ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
3,
4
] | [
"QvLrmEEg3zT",
"n25chU6E3GD",
"BGjWrhhl-A",
"NiZb2dhATGq",
"8g_WnwQGL7_",
"icGr8FWlsQ6",
"nips_2021_iQICgKcrGpE",
"fv3itZqJl3O",
"IIYZAXx2QJL",
"cuG1PrcTDR",
"IzF7GOlC1g",
"nips_2021_iQICgKcrGpE",
"nips_2021_iQICgKcrGpE"
] |
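For readers skimming the row above, the object under study can be written down compactly. The following is a standard formalization of the adversarial classification risk (the notation is ours, not quoted from the paper); an adversarial Bayes classifier is any measurable minimizer of this risk, and the paper's results give sufficient conditions for such a minimizer to exist.

```latex
R_\epsilon(f) \;=\; \mathbb{E}_{(x,y)\sim \mathcal{D}}
  \Big[ \sup_{\|x' - x\| \le \epsilon} \mathbf{1}\{ f(x') \neq y \} \Big],
\qquad
f_\epsilon^\ast \;\in\; \arg\min_{f\ \text{measurable}} R_\epsilon(f).
```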
nips_2021_gaftyBQ4Lu | Convex-Concave Min-Max Stackelberg Games | Min-max optimization problems (i.e., min-max games) have been attracting a great deal of attention because of their applicability to a wide range of machine learning problems. Although significant progress has been made recently, the literature to date has focused on games with independent strategy sets; little is known about solving games with dependent strategy sets, which can be characterized as min-max Stackelberg games. We introduce two first-order methods that solve a large class of convex-concave min-max Stackelberg games, and show that our methods converge in polynomial time. Min-max Stackelberg games were first studied by Wald, under the posthumous name of Wald’s maximin model, a variant of which is the main paradigm used in robust optimization, which means that our methods can likewise solve many convex robust optimization problems. We observe that the computation of competitive equilibria in Fisher markets also comprises a min-max Stackelberg game. Further, we demonstrate the efficacy and efficiency of our algorithms in practice by computing competitive equilibria in Fisher markets with varying utility structures. Our experiments suggest potential ways to extend our theoretical results, by demonstrating how different smoothness properties can affect the convergence rate of our algorithms.
| accept | This paper studies a class of min-max games where each player's actions are subject to coupled constraints, i.e., the actions avaible to one player depend on the action chosen by the other, and vice versa. The authors examine two sequential algorithms for solving the game - one based on best responses and another based on a nested version of gradient descent - and they derive a series of polynomial complexity guarantees for the algorithms under study.
The reviewers appreciated the contributions of the paper and were initially positive. However, during the discussion phase, it became clear that the paper is ignoring an extensive literature on generalized Nash equilibrium problems. This literature dates back at least to Debreu's seminal 1952 paper and it treats essentially the same problem as the authors: in particular, there is a series of asymptotic convergence results (for different classes of games) that should be discussed by the authors. More precisely, even though the literature on learning in generalized Nash equilibrium problems does not provide rates of convergence, the results provided therein prove the convergence of the actual sequence of play (something which does not seem possible with the authors' approach and techniques).
Another - more minor - concern had to do with the relevance of such models for the ML/AI community, but this was partially addressed by the authors.
Overall, the paper definitely has merits: to the best of the committee's knowledge, this is the first polynomial complexity result of its kind in the class of games with coupled constraints (which is perhaps not surprising since the literature on generalized Nash equilibrium problems otherwise focuses on the convergence of the actual sequence of play, not the "best" or "time-averaged" iterate). However, the lack of proper positioning is a major issue that has to be fixed for the camera-ready version; specifically, the committee has the following expectations:
1. The authors should provide adequate pointers to the literature on generalized Nash equilibrium problems, especially on early work by Facchinei and co-authors, as well as more recent contributions by Grammatico and co-authors - e.g., the recent preprint by Fabiani et al., "Local Stackelberg equilibrium seeking in generalized aggregative games" should be discussed in detail. [Certain references were already provided during the discussion phase, and the authors would be expected to expand on those]
2. The authors should present in detail the notion of generalized Nash equilibria in the paper (during the discussion phase, there seemed to be some confusion on this point by the authors, but it was eventually cleared up). In particular, the committee expects the authors to provide an in-depth comparison to the model of Fabiani et al. - not in regard to the _class of games_ considered, but as to the _notion of equilibrium_ under study. Put simply, now that the authors are aware of the literature on the problem, the committee would expect an in-depth and balanced treatment.
3. The algorithms studied by the authors should be incorporated in the main text - otherwise, the statements of the authors' theorems are impossible to parse.
Modulo the above, I am happy to recommend acceptance. | train | [
"X1VN8sPDInn",
"wh5qy7wfNO",
"Kqo-FFmqBUV",
"kfwca-OdXUw",
"1oaYxqHDBky",
"VbW7czC3v9i",
"q1bEBKnS9eJ",
"mghNO9JCk7H",
"Rf1baYXroT6",
"bztqMczk99"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your feedback and score increase, we will make sure to address your concerns!",
"This paper studies the minmax problem where the second player's (follower) action set depends on the action of the first player. Naturally, Stackelberg equilibrium is the concept to study here. The authors use a sub-g... | [
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"kfwca-OdXUw",
"nips_2021_gaftyBQ4Lu",
"nips_2021_gaftyBQ4Lu",
"mghNO9JCk7H",
"wh5qy7wfNO",
"bztqMczk99",
"Rf1baYXroT6",
"Kqo-FFmqBUV",
"nips_2021_gaftyBQ4Lu",
"nips_2021_gaftyBQ4Lu"
] |
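The difficulty flagged in the abstract and meta review above is that the follower's feasible set depends on the leader's action. A toy way to see what a first-order method must handle is the naive nested scheme below: projected gradient ascent for the follower and finite-difference descent on the value function for the leader. This is deliberately simplistic and is not the authors' algorithm (their methods come with polynomial convergence guarantees); the objective and the x-dependent constraint set are made-up examples.

```python
import numpy as np

def inner_max(x, f, steps=200, lr=0.05):
    """Follower: projected gradient ascent for max_y f(x, y) s.t. 0 <= y <= x,
    a toy feasible set that depends on the leader's action x."""
    y = 0.5 * x
    for _ in range(steps):
        grad_y = (f(x, y + 1e-5) - f(x, y - 1e-5)) / 2e-5  # numeric gradient
        y = np.clip(y + lr * grad_y, 0.0, x)               # project onto [0, x]
    return y

def nested_minmax(f, x0=1.5, outer_steps=100, lr=0.02):
    """Leader: finite-difference descent on V(x) = max_{y in C(x)} f(x, y).
    Finite differences sidestep the envelope-theorem subtleties that arise
    when the constraint set C(x) moves with x."""
    x = x0
    for _ in range(outer_steps):
        v_plus = f(x + 1e-4, inner_max(x + 1e-4, f))
        v_minus = f(x - 1e-4, inner_max(x - 1e-4, f))
        x = np.clip(x - lr * (v_plus - v_minus) / 2e-4, 0.0, 2.0)
    return x, inner_max(x, f)

f = lambda x, y: (x - 1.0) ** 2 + x * y - y ** 2  # convex in x, concave in y
x_star, y_star = nested_minmax(f)
print(round(x_star, 3), round(y_star, 3))  # analytic solution: x = 0.8, y = 0.4
```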
nips_2021_kbzx0uNZdS | Misspecified Gaussian Process Bandit Optimization | Ilija Bogunovic, Andreas Krause | accept | Dear authors,
Following the discussion period, there is still some disagreement about this paper, so I took a look at the paper myself.
After reading the paper, and taking into account the full discussion, I consider the paper worth publication, assuming the authors incorporate all the clarification points in the final version (which I believe can be done with little effort).
Hence I recommend acceptance. | val | [
"aPSAyjcbBsy",
"kRJrK31NcJ",
"vUAJQ9kqkT2",
"WyxhjI9MsSm",
"pv5djtlLkt2",
"OFh-eqoyQCx",
"34tmYPiU3RE",
"Co6R5sIyFHt",
"j2zgRSvHrt",
"BnKPhIh6y3e"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for the detailed discussion and comments.\n\nWe agree with the reviewer that our algorithm is also based on the enlarged confidence bounds as the one in [5]. We are unaware of other BO algorithms that make use of this idea (as also noted by the reviewer), but we also note that ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"kRJrK31NcJ",
"vUAJQ9kqkT2",
"WyxhjI9MsSm",
"OFh-eqoyQCx",
"BnKPhIh6y3e",
"j2zgRSvHrt",
"Co6R5sIyFHt",
"nips_2021_kbzx0uNZdS",
"nips_2021_kbzx0uNZdS",
"nips_2021_kbzx0uNZdS"
] |
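A minimal sketch of the idea the discussion above revolves around: run GP-UCB, but enlarge the confidence width by a misspecification budget so the bound stays valid when the true function is only approximately in the assumed model class. The inflation rule below (an extra eps_mis * sqrt(t) * sigma term) is illustrative only; the paper's exact enlargement and constants differ, and the kernel, grid, and noise settings here are assumptions.

```python
import numpy as np

def rbf(a, b, ell=0.2):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(X, y, Xq, noise=0.1):
    """Exact GP regression posterior mean and std at 1-D query points Xq."""
    K = rbf(X, X) + noise ** 2 * np.eye(len(X))
    Ks = rbf(X, Xq)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def misspecified_gp_ucb(f, T=30, beta=2.0, eps_mis=0.05, seed=0):
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 200)
    X = [rng.uniform()]
    y = [f(X[0]) + 0.1 * rng.normal()]
    for t in range(1, T):
        mu, sd = gp_posterior(np.array(X), np.array(y), grid)
        # Enlarged confidence bound: the usual beta*sd, plus a slack that
        # grows with t to absorb model misspecification.
        ucb = mu + beta * sd + eps_mis * np.sqrt(t) * sd
        x_next = float(grid[int(ucb.argmax())])
        X.append(x_next)
        y.append(f(x_next) + 0.1 * rng.normal())
    return X, y

X, y = misspecified_gp_ucb(lambda x: np.sin(6.0 * x))
print(round(X[-1], 3))  # should be near the maximizer of sin(6x) on [0, 1]
```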
nips_2021_-646c8bpgPl | Visual Adversarial Imitation Learning using Variational Models | Reward function specification, which requires considerable human effort and iteration, remains a major impediment for learning behaviors through deep reinforcement learning. In contrast, providing visual demonstrations of desired behaviors presents an easier and more natural way to teach agents. We consider a setting where an agent is provided a fixed dataset of visual demonstrations illustrating how to perform a task, and must learn to solve the task using the provided demonstrations and unsupervised environment interactions. This setting presents a number of challenges including representation learning for visual observations, sample complexity due to high dimensional spaces, and learning instability due to the lack of a fixed reward or learning signal. Towards addressing these challenges, we develop a variational model-based adversarial imitation learning (V-MAIL) algorithm. The model-based approach provides a strong signal for representation learning, enables sample efficiency, and improves the stability of adversarial training by enabling on-policy learning. Through experiments involving several vision-based locomotion and manipulation tasks, we find that V-MAIL learns successful visuomotor policies in a sample-efficient manner, has better stability compared to prior work, and also achieves higher asymptotic performance. We further find that by transferring the learned models, V-MAIL can learn new tasks from visual demonstrations without any additional environment interactions. All results including videos can be found online at https://sites.google.com/view/variational-mail
| accept | This paper presents V-MAIL, a model-based algorithm for visual Adversarial Imitation Learning. V-MAIL learns both a variational observation model and a variational forward dynamics model. The algorithm is motivated by theoretical performance bounds and is proposed as a practical solution to the resulting formulation. All the reviewers evaluated the paper positively and voted for acceptance. | train | [
"jY3wZNkfj4d",
"1mK-0wBUf6F",
"yHfZy9BkeEW",
"p6rrnhN1XPc",
"PHQE2vRXwOM",
"eIfJNUEDiKq",
"jwKbZWxcbM",
"tFEa3ifLiIn",
"IFG2XE08yJe",
"o3nzZDXeSil",
"GPbSVxq_v_",
"N5TsOVmmOBT",
"IUEMjA3BPl9",
"yoDNMNsc1Co",
"IY6nzBXOLm",
"PdnMFhEmj9X",
"6iNy9FOQyIH",
"lwh8sSyVVj7",
"t94oQlPoi0B"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"au... | [
"Motivated by several challenges that face the use of adversarial imitation learning (AIL) in practice, the authors propose V-MAIL—a model-based algorithm for visual AIL. V-MAIL addresses the challenges of sample inefficiency and difficult numerical optimization by learning and leveraging both a variational observa... | [
7,
-1,
6,
-1,
-1,
-1,
-1,
6,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"nips_2021_-646c8bpgPl",
"t94oQlPoi0B",
"nips_2021_-646c8bpgPl",
"yHfZy9BkeEW",
"yHfZy9BkeEW",
"IFG2XE08yJe",
"6iNy9FOQyIH",
"nips_2021_-646c8bpgPl",
"tFEa3ifLiIn",
"nips_2021_-646c8bpgPl",
"IY6nzBXOLm",
"tFEa3ifLiIn",
"yHfZy9BkeEW",
"nips_2021_-646c8bpgPl",
"lwh8sSyVVj7",
"6iNy9FOQyIH... |
nips_2021_FEhntTXAeHN | Object-Aware Regularization for Addressing Causal Confusion in Imitation Learning | Behavioral cloning has proven to be effective for learning sequential decision-making policies from expert demonstrations. However, behavioral cloning often suffers from the causal confusion problem where a policy relies on the noticeable effect of expert actions due to the strong correlation but not the cause we desire. This paper presents Object-aware REgularizatiOn (OREO), a simple technique that regularizes an imitation policy in an object-aware manner. Our main idea is to encourage a policy to uniformly attend to all semantic objects, in order to prevent the policy from exploiting nuisance variables strongly correlated with expert actions. To this end, we introduce a two-stage approach: (a) we extract semantic objects from images by utilizing discrete codes from a vector-quantized variational autoencoder, and (b) we randomly drop the units that share the same discrete code together, i.e., masking out semantic objects. Our experiments demonstrate that OREO significantly improves the performance of behavioral cloning, outperforming various other regularization and causality-based methods on a variety of Atari environments and a self-driving CARLA environment. We also show that our method even outperforms inverse reinforcement learning methods trained with a considerable amount of environment interaction.
| accept | This submission targets the problem of "causal confusion" in imitation learning, where spurious correlates in expert data hurt environmental performance. The approach proposed is simple: train a vector-quantized VAE representation, and apply dropout over it during imitation learning. The motivation is: VQ-VAEs can learn to disentangle objects in images, and this can help coherently omit semantic information.
The key strengths of this submission are the simplicity of the proposed approach, the breadth of experimental evaluation, and the strong empirical results. In particular, the evaluation uses 27 Atari environments, aside from new rebuttal experiments in simple CARLA settings. These may become valuable contributions to studying this problem.
This submission is very much borderline, and I only lightly lean towards acceptance. My main reasons for hesitation are:
- As reviewers have also commented, the paper relies on learning not just good-quality discrete codes, but disentangled *objects* through VQ-VAEs (or at least, it is motivated in this way). While this works well enough in Atari settings such as those in the submission, I'm not convinced that it will in more complex settings.
- The CARLA settings tested in new rebuttal experiments ("straight", and "one turn") are simple, and not the standard town driving environments commonly used in the imitation learning literature.
- Dropout has previously been proposed as an approach to learn robust imitation policies that can tackle these problems: see [1]. This means that the proposed approach is effectively a combination of a VQ-VAE with an existing technique.
- Aside from new somewhat preliminary rebuttal experiments on 2 Atari environments in the "stacked inputs" setting, all experiments are performed in the more contrived setting in which past actions are overlaid on the input images. Other works in this literature have now moved away from this assumption, such as [1, 2].
[1] "Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst."
[2] "Fighting Copycat Agents in Behavioral Cloning from Observation Histories" | train | [
"KhG0EjCJwL_",
"uARqtKxieZH",
"KqH38qaJPJO",
"5MQ62hYBqVz",
"xLL_ZLTFOzX",
"OFiukBzap2P",
"Ruyy9qltE4U",
"40WZiDxN0G0",
"bApGFrlZ45O",
"b2aLww06Tk",
"MQGg9RhRQv",
"CHoaMneFgs5",
"8pyWMnUcW32",
"dgtzTrQ0rrV",
"kRL9sQWJEBq",
"BNlR-jzq3gX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are happy to hear that our rebuttal addressed your concerns well.\n\nThank you again for the valuable suggestions and comments, which we believe strengthen our paper.\n\nIf you have any remaining suggestions or concerns, please let us know!\n\nBest, Authors.\n",
" Thank you for the additional experimental re... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"uARqtKxieZH",
"xLL_ZLTFOzX",
"nips_2021_FEhntTXAeHN",
"OFiukBzap2P",
"Ruyy9qltE4U",
"MQGg9RhRQv",
"b2aLww06Tk",
"nips_2021_FEhntTXAeHN",
"dgtzTrQ0rrV",
"KqH38qaJPJO",
"BNlR-jzq3gX",
"kRL9sQWJEBq",
"dgtzTrQ0rrV",
"nips_2021_FEhntTXAeHN",
"nips_2021_FEhntTXAeHN",
"nips_2021_FEhntTXAeHN"... |
nips_2021_hNMOSUxE8o6 | Reliable and Trustworthy Machine Learning for Health Using Dataset Shift Detection | Unpredictable ML model behavior on unseen data, especially in the health domain, raises serious concerns about its safety as repercussions for mistakes can be fatal. In this paper, we explore the feasibility of using state-of-the-art out-of-distribution detectors for reliable and trustworthy diagnostic predictions. We select publicly available deep learning models relating to various health conditions (e.g., skin cancer, lung sound, and Parkinson's disease) using various input data types (e.g., image, audio, and motion data). We demonstrate that these models show unreasonable predictions on out-of-distribution datasets. We show that Mahalanobis distance- and Gram matrices-based out-of-distribution detection methods are able to detect out-of-distribution data with high accuracy for the health models that operate on different modalities. We then translate the out-of-distribution score into a human interpretable \textsc{confidence score} to investigate its effect on the users' interaction with health ML applications. Our user study shows that the \textsc{confidence score} helped the participants only trust the results with a high score to make a medical decision and disregard results with a low score. Through this work, we demonstrate that dataset shift is a critical piece of information for high-stake ML applications, such as medical diagnosis and healthcare, to provide reliable and trustworthy predictions to the users.
| accept | Thanks to the authors for an interesting and thorough study of OOD methods in the healthcare context.
The reviewers all agreed that this work was well done and interesting, but the main concern is relevance for the NeurIPS community. While I agree with reviewer Ku9s that the main interest of this community is in ML methodology and theory, I do believe it is important to investigate real use cases of methods --- two of which were published recently in NeurIPS, and one in ICML. MLHC, CHIL, JAMIA, and JMIR may also be appropriate venues, but the NeurIPS/ICML community developed these methods and is an important audience for this kind of evaluation. I tend to agree with the authors that their work falls squarely within the "Social Aspects of Machine Learning" (and "Applications"), so I am not as concerned about the relevance of this work for the ML community.
Beyond the relevance concern, all reviewers were quite positive about the work itself. Reviewer hRAn detailed some concerns about definitions (OOD, confidence score), which should be clarified in the main text. Reviewer QZBK also described an improvement to the user study, which should be incorporated into the discussion.
| train | [
"6NAHsyO_BB",
"9DUlXGCOP42",
"ZRqX_gTy3UB",
"2zqifoHv_Z",
"JPzGKu26bN6",
"M1zSdG37Q0e",
"wFixd0GWqZ",
"APqNdnoxu5E"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work applies known out-of-distribution (OOD) detection techniques on various medical datasets and confirms that the OOD detection techniques indeed works. Specifically, the medical prediction models tend to make more incorrect predictions for samples that OOD detection algorithms deem OOD. Pros\n- This paper... | [
4,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
5,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"nips_2021_hNMOSUxE8o6",
"wFixd0GWqZ",
"APqNdnoxu5E",
"M1zSdG37Q0e",
"6NAHsyO_BB",
"nips_2021_hNMOSUxE8o6",
"nips_2021_hNMOSUxE8o6",
"nips_2021_hNMOSUxE8o6"
] |
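Since the detection results in the row above rest largely on the Mahalanobis-distance detector, here is a self-contained sketch of that score: fit class-conditional Gaussian means with a shared covariance on in-distribution features, and score a test feature by its minimum squared Mahalanobis distance to any class mean (higher means more out-of-distribution). The feature dimensions and the covariance regularizer are placeholders, not the paper's settings.

```python
import numpy as np

def fit_mahalanobis(feats, labels):
    """Class-conditional Gaussian means with a shared (tied) covariance."""
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centered = feats - means[np.searchsorted(classes, labels)]
    cov = centered.T @ centered / len(feats)
    prec = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized
    return means, prec

def mahalanobis_ood_score(x, means, prec):
    """Minimum squared Mahalanobis distance to any class mean."""
    diffs = x[None, :] - means
    return np.einsum('ci,ij,cj->c', diffs, prec, diffs).min()

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 8))
labels = rng.integers(0, 3, size=500)
means, prec = fit_mahalanobis(feats, labels)
print(mahalanobis_ood_score(rng.normal(size=8), means, prec))        # small
print(mahalanobis_ood_score(rng.normal(size=8) + 5.0, means, prec))  # large
```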
nips_2021_fJWmx5i5lOv | Multiclass Boosting and the Cost of Weak Learning | Nataly Brukhim, Elad Hazan, Shay Moran, Indraneel Mukherjee, Robert E. Schapire | accept | This paper studies the problem of multiclass boosting and proves a few different results: 1) that the number of samples required by a weak learner is exponentially larger than the number of samples needed by the booster; 2) that the weak learner's accuracy parameter must be smaller than an inverse polynomial; and 3) a trade-off between the number of oracle calls and the resources required of the weak learner. This is a nice set of interesting and even surprising results about multiclass boosting and would add to the conference. | train | [
"7-lVzD6-9F",
"wUD6lLxqdOJ",
"g32OZ8Wj_Wi",
"2rlLrN5208W",
"Qq2h87IKWvw",
"jdvE19spT9g",
"2b2Dw6hEk1n",
"7scEglE-vt5"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the great questions. \nTo answer the first question, our lower bound result shows that there exists a hypothesis class H, and a weak-learner satisfying the WL condition, such that for *any* possible boosting algorithm B, when used in this setup\nit will have to suffer a cost that is polynomial in k (th... | [
-1,
7,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
2,
-1,
-1,
-1,
-1,
4,
3
] | [
"g32OZ8Wj_Wi",
"nips_2021_fJWmx5i5lOv",
"2rlLrN5208W",
"7scEglE-vt5",
"wUD6lLxqdOJ",
"2b2Dw6hEk1n",
"nips_2021_fJWmx5i5lOv",
"nips_2021_fJWmx5i5lOv"
] |
nips_2021_jhd62iKzRuj | Partition-Based Formulations for Mixed-Integer Optimization of Trained ReLU Neural Networks | This paper introduces a class of mixed-integer formulations for trained ReLU neural networks. The approach balances model size and tightness by partitioning node inputs into a number of groups and forming the convex hull over the partitions via disjunctive programming. At one extreme, one partition per input recovers the convex hull of a node, i.e., the tightest possible formulation for each node. For fewer partitions, we develop smaller relaxations that approximate the convex hull, and show that they outperform existing formulations. Specifically, we propose strategies for partitioning variables based on theoretical motivations and validate these strategies using extensive computational experiments. Furthermore, the proposed scheme complements known algorithmic approaches, e.g., optimization-based bound tightening captures dependencies within a partition.
| accept | In this paper, the authors introduce novel mixed-integer programming formulations to optimize the output of a trained ReLU network. This is an important problem that arises, e.g., in verifying robustness of deep neural networks and adversarial training. The approach is based on partitioning node inputs into a number of groups and forming the convex hull over the partitions via disjunctive programming. In particular, this approach recovers the convex hull and Big-M formulations by choosing different partition sizes. The authors show that the proposed formulation provides computational advantages with respect to the baselines in neural network verification. The reviewers all agreed that the paper introduces an interesting and effective strategy, expressed only minor concerns and suggestions, and recommended acceptance. Please take into account the updated reviews when preparing the final version to accommodate the requested changes. Thank you for your submission to NeurIPS. | train | [
"ER1rX9S1uLB",
"vEVT2LqJ1si",
"hsVDWcvu9sg",
"srEc90dBcAD",
"bwKxfwF6qFX",
"CaPmmiIaO_",
"DlOZ5k5uzvC",
"lELumYQ4BjU",
"FX0TVriUZaO",
"h7gUMBJi13b",
"iJPXs0LCVIG",
"sfYC4N7u3W"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a mixed-integer programming formulation for trained ReLU neural networks, which will enable people to solve many related problems like optimal adversary, verification and minimally distorted adversary. The main advantage of this formulation comes from partitioning node inputs into a number of g... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
4
] | [
"nips_2021_jhd62iKzRuj",
"lELumYQ4BjU",
"nips_2021_jhd62iKzRuj",
"bwKxfwF6qFX",
"CaPmmiIaO_",
"FX0TVriUZaO",
"hsVDWcvu9sg",
"ER1rX9S1uLB",
"sfYC4N7u3W",
"iJPXs0LCVIG",
"nips_2021_jhd62iKzRuj",
"nips_2021_jhd62iKzRuj"
] |
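For context on what the partition-based formulation in the row above improves upon, the baseline big-M mixed-integer encoding of a single ReLU unit y = max(0, w^T x + b) is the following. This is a standard textbook formulation, stated under the assumption that valid pre-activation bounds L <= w^T x + b <= U with L < 0 < U are available; the paper's tighter disjunctive formulation over input partitions is not reproduced here.

```latex
y \ge w^\top x + b, \qquad y \ge 0, \qquad
y \le w^\top x + b - L\,(1 - z), \qquad y \le U z, \qquad z \in \{0, 1\}.
% z = 1 forces y = w^T x + b (active unit); z = 0 forces y = 0 (inactive unit).
```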
nips_2021_2lZdja9xYzh | Hyperparameter Optimization Is Deceiving Us, and How to Stop It | Recent empirical work shows that inconsistent results based on choice of hyperparameter optimization (HPO) configuration are a widespread problem in ML research. When comparing two algorithms J and K, searching one subspace can yield the conclusion that J outperforms K, whereas searching another can entail the opposite. In short, the way we choose hyperparameters can deceive us. We provide a theoretical complement to this prior work, arguing that, to avoid such deception, the process of drawing conclusions from HPO should be made more rigorous. We call this process epistemic hyperparameter optimization (EHPO), and put forth a logical framework to capture its semantics and how it can lead to inconsistent conclusions about performance. Our framework enables us to prove EHPO methods that are guaranteed to be defended against deception, given a bounded compute time budget t. We demonstrate our framework's utility by proving and empirically validating a defended variant of random search.
| accept | This paper studies an important issue in machine learning research, namely the possibility of drawing wrong conclusions due to specific hyperparameter choices. While it is common practice to optimize hyperparameters, via random search or more advanced global optimization algorithms, a growing body of work suggests that the search space can be the determining factor. The authors argue for a logical framework to assess whether deception occurs (that is, whether contradictory conclusions can be drawn). The authors make theoretical contributions acknowledged by all reviewers. They then apply the framework to deception in the context of random search. However, while the topic is important and the approach novel (or at least unusual for the NeurIPS community), it is unclear what its practical implications would be, how researchers could make use of the framework in more complex settings, and how it would help them draw more robust conclusions. | train | [
"9vVUdg3pKAa",
"6fl49rO4GEQ",
"v-56pK-Ipnu",
"hBuBk9Up8Eq",
"j7MbvCVkBRt",
"NQIl75DG4Xz",
"STmYJC7SqQq",
"bZDyvP5hVj",
"SKHByrZQxkc",
"faNin14IqAM",
"dGnhOtxsS8G",
"Osx9V2UuP0r",
"go82CJEdBvo",
"zyexdL6Zi9_",
"BXMK5rf-4g",
"VDYEzcs03lw",
"VSMvuLBLUQA",
"0PkQFr_9Emf",
"2MmKo5mMW_B... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
"Motivated by recent work that empirically show how different conclusions can be made depending on the hyperparameter optimization (HPO) protocol, this paper aims to study the act of drawing conclusions from HPO in a rigorous manner. The authors use modal logic to formalize the idea of deception (where a wrong conc... | [
4,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_2lZdja9xYzh",
"v-56pK-Ipnu",
"j7MbvCVkBRt",
"nips_2021_2lZdja9xYzh",
"hBuBk9Up8Eq",
"STmYJC7SqQq",
"dGnhOtxsS8G",
"BXMK5rf-4g",
"go82CJEdBvo",
"VDYEzcs03lw",
"Osx9V2UuP0r",
"zyexdL6Zi9_",
"aX-X2kUSeZN",
"9vVUdg3pKAa",
"2MmKo5mMW_B",
"VSMvuLBLUQA",
"hBuBk9Up8Eq",
"nips_20... |
nips_2021_Xs-vglI4EBi | On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning | Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman Ozdaglar | accept | All reviews find that this paper is solid, making a reasonable contribution to the empirical as well as the theoretical state of the art of MAML, so I recommend accepting it. Please take into account the points raised in the reviews when preparing the final version. | train | [
"qCBaRXVecCJ",
"Td4ZUoBLgH",
"-Bm2Wiu9PmN",
"R2fPRzD7gIr",
"gf9fo_2RqI",
"AhdfiYAKODy",
"hpOsWcKC00y",
"2ncUeSgTxfq",
"6_swLm-qcuw",
"IJsTZ-zu8KD"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper studies the Model-Agnostic Meta-Reinforcement-Learning (MAMRL) from a theoretical perspective. In particular, it proposes a modified MAMRL objective along with its unbiased gradient estimator, establishing a new Meta-RL algorithm called SR-MRL. It theoreticallly proves the convergence of SR-MRL and expe... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"nips_2021_Xs-vglI4EBi",
"AhdfiYAKODy",
"hpOsWcKC00y",
"IJsTZ-zu8KD",
"6_swLm-qcuw",
"qCBaRXVecCJ",
"2ncUeSgTxfq",
"nips_2021_Xs-vglI4EBi",
"nips_2021_Xs-vglI4EBi",
"nips_2021_Xs-vglI4EBi"
] |
nips_2021_fG01Z_unHC | 3D Pose Transfer with Correspondence Learning and Mesh Refinement | 3D pose transfer is one of the most challenging 3D generation tasks. It aims to transfer the pose of a source mesh to a target mesh and keep the identity (e.g., body shape) of the target mesh. Some previous works require key point annotations to build reliable correspondence between the source and target meshes, while other methods do not consider any shape correspondence between sources and targets, which leads to limited generation quality. In this work, we propose a correspondence-refinement network to help the 3D pose transfer for both human and animal meshes. The correspondence between source and target meshes is first established by solving an optimal transport problem. Then, we warp the source mesh according to the dense correspondence and obtain a coarse warped mesh. The warped mesh is then further refined with our proposed Elastic Instance Normalization, which is a conditional normalization layer that can help to generate high-quality meshes. Extensive experimental results show that the proposed architecture can effectively transfer the poses from source to target meshes and produce better results, with more satisfying visual quality, than state-of-the-art methods. | train | [
| accept | The reviewers identified a number of strengths and weaknesses in this submission. All scores are in the borderline range, but ultimately all reviewers recommend that the work should be accepted. I see no reason to overrule the reviewers in this respect and therefore recommend acceptance. | train | [
"-KXpXqd9cpB",
"5uRsQW2zd3d",
"HZEUeNIvdul",
"qHBNmTh5dHh",
"H5DOTWfaxu",
"sZ53TNplIxh",
"4bVxau3bt2",
"ftzTtlupNXt",
"iiQp4Kl2bWc",
"3KX6B2wZGhG",
"HrK22sIQCs-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for the rebuttal. It addressed all of my concerns. \n\nI will keep my rating. ",
" Thanks for the response from the authors. The ablation study and the newly reported inference time have resolved my previous concerns. Therefore, I will keep my original rating.",
" The rebuttal resolves my concer... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"H5DOTWfaxu",
"qHBNmTh5dHh",
"sZ53TNplIxh",
"HrK22sIQCs-",
"3KX6B2wZGhG",
"iiQp4Kl2bWc",
"ftzTtlupNXt",
"nips_2021_fG01Z_unHC",
"nips_2021_fG01Z_unHC",
"nips_2021_fG01Z_unHC",
"nips_2021_fG01Z_unHC"
] |
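The abstract above establishes source-target correspondence by solving an optimal transport problem; below is a minimal entropic-OT (Sinkhorn) sketch of that step, followed by a coarse barycentric warp. The uniform marginals, Euclidean cost, and regularization strength are our assumptions, and the paper's learned refinement network is omitted entirely.

```python
import numpy as np

def sinkhorn(cost, eps=0.5, iters=200):
    """Entropic optimal transport between uniform marginals; returns the plan."""
    n, m = cost.shape
    a, b = np.ones(n) / n, np.ones(m) / m
    K = np.exp(-cost / eps)
    v = np.ones(m) / m
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def warp_by_correspondence(src_verts, tgt_verts):
    """Dense soft correspondence from pairwise distances, then a barycentric
    warp: each target vertex maps to the plan-weighted mean of source
    vertices. This is only the coarse warp, before any refinement stage."""
    cost = np.linalg.norm(tgt_verts[:, None] - src_verts[None, :], axis=-1)
    plan = sinkhorn(cost)
    plan = plan / plan.sum(axis=1, keepdims=True)  # row-stochastic weights
    return plan @ src_verts

rng = np.random.default_rng(0)
src, tgt = rng.normal(size=(40, 3)), rng.normal(size=(40, 3))
print(warp_by_correspondence(src, tgt).shape)  # (40, 3)
```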
nips_2021_QT9ulkiN-LX | Framing RNN as a kernel method: A neural ODE approach | Building on the interpretation of a recurrent neural network (RNN) as a continuous-time neural differential equation, we show, under appropriate conditions, that the solution of a RNN can be viewed as a linear function of a specific feature set of the input sequence, known as the signature. This connection allows us to frame a RNN as a kernel method in a suitable reproducing kernel Hilbert space. As a consequence, we obtain theoretical guarantees on generalization and stability for a large class of recurrent networks. Our results are illustrated on simulated datasets.
| accept | The paper establishes a rigorous connection between (continuous-time) RNNs and kernel methods, bringing a novel tool to the theoretical analysis of RNNs driven by a finite-duration input. The authors also provide generalization properties and design a penalty term which improves stability, as seen in numerical experiments. For the final version, please make sure to take into account all discussions from the rebuttal process. | train | [
"HgzdMbrbR9u",
"qRY3gtEjMc",
"4_0EWsWi3fR",
"6eR4iHYR4B4",
"2Lt46hgdrha",
"GKEtKhiuyY",
"TXTCv423Zy8",
"r1ofD0tFF-",
"AxWphGQAn21"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors consider a continuous data/neural ode model for recurrent neural networks. They prove convergence of the discrete model to the continuum model as the length of input sequences increases (assuming that sequences are obtained by finer and finer discretizations of the same process). They then show that th... | [
7,
7,
-1,
-1,
-1,
-1,
-1,
8,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_QT9ulkiN-LX",
"nips_2021_QT9ulkiN-LX",
"qRY3gtEjMc",
"HgzdMbrbR9u",
"AxWphGQAn21",
"r1ofD0tFF-",
"nips_2021_QT9ulkiN-LX",
"nips_2021_QT9ulkiN-LX",
"nips_2021_QT9ulkiN-LX"
] |
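Since the row above hinges on the signature feature set, here is a self-contained depth-2 signature of a sampled path, computed exactly for its piecewise-linear interpolation. This is our own illustrative code; dedicated libraries such as iisignature handle higher truncation depths.

```python
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of a (T, d) sampled path, exact for the
    piecewise-linear interpolation:
      level 1: S^i     = total increment in coordinate i,
      level 2: S^{i,j} = iterated integral of (X_s - X_0)^i dX^j."""
    inc = np.diff(path, axis=0)            # per-segment increments, (T-1, d)
    s1 = inc.sum(axis=0)
    disp = path[:-1] - path[0]             # displacement at segment starts
    s2 = disp.T @ inc + 0.5 * inc.T @ inc  # exact over each linear segment
    return s1, s2

t = np.linspace(0.0, 1.0, 200)
circle = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)
s1, s2 = signature_depth2(circle)
print(np.round(s1, 6))                # ~[0, 0]: a closed loop has no net increment
print(round(s2[0, 1] - s2[1, 0], 3))  # ~6.283: twice the enclosed signed area
```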
nips_2021_uOxe0CHI5dq | Contextual Similarity Aggregation with Self-attention for Visual Re-ranking | In content-based image retrieval, the first-round retrieval result by simple visual feature comparison may be unsatisfactory, which can be refined by visual re-ranking techniques. In image retrieval, it is observed that the contextual similarity among the top-ranked images is an important clue to distinguish the semantic relevance. Inspired by this observation, in this paper, we propose a visual re-ranking method by contextual similarity aggregation with self-attention. In our approach, for each image in the top-K ranking list, we represent it into an affinity feature vector by comparing it with a set of anchor images. Then, the affinity features of the top-K images are refined by aggregating the contextual information with a transformer encoder. Finally, the affinity features are used to recalculate the similarity scores between the query and the top-K images for re-ranking of the latter. To further improve the robustness of our re-ranking model and enhance the performance of our method, a new data augmentation scheme is designed. Since our re-ranking model is not directly involved with the visual feature used in the initial retrieval, it is ready to be applied to retrieval result lists obtained from various retrieval algorithms. We conduct comprehensive experiments on four benchmark datasets to demonstrate the generality and effectiveness of our proposed visual re-ranking method.
| accept | The paper introduces a new re-ranking method for image retrieval. It proposes to learn an affinity vector for each of the top-ranked candidates w.r.t a set of anchor images, using a transformer encoder. These affinity vectors are then used to re-estimate the similarity scores between the top-ranked candidates and the query image. The reviewers found the work to be technically sound, have novelty, and produce good results. There were some concerns about the fairness of the evaluations, and many other questions of detail, which the authors have addressed effectively in their rebuttal. | train | [
"7USA1UZfJWV",
"9evTB58DgRd",
"HoyoTMbtch0",
"4wn2BnI55xu",
"xrQEwrbfMAH",
"V4Pax7GqvhQ",
"t_vzXKmlqCz",
"aOnevVzTtR1",
"9-0TexqGK5I",
"rcwqblgLizp",
"jQzRcswJCBi",
"WJx2UBVyEs",
"lS4MC2G8zh",
"1gTe94iEO6a"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for your valuable support!",
" Thanks for the detailed response. I found your answer here and to the questions posed by other reviewers satisfying, and I'm upgrading my recommendation from 6 to 7, good paper, accept.",
"Paper presents a method to perform top-k reranking on a (image) retrieval system. T... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
8
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"9evTB58DgRd",
"jQzRcswJCBi",
"nips_2021_uOxe0CHI5dq",
"xrQEwrbfMAH",
"rcwqblgLizp",
"aOnevVzTtR1",
"9-0TexqGK5I",
"nips_2021_uOxe0CHI5dq",
"nips_2021_uOxe0CHI5dq",
"1gTe94iEO6a",
"HoyoTMbtch0",
"9-0TexqGK5I",
"aOnevVzTtR1",
"nips_2021_uOxe0CHI5dq"
] |
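A compact sketch of the re-ranking signal at the heart of the row above: the query and each top-K candidate are represented by cosine similarities to a set of anchor images, and candidates are re-scored by the similarity between affinity vectors. The transformer-based refinement of the affinity features is omitted, and all shapes below are hypothetical.

```python
import numpy as np

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def affinity_rerank(query_feat, topk_feats, anchor_feats):
    """Each image -> affinity vector of cosine similarities to the anchors;
    the top-K list is reordered by cosine similarity between candidate and
    query affinity vectors (contextual similarity, not raw features)."""
    anchors = l2norm(anchor_feats)
    q_aff = l2norm(l2norm(query_feat[None, :]) @ anchors.T)  # (1, A)
    c_aff = l2norm(l2norm(topk_feats) @ anchors.T)           # (K, A)
    scores = (c_aff @ q_aff.T).ravel()
    return np.argsort(-scores), scores

rng = np.random.default_rng(0)
order, scores = affinity_rerank(rng.normal(size=64),        # query feature
                                rng.normal(size=(10, 64)),  # top-10 candidates
                                rng.normal(size=(20, 64)))  # anchor set
print(order)  # new ranking of the 10 candidates
```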
nips_2021_jBQaRXpEgO | Can Information Flows Suggest Targets for Interventions in Neural Circuits? | Motivated by neuroscientific and clinical applications, we empirically examine whether observational measures of information flow can suggest interventions. We do so by performing experiments on artificial neural networks in the context of fairness in machine learning, where the goal is to induce fairness in the system through interventions. Using our recently developed M-information flow framework, we measure the flow of information about the true label (responsible for accuracy, and hence desirable), and separately, the flow of information about a protected attribute (responsible for bias, and hence undesirable) on the edges of a trained neural network. We then compare the flow magnitudes against the effect of intervening on those edges by pruning. We show that pruning edges that carry larger information flows about the protected attribute reduces bias at the output to a greater extent. This demonstrates that M-information flow can meaningfully suggest targets for interventions, answering the title's question in the affirmative. We also evaluate bias-accuracy tradeoffs for different intervention strategies, to analyze how one might use estimates of desirable and undesirable information flows (here, accuracy and bias flows) to inform interventions that preserve the former while reducing the latter.
| accept | The paper considers how to use information flows to "debug" a neural net model toward desired fairness objectives. The idea seems novel and connects disparate fields in an interesting way. The reviewers appreciated the paper and agree it should be accepted, although their ratings were mostly borderline. The reviewers brought up many important concerns, to which the authors for the most part appropriately responded. The authors are expected to address the points raised by the reviewers in the final version, as outlined in their response, and to incorporate additional discussion or results from their responses into the paper in an appropriate manner. | train | [
"1lNGLx4YZxx",
"W-HYpmpA7V",
"hPrhqfLNNFR",
"TJ5K2T3Zkt6",
"a5sezNJVGsY",
"kYJvHXLAmbr",
"8tfP8roxL77",
"JMLOJPpcTxJ",
"c1gkCisbpXL",
"2yogCHHZ-gg"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the possibility of using the measures of information flows to intervene with ANN for reducing bias while improving/maintaining accuracy. It adopts the M-information flow framework proposed by Venkaatesh et al., and proposes a quantitative notion of information flow and a method for estimating suc... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_jBQaRXpEgO",
"8tfP8roxL77",
"2yogCHHZ-gg",
"nips_2021_jBQaRXpEgO",
"1lNGLx4YZxx",
"c1gkCisbpXL",
"1lNGLx4YZxx",
"c1gkCisbpXL",
"nips_2021_jBQaRXpEgO",
"nips_2021_jBQaRXpEgO"
] |
nips_2021_ebQXflQre5a | AutoBalance: Optimized Loss Functions for Imbalanced Data | Imbalanced datasets are commonplace in modern machine learning problems. The presence of under-represented classes or groups with sensitive attributes results in concerns about generalization and fairness. Such concerns are further exacerbated by the fact that large capacity deep nets can perfectly fit the training data and appear to achieve perfect accuracy and fairness during training, but perform poorly during test. To address these challenges, we propose AutoBalance, a bi-level optimization framework that automatically designs a training loss function to optimize a blend of accuracy and fairness-seeking objectives. Specifically, a lower-level problem trains the model weights, and an upper-level problem tunes the loss function by monitoring and optimizing the desired objective over the validation data. Our loss design enables personalized treatment for classes/groups by employing a parametric cross-entropy loss and individualized data augmentation schemes. We evaluate the benefits and performance of our approach for the application scenarios of imbalanced and group-sensitive classification. Extensive empirical evaluations demonstrate the benefits of AutoBalance over state-of-the-art approaches. Our experimental findings are complemented with theoretical insights on loss function design and the benefits of the train-validation split. All code is available open-source.
| accept | The paper proposes a bi-level optimization framework to design a training loss for long-tail and group-fair learning. The reviews attracted a lot of back-and-forth discussion with the authors, and I thank the authors for providing very detailed responses and additional experimental results.
While there were concerns raised about the proposed approach being prone to overfitting to the validation sample, I think the authors have satisfactorily explained how they take precautions to avoid it. I think Reviewer mSPX's concern about there being limited novelty does have some merit, but given that this is one of the first few works to successfully engineer an elegant loss-tuning procedure for long-tail learning and that the experimental results are significant, I would recommend an accept.
Having said this, I strongly urge the authors to use the feedback provided to improve their manuscript, and in particular include the following promised additions to the final version:
- a detailed discussion on the risk of over-fitting and the precautions taken to avoid it (do report the validation errors, perhaps in the appendix)
- comparison to other Bayesian optimization approaches
- comparison to the additional baselines mentioned by Reviewer GFgA
Finally, here are some additional references on prior work on learning loss functions for specialized tasks, which may be of some relevance:
https://arxiv.org/pdf/1905.10108.pdf
https://arxiv.org/pdf/1803.09050.pdf | val | [
"YrFwmgjgUAr",
"jzFm8poJFQv",
"tYaRq_XDgjE",
"JB53xJYHUZ",
"Y3Ij93ilTFp",
"yERemSr_Bcf",
"Q66J0zLOcn9",
"5L0YsGOJFc9",
"aDryMgEQp2Z",
"XdNv6oyES-6",
"mbJu5OzVLq3",
"NcEmFsJAmSk",
"JZtb0LPs0lY",
"ir2f5D9psI",
"JwUMzXdmx-f",
"MVm59ynKX8W",
"NVz8uZM4l8J",
"qcSUPlGJ0SL",
"DSKeh0PLa7U... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
"This work introduces a novel algorithm (AutoBalance) to address the imbalance and group sensitive classification problems and maximise the fairness-seeking objectives. The algorithm uses a bi-level optimization framework to optimize the model weights over the training data while automatically fine-tuning the loss ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"nips_2021_ebQXflQre5a",
"Y3Ij93ilTFp",
"YBAg4_wot7j",
"nips_2021_ebQXflQre5a",
"tYaRq_XDgjE",
"Q66J0zLOcn9",
"aDryMgEQp2Z",
"NcEmFsJAmSk",
"mbJu5OzVLq3",
"DSKeh0PLa7U",
"qcSUPlGJ0SL",
"JZtb0LPs0lY",
"F0xtpexLIJ",
"YBAg4_wot7j",
"DSKeh0PLa7U",
"DSKeh0PLa7U",
"DSKeh0PLa7U",
"DSKeh0P... |
nips_2021_52YubM-VC6H | SyncTwin: Treatment Effect Estimation with Longitudinal Outcomes | Most of the medical observational studies estimate the causal treatment effects using electronic health records (EHR), where a patient's covariates and outcomes are both observed longitudinally. However, previous methods focus only on adjusting for the covariates while neglecting the temporal structure in the outcomes. To bridge the gap, this paper develops a new method, SyncTwin, that learns a patient-specific time-constant representation from the pre-treatment observations. SyncTwin issues counterfactual prediction of a target patient by constructing a synthetic twin that closely matches the target in representation. The reliability of the estimated treatment effect can be assessed by comparing the observed and synthetic pre-treatment outcomes. The medical experts can interpret the estimate by examining the most important contributing individuals to the synthetic twin. In the real-data experiment, SyncTwin successfully reproduced the findings of a randomized controlled clinical trial using observational data, which demonstrates its usability in the complex real-world EHR.
| accept | This paper proposes a new approach to estimating conditional average treatment effects based on longitudinal data. Overall, the authors were responsive to criticism raised by reviewers. However, prior to publication, two critical revisions should be made. First, explicitly state the key assumption that if the latent confounders are able to predict the pre-treatment outcomes, then they will also be able to predict the post-treatment outcomes (this is somewhat implied by Proposition 3, but never explicitly stated). Stating this assumption clearly is critical to ensuring that the approach is not inappropriately applied in the future. Second, the authors deviate from the norm in this community by reporting MAE in lieu of PEHE (or MSE), without providing a justification. In the revision, please include both MAE and MSE results, or provide a clear justification for why MSE results are omitted. | train | [
"WZ1p0g-pEe",
"ZluxQ5CM44Q",
"107E8G68MJN",
"lkflOT7VuQ8",
"VmXeH2PZ7IH",
"pZximnrqpun",
"blYZddhhobB",
"6L5IhjeH2Be",
"BTju8e05S86",
"e0g43LjC_f",
"6wc5HaG-jXB",
"dulimuGPgwZ",
"L7qAyqvu9Fh"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method, SyncTwin, to estimate treatment effects in Longitudinal and Irregularly sampled data with Point treatment (LIP) setting. The idea is to create a synthetic twin from controls that closely matches a treated target in representation. By using the pretreatment time samples of the twin to ... | [
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_52YubM-VC6H",
"107E8G68MJN",
"blYZddhhobB",
"L7qAyqvu9Fh",
"dulimuGPgwZ",
"6wc5HaG-jXB",
"WZ1p0g-pEe",
"L7qAyqvu9Fh",
"dulimuGPgwZ",
"6wc5HaG-jXB",
"nips_2021_52YubM-VC6H",
"nips_2021_52YubM-VC6H",
"nips_2021_52YubM-VC6H"
] |
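The construction in the abstract above generalizes the classical synthetic-control idea; stripped of the learned representation, that core step is a simplex-constrained least squares over control patients' pre-treatment outcomes, with the pre-treatment fit serving as the reliability check. The projected-gradient solver, step sizes, and shapes below are our choices, not the paper's.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based rule)."""
    u = np.sort(v)[::-1]
    cssv = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > cssv)[0][-1]
    return np.maximum(v - cssv[rho] / (rho + 1.0), 0.0)

def synthetic_twin_weights(y_target_pre, Y_controls_pre, steps=3000, lr=0.01):
    """Simplex-constrained least squares: nonnegative weights (summing to 1)
    over controls whose pre-treatment outcomes reconstruct the target's."""
    n = Y_controls_pre.shape[0]
    w = np.ones(n) / n
    for _ in range(steps):
        resid = Y_controls_pre.T @ w - y_target_pre
        w = project_simplex(w - lr * (Y_controls_pre @ resid))
    pre_fit_error = np.linalg.norm(Y_controls_pre.T @ w - y_target_pre)
    return w, pre_fit_error

rng = np.random.default_rng(0)
Y_controls = rng.normal(size=(30, 12))  # 30 controls x 12 pre-treatment times
y_target = 0.6 * Y_controls[3] + 0.4 * Y_controls[7]
w, err = synthetic_twin_weights(y_target, Y_controls)
print(float(err))  # pre-treatment fit; a small value flags a reliable twin
```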
nips_2021_7pU_P1IbePx | Statistical Query Lower Bounds for List-Decodable Linear Regression | Ilias Diakonikolas, Daniel Kane, Ankit Pensia, Thanasis Pittas, Alistair Stewart | accept | This paper proves an SQ lower bound for the problem of list-decodable linear regression over Gaussian covariates, suggesting that an algorithm needs a running time of d^{poly(1/\alpha)} to solve the problem in d dimensions with an \alpha fraction of inliers. This matches the performance of the best-known algorithm for the problem and shows that list-decodable linear regression is likely harder than the related special case of mixed linear regression, which admits a sub-exponential time algorithm (in 1/\alpha).
In the authors' response, it was clarified that while SQ lower bounds don't apply to SoS-based algorithms, the authors believe that theirs does apply to the specific algorithm employed in prior work on this problem. This is a somewhat non-trivial statement and was brought up in the reviewer discussion -- we recommend adding a formal proof of this claim to the camera-ready version.
The authors' response also clarified that prior works on SQ lower bounds only yield a lower bound on the running time that is a fixed polynomial in d, resolving the concern about the relationship to prior works on SQ lower bounds for related problems.
The paper proves an interesting lower bound on an important problem in algorithmic robust statistics and shows an interesting "separation" between list-decodable learning and the problem of learning components of a mixture. The techniques build on and improve on the SQ lower bounds proven in prior works. We recommend acceptance. | train | [
"Pl-EPqXwWGH",
"HeYQkvMMAd",
"CvPRDN9Chga",
"At717PiTt4-",
"TUKccDyzJXG",
"iJHqV2ty0om",
"hAAHVevxTTq",
"2FYb8mcOxGj",
"jnWitAlVmdT"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper the list-decodable linear regression problem, where the adversary can corrupt a fraction alpha>0.5 of the labels, and the goal is to return a list of vectors of size f(alpha) such that at least one of the vectors is close to the true regression vector. The paper proves computational-statistical tradeoffs... | [
7,
-1,
-1,
-1,
-1,
-1,
8,
8,
8
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_7pU_P1IbePx",
"Pl-EPqXwWGH",
"jnWitAlVmdT",
"2FYb8mcOxGj",
"hAAHVevxTTq",
"nips_2021_7pU_P1IbePx",
"nips_2021_7pU_P1IbePx",
"nips_2021_7pU_P1IbePx",
"nips_2021_7pU_P1IbePx"
] |
nips_2021_vCthaJ4ywT | Unsupervised Motion Representation Learning with Capsule Autoencoders | We propose the Motion Capsule Autoencoder (MCAE), which addresses a key challenge in the unsupervised learning of motion representations: transformation invariance. MCAE models motion in a two-level hierarchy. In the lower level, a spatio-temporal motion signal is divided into short, local, and semantic-agnostic snippets. In the higher level, the snippets are aggregated to form full-length semantic-aware segments. For both levels, we represent motion with a set of learned transformation invariant templates and the corresponding geometric transformations by using capsule autoencoders of a novel design. This leads to a robust and efficient encoding of viewpoint changes. MCAE is evaluated on a novel Trajectory20 motion dataset and various real-world skeleton-based human action datasets. Notably, it achieves better results than baselines on Trajectory20 with considerably fewer parameters and state-of-the-art performance on the unsupervised skeleton-based action recognition task.
| accept | This work introduces a novel capsule architecture that learns motion patterns in an unsupervised manner. Evaluation on synthetic and real-world skeleton-based human action datasets demonstrates the model's good performance when compared to previous state-of-the-art methods.
Using the formulation of capsule autoencoders to explicitly learn transformable trajectory templates through self-supervised learning is novel and an interesting addition to the capsule network literature.
| test | [
"-9iymStFdh",
"L7eV2fMlyfQ",
"Gwar_TDnLnT",
"ahsj8IT2UaH",
"iiM3L0f_Z4v",
"i6FZYYp2rO4",
"DAttrJOjlld",
"wjr18MWcg4Z"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a motion representation learning method based on capsule autoencoders. It decomposes an input 2D keypoint trajectory into snippets and segments, and form a hierarchical representation that is also transformation invariant. The proposed motion representation achieves state-of-the-art motion class... | [
7,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_vCthaJ4ywT",
"-9iymStFdh",
"i6FZYYp2rO4",
"wjr18MWcg4Z",
"DAttrJOjlld",
"nips_2021_vCthaJ4ywT",
"nips_2021_vCthaJ4ywT",
"nips_2021_vCthaJ4ywT"
] |
nips_2021_sYNr-OqGC9m | VigDet: Knowledge Informed Neural Temporal Point Process for Coordination Detection on Social Media | Recent years have witnessed an increasing use of coordinated accounts on social media, operated by misinformation campaigns to influence public opinion and manipulate social outcomes. Consequently, there is an urgent need to develop an effective methodology for coordinated group detection to combat misinformation on social media. However, existing works suffer from various drawbacks, such as limited performance due to extreme reliance on predefined signatures of coordination, or an inability to address the natural sparsity of account activities on social media with useful prior domain knowledge. Therefore, in this paper, we propose a coordination detection framework incorporating a neural temporal point process with prior knowledge such as temporal logic or pre-defined filtering functions. Specifically, when modeling the observed data from social media with a neural temporal point process, we jointly learn a Gibbs-like distribution of group assignment based on how consistent an assignment is with (1) the account embedding space and (2) the prior knowledge. To address the challenge that the distribution is hard to compute and sample from efficiently, we design a theoretically guaranteed variational inference approach to learn a mean-field approximation for it. Experimental results on a real-world dataset show the effectiveness of our proposed method compared to the SOTA model in both unsupervised and semi-supervised settings. We further apply our model to a COVID-19 Vaccine Tweets dataset. The detection result suggests the presence of suspicious coordinated efforts to spread misinformation about COVID-19 vaccines.
| accept | This paper proposes a method, based on temporal point processes, for detecting groups of coordinated social media accounts engaged in misinformation campaigns. All the reviewers recognize that the application is very important as well as challenging, and that the methodology, even if not groundbreaking, is reasonable for the problem at hand. They found the experimental evaluation solid. | train | [
"zT2ZtzziBoP",
"EWIBUk-QwR",
"D9kZo9-5zQB",
"bElWMNiN4jN",
"xNX1HG59JO-",
"Kklai1_DwIU",
"gcKANCmfUW9",
"TRJJKO5Ik2B",
"uBDV2nEgY6"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer w5yn, \n\nIs there still any more clarification required or more questions based on the initial responses? We are willing to provide more clarifications if so. \n\n\nBest regards,\n\nAuthors of Paper4983",
"This paper proposes a method for detecting groups of coordinated social media accounts whic... | [
-1,
7,
7,
-1,
-1,
-1,
-1,
7,
5
] | [
-1,
4,
3,
-1,
-1,
-1,
-1,
3,
3
] | [
"uBDV2nEgY6",
"nips_2021_sYNr-OqGC9m",
"nips_2021_sYNr-OqGC9m",
"EWIBUk-QwR",
"uBDV2nEgY6",
"D9kZo9-5zQB",
"TRJJKO5Ik2B",
"nips_2021_sYNr-OqGC9m",
"nips_2021_sYNr-OqGC9m"
] |
nips_2021_x2lBl0GRav5 | An Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders | When applying a stochastic algorithm, one must choose an order to draw samples. The practical choices are without-replacement sampling orders, which are empirically faster and more cache-friendly than uniform-iid-sampling but often have inferior theoretical guarantees. Without-replacement sampling is well understood only for SGD without variance reduction. In this paper, we improve the convergence analysis and rates of variance reduction under without-replacement sampling orders for composite finite-sum minimization. Our results are two-fold. First, we develop a damped variant of Finito called Prox-DFinito and establish its convergence rates with random reshuffling, cyclic sampling, and shuffling-once, under both general convex and strongly convex scenarios. These rates match full-batch gradient descent and are state-of-the-art compared to the existing results for without-replacement sampling with variance-reduction. Second, our analysis can gauge how the cyclic order will influence the rate of cyclic sampling and, thus, allows us to derive the optimal fixed ordering. In the highly data-heterogeneous scenario, Prox-DFinito with optimal cyclic sampling can attain a sample-size-independent convergence rate, which, to our knowledge, is the first result that can match uniform-iid-sampling with variance reduction. We also propose a practical method to discover the optimal cyclic ordering numerically.
| accept | This paper is favored by two of four expert reviewers and is acceptable to the other two. Reviewers agree that it is clearly written and organized. There is also general agreement that parts of the analysis are new and insightful, and possibly useful outside of this paper, in particular the "order-specific" norm that is introduced.
Some reviewers make valid points about the limitations of the end result. For instance, reviewer VeN3 points out that the convergence rate is not necessarily encouraging even if it matches plain GD since the paper's method has memory requirements that scale linearly with the dataset size. The relative advantage then happens in an optimistic case where an optimal cyclic ordering is known (in practice, this ordering is replaced by a heuristic) and when the data is sufficiently heterogeneous (a technical condition detailed in the paper).
I will add that the reviews on this paper were quite thorough, and that there was detailed discussion among reviewers and authors. Ultimately, these exchanges addressed many reviewer concerns and left the authors with suggestions for improvement that might only strengthen the writing further. They've even led to some additional empirical measurements and several references to related work. | train | [
"Qss3amXmt7r",
"5c3VGMOK1xL",
"yTzmRTG-2HS",
"iIalgQtGN7",
"vboRlMWba7W",
"yRI-LvhFLqw",
"NnynbOLFxQ",
"f6hnR8XPeO9",
"p6hyzmz2smU",
"nI0wfw6rK8o",
"9ZxNpAVgfEi",
"BratK_cQtMO",
"zVPuYD4HAv",
"cQiobiDf28M"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for increasing the rating. We have read the reviewer's final comments carefully. \n\nWe are happy that our response has addressed most concerns of the reviewer. As to the limitation in the measure of the suboptimality, we will clearly acknowledge it in the revision. Below is our response to ... | [
-1,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"5c3VGMOK1xL",
"nI0wfw6rK8o",
"BratK_cQtMO",
"nips_2021_x2lBl0GRav5",
"NnynbOLFxQ",
"nips_2021_x2lBl0GRav5",
"f6hnR8XPeO9",
"yRI-LvhFLqw",
"yRI-LvhFLqw",
"iIalgQtGN7",
"zVPuYD4HAv",
"cQiobiDf28M",
"nips_2021_x2lBl0GRav5",
"nips_2021_x2lBl0GRav5"
] |
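The Prox-DFinito record above revolves around without-replacement sampling orders (random reshuffling, shuffling-once, cyclic) versus uniform-iid sampling. The toy below only illustrates those four orders on a least-squares problem with plain SGD; it is not the paper's damped Finito variant, and the problem size, step size, and epoch count are illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
A = rng.normal(size=(n, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.01 * rng.normal(size=n)

def sgd(order_fn, epochs=50, lr=0.01):
    """Plain SGD on 0.5*||Ax - b||^2 / n with a per-epoch sample order."""
    x = np.zeros(d)
    for _ in range(epochs):
        for i in order_fn():
            x -= lr * (A[i] @ x - b[i]) * A[i]   # gradient of one summand
    return np.linalg.norm(x - x_star)

perm = rng.permutation(n)                         # fixed shuffle-once order
orders = {
    "uniform-iid":      lambda: rng.integers(0, n, size=n),
    "random-reshuffle": lambda: rng.permutation(n),
    "shuffle-once":     lambda: perm,
    "cyclic":           lambda: range(n),
}
for name, fn in orders.items():
    print(f"{name:>16}: ||x - x*|| = {sgd(fn):.4f}")
```

All four orders reach comparable accuracy on this easy toy; the paper's contribution is the sharper theory (and an optimal cyclic ordering) for the variance-reduced setting.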
nips_2021_YN4TMf3sv52 | Exploring Forensic Dental Identification with Deep Learning | Dental forensic identification aims to identify persons from dental traces. The task is vital for the investigation of criminal scenes and mass disasters because of the resilience of dental structures and the wide availability of dental imaging. However, no widely accepted automated solution is available for this labour-intensive task. In this work, we pioneer the study of deep learning for dental forensic identification based on panoramic radiographs. We construct a comprehensive benchmark with various dental variations that can adequately reflect the difficulties of the task. By considering the task's unique challenges, we propose FoID, a deep learning method featuring: (\textit{i}) clinically-inspired attention localization, (\textit{ii}) domain-specific augmentations that enable instance discriminative learning, and (\textit{iii}) a transformer-based self-attention mechanism that dynamically reasons about the relative importance of attentions. We show that FoID can outperform traditional approaches by at least \textbf{22.98\%} in terms of Rank-1 accuracy, and outperform strong CNN baselines by at least \textbf{10.50\%} in terms of mean Average Precision (mAP). Moreover, extensive ablation studies verify the effectiveness of each building block of FoID. Our work can be a first step towards an automated system for forensic identification across large-scale multi-site databases. The proposed techniques, \textit{e.g.}, the self-attention mechanism, can also be meaningful for other identification tasks, \textit{e.g.}, pedestrian re-identification. Related data and code can be found at \href{https://github.com/liangyuandg/FoID}{https://github.com/liangyuandg/FoID}.
| accept | This paper generated significant debate, both on technical contribution considerations (more general methodology) and ethics considerations. Thank you to the reviewers and authors for the detailed discussion.
On the ethics considerations, the authors have responded saying they will include a more detailed discussion of potential misuses, and importantly, results across different subpopulations. The authors should include as many of these subgroups as possible (across gender, race and multiple age brackets) along with discussion on any places where the method performs below average.
On the technical considerations, a central debate was whether the proposed methods are too specialized to this application. After careful review of the discussion, I think this paper can be viewed as an applied contribution that should be of interest to others in the community who are studying this application. | train | [
"xSfd91fTyT",
"d306Eg5Bif",
"3axG_8rYtBH",
"Gf2ocHdKFEl",
"mpUXmLEyqW4",
"xj6B_V__Mfw",
"clfnp-Pve7g",
"a26dKVvyRRz",
"edBQzTLX5Nk",
"mHEhQ4NTCg",
"WK8pz-zSzDz",
"HhpIkQ_H24S",
"StfTtESKqQw",
"Gd2NYdLACVP",
"8ZPjbHUVNzQ",
"m9B-5CP5if_",
"Nvrp3d44Ms6"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_revi... | [
" I realize I didn't respond here, but I just wanted to let you know that I read and considered your rebuttal and have entered in conversation with other reviewers. Thanks for the response both to me, and the update to the ethic reviewer; I have no further clarifications at this time!",
" Thank you for your respo... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"StfTtESKqQw",
"3axG_8rYtBH",
"edBQzTLX5Nk",
"nips_2021_YN4TMf3sv52",
"nips_2021_YN4TMf3sv52",
"WK8pz-zSzDz",
"Gd2NYdLACVP",
"nips_2021_YN4TMf3sv52",
"nips_2021_YN4TMf3sv52",
"nips_2021_YN4TMf3sv52",
"Nvrp3d44Ms6",
"mpUXmLEyqW4",
"m9B-5CP5if_",
"8ZPjbHUVNzQ",
"nips_2021_YN4TMf3sv52",
"... |
nips_2021_Wua2zjxJdYo | Learning to Generate Realistic Noisy Images via Pixel-level Noise-aware Adversarial Training | Existing deep learning real denoising methods require a large number of noisy-clean image pairs for supervision. Nonetheless, capturing a real noisy-clean dataset is an unacceptably expensive and cumbersome procedure. To alleviate this problem, this work investigates how to generate realistic noisy images. Firstly, we formulate a simple yet reasonable noise model that treats each real noisy pixel as a random variable. This model splits the noisy image generation problem into two sub-problems: image domain alignment and noise domain alignment. Subsequently, we propose a novel framework, namely Pixel-level Noise-aware Generative Adversarial Network (PNGAN). PNGAN employs a pre-trained real denoiser to map the fake and real noisy images into a nearly noise-free solution space to perform image domain alignment. Simultaneously, PNGAN establishes pixel-level adversarial training to conduct noise domain alignment. Additionally, for better noise fitting, we present an efficient architecture, the Simple Multi-scale Network (SMNet), as the generator. Qualitative validation shows that noise generated by PNGAN is highly similar to real noise in terms of intensity and distribution. Quantitative experiments demonstrate that a series of denoisers trained with the generated noisy images achieve state-of-the-art (SOTA) results on four real denoising benchmarks.
| accept | This paper proposes a novel idea to synthesize real world noisy images for image denoising.
The main idea is to treat each noisy pixel as a random variable and then split noisy image generation into image domain alignment and noise domain alignment. Based on the idea, a GAN method is proposed to perform pixel-level adversarial training. The idea is interesting and novel, and the effectiveness of the proposed method is demonstrated in the experiments by comparing it with SOTA methods and detailed ablation studies. All the reviewers are also impressed by the idea and the promising experimental results. However, a major drawback of the work is the lack of a theoretical guarantee on the distribution gap between the real noisy images and the generated noisy images. Since this is the first work that moves towards real-world noisy image generation, I recommend acceptance given the novelty and the practical significance of the proposed work.
| train | [
"AwmVv4R5Wba",
"Oecx7LSvNNW",
"UjeNttlzqFj",
"E2x4l-qjgky",
"DMbN-ONUtdC",
"3AZ3cJH65DO",
"9q-gXZ2z-9v",
"ExK6RSdyS_I",
"fcJC5BM2ltW",
"lSUYWmO7cCA",
"aC0MMI-7ta"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" All my concerns are well addressed in the rebuttal, and I would like to keep my initial rating.",
" The authors have addressed most of my concerns. I agree to accept this paper.",
"The authors propose a pixel-level noise-aware generative adversarial network (PNGAN). They improve the denoising performance by g... | [
-1,
-1,
8,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
5,
4,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"ExK6RSdyS_I",
"3AZ3cJH65DO",
"nips_2021_Wua2zjxJdYo",
"nips_2021_Wua2zjxJdYo",
"9q-gXZ2z-9v",
"aC0MMI-7ta",
"E2x4l-qjgky",
"UjeNttlzqFj",
"lSUYWmO7cCA",
"nips_2021_Wua2zjxJdYo",
"nips_2021_Wua2zjxJdYo"
] |
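For context on the PNGAN record above, which treats each real noisy pixel as a random variable: a common hand-crafted baseline is a signal-dependent (heteroscedastic) Gaussian noise model, sketched below with illustrative `shot`/`read` parameters of my choosing. PNGAN's point is precisely that such fixed models miss real noise statistics, which it instead learns adversarially.

```python
import numpy as np

def synthesize_noisy(clean, shot=0.01, read=0.0005, rng=None):
    """Signal-dependent (heteroscedastic) Gaussian noise on a clean image."""
    rng = np.random.default_rng(0) if rng is None else rng
    var = shot * clean + read          # noise variance grows with intensity
    return np.clip(clean + rng.normal(size=clean.shape) * np.sqrt(var), 0, 1)

clean = np.tile(np.linspace(0, 1, 256), (256, 1))   # left-dark, right-bright
noisy = synthesize_noisy(clean)
res = noisy - clean
print("noise std, dark half:   %.4f" % res[:, :128].std())
print("noise std, bright half: %.4f" % res[:, 128:].std())
```

The residual standard deviation grows with pixel intensity, mimicking shot noise.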
nips_2021_hwoK62_GkiT | Multi-Agent Reinforcement Learning for Active Voltage Control on Power Distribution Networks | This paper presents a problem in power networks that creates an exciting and yet challenging real-world scenario for the application of multi-agent reinforcement learning (MARL). The emerging trend of decarbonisation is placing excessive stress on power distribution networks. Active voltage control is seen as a promising solution to relieve power congestion and improve voltage quality without extra hardware investment, taking advantage of the controllable apparatuses in the network, such as roof-top photovoltaics (PVs) and static var compensators (SVCs). These controllable apparatuses appear in vast numbers and are distributed over a wide geographic area, making MARL a natural candidate. This paper formulates the active voltage control problem in the framework of Dec-POMDP and establishes an open-source environment. It aims to bridge the gap between the power community and the MARL community and to be a driving force towards real-world applications of MARL algorithms. Finally, we analyse the special characteristics of the active voltage control problem that cause challenges for state-of-the-art MARL approaches, and summarise the potential directions.
| accept | This paper presents a multi-agent RL environment for active voltage control on power distribution networks. This could be a new, interesting, and practicality-oriented benchmark environment for multi-agent RL research. Most of the reviewers' concerns were (1) the lack of access to the implementation of the simulator, in response to which the authors explicitly promised to make the code public if the paper is accepted, and (2) the lack of a reference to another power network environment, L2RPN / GridOp, which is not exactly active voltage control but should have been referenced. The AC is recommending borderline accept, and if the paper gets accepted, please address (1) and provide a detailed comparison to L2RPN / GridOp in the related work section, so that readers understand what exact power network problem the paper is tackling. | train | [
"zyv9PGVVQiQ",
"boR0G8iOW9",
"o2PZ6q9HrMP",
"MAks7v8xRrb",
"OapRWdLwBW_",
"U61D-dQzYGC",
"tXgcMbhJ-JP",
"BEp7BrxC4AB",
"bXyGuMOvajC",
"lK-PPH0r6e7",
"8GJfIU49PMl",
"aEac0DRi0I4",
"tNeAMlfxcEs",
"u1ghmihn9IJ",
"Jg19OB6X2ZK",
"1KKNbxqrGXb"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your comments. We understand the concerns from you, however, we promise the detailed information of usage for the environment will be given. ",
" Thanks for your clarifications. \n\nUnfortunately, I cannot increase my score based only on the authors' promise that they will include all the necessary ... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"boR0G8iOW9",
"aEac0DRi0I4",
"nips_2021_hwoK62_GkiT",
"OapRWdLwBW_",
"tNeAMlfxcEs",
"bXyGuMOvajC",
"bXyGuMOvajC",
"nips_2021_hwoK62_GkiT",
"8GJfIU49PMl",
"nips_2021_hwoK62_GkiT",
"BEp7BrxC4AB",
"1KKNbxqrGXb",
"Jg19OB6X2ZK",
"o2PZ6q9HrMP",
"nips_2021_hwoK62_GkiT",
"nips_2021_hwoK62_GkiT... |
nips_2021_MSVlSMBbBt | Looking Beyond Single Images for Contrastive Semantic Segmentation Learning | We present an approach to contrastive representation learning for semantic segmentation. Our approach leverages the representational power of existing feature extractors to find corresponding regions across images. These cross-image correspondences are used as auxiliary labels to guide the pixel-level selection of positive and negative samples for more effective contrastive learning in semantic segmentation. We show that auxiliary labels can be generated from a variety of feature extractors, ranging from image classification networks that have been trained using unsupervised contrastive learning to segmentation models that have been trained on a small amount of labeled data. We additionally introduce a novel metric for rapidly judging the quality of a given auxiliary-labeling strategy, and empirically analyze various factors that influence the performance of contrastive learning for semantic segmentation. We demonstrate the effectiveness of our method both in the low-data as well as the high-data regime on various datasets. Our experiments show that contrastive learning with our auxiliary-labeling approach consistently boosts semantic segmentation accuracy when compared to standard ImageNet pretraining and outperforms existing approaches of contrastive and semi-supervised semantic segmentation.
| accept | 3 expert reviewers recommend acceptance but not without a little bit of hesitation.
The paper has a good experimental section and really drives home the point about the value of obtaining positives through correspondence across different scenes for contrastive learning (of segmentation in this case). The actual techniques proposed to achieve this were not the most elegant and there are some closely related papers, but overall it seems like a valuable paper. | train | [
"nI3okVFr6rH",
"Y18WWio2Rbd",
"NTxWbcAjzPL",
"cMH-dJs10q8",
"h2updYbni9E",
"wIVoPM_lBQ",
"wpf8XBuv_1D",
"maPeMwT2AOr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper presents the data collection strategy for tackling contrastive representation leaning on semantic segmentation task. The core idea is the auxiliary labels, which guide the pixel-level training sample collection across images. In addition, the authors provide a metric, i.e., PmIoU, to evaluate the quality... | [
6,
-1,
6,
6,
-1,
-1,
-1,
-1
] | [
3,
-1,
4,
4,
-1,
-1,
-1,
-1
] | [
"nips_2021_MSVlSMBbBt",
"nI3okVFr6rH",
"nips_2021_MSVlSMBbBt",
"nips_2021_MSVlSMBbBt",
"wIVoPM_lBQ",
"cMH-dJs10q8",
"NTxWbcAjzPL",
"nips_2021_MSVlSMBbBt"
] |
nips_2021_wFuWSdCD7BN | A Constant Approximation Algorithm for Sequential Random-Order No-Substitution k-Median Clustering | We study k-median clustering under the sequential no-substitution setting. In this setting, a data stream is sequentially observed, and some of the points are selected by the algorithm as cluster centers. However, a point can be selected as a center only immediately after it is observed, before observing the next point. In addition, a selected center cannot be substituted later. We give the first algorithm for this setting that obtains a constant approximation factor on the optimal cost under a random arrival order, an exponential improvement over previous work. This is also the first constant approximation guarantee that holds without any structural assumptions on the input data. Moreover, the number of selected centers is only quasi-linear in k. Our algorithm and analysis are based on a careful cost estimation that avoids outliers, a new concept of a linear bin division, and a multi-scale approach to center selection.
| accept | The authors present the first constant approximation factor on the optimal risk for k-median with no-substitution under a random arrival order model. The problem studied in the paper is interesting, and the authors present a simple and elegant algorithm that outperforms previous work. The only concern with the paper is that the problem may not be of interest to a wide audience. Nevertheless, after the discussion phase, all the reviewers agree that it should be accepted. | val | [
"kLWxXBSdfL1",
"Ij6yH74TkXs",
"C61mvpkS8Ac",
"p8dZgFgGehC",
"ryw1ZYSnwNa",
"JGxNCIWqut",
"AX9u24Q3Mli",
"Sk5n6CxVJ8M",
"ilGy9qDbvtV",
"mTBENld9Rzg"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors design a sequential random-order no-substitution algorithm for k-median clustering. Their algorithm outputs O(k\\log^2 k) centers and gives constant factor approximation with high probability. The algorithm does not depend on any assumption on the dataset and gives substantially better guarantees over ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_wFuWSdCD7BN",
"C61mvpkS8Ac",
"ryw1ZYSnwNa",
"ilGy9qDbvtV",
"mTBENld9Rzg",
"kLWxXBSdfL1",
"Sk5n6CxVJ8M",
"nips_2021_wFuWSdCD7BN",
"nips_2021_wFuWSdCD7BN",
"nips_2021_wFuWSdCD7BN"
] |
nips_2021_2rR3aBnhCaP | Dangers of Bayesian Model Averaging under Covariate Shift | Approximate Bayesian inference for neural networks is considered a robust alternative to standard training, often providing good performance on out-of-distribution data. However, Bayesian neural networks (BNNs) with high-fidelity approximate inference via full-batch Hamiltonian Monte Carlo achieve poor generalization under covariate shift, even underperforming classical estimation. We explain this surprising result, showing how a Bayesian model average can in fact be problematic under covariate shift, particularly in cases where linear dependencies in the input features cause a lack of posterior contraction. We additionally show why the same issue does not affect many approximate inference procedures, or classical maximum a-posteriori (MAP) training. Finally, we propose novel priors that improve the robustness of BNNs to many sources of covariate shift.
| accept | The paper explores when predictions relying on the posterior predictive distribution of a Bayesian neural network can be poor in the presence of covariate shift and provides a plausible explanation. Weakly informative likelihoods cause a lack of posterior contraction, so the posterior reverts to the prior for a subset of the model parameters. The resulting posterior predictive reverts to the prior predictive, which can be detrimental under covariate shift.
The paper convincingly demonstrates the empirical phenomenon of Bayesian neural networks struggling under certain types of covariate shifts and proposes a plausible explanation for the observed phenomenon. It is solid in terms of its empirical analysis. Furthermore, it presents a new prior that does not suffer from the downsides of the conventional priors commonly used.
Initial doubts about a lack of samples to evaluate the posterior predictive distribution were successfully addressed in the discussion period. Likewise, initial concerns about a lack of novelty with respect to [Izmailov et al., ICML 2021] were also clarified. Another concern was that the proposed phenomenon might not be the only explanation for BNNs failing in OOD situations, but the reviewers didn't consider this a significant weakness of the paper. | test | [
"504EOMQ2QV",
"EgHh3PtTi6y",
"4yR3sPPkhKn",
"2NfbDhSfTxM",
"fL5xlX2yCyZ",
"cMs6TnfeaH",
"85QxbFreSky",
"YSOxFk-Lo7n",
"fOs7mcssuOO",
"BaxJXRv_v_W",
"u7DgskrtEgv",
"Erh6NrCuxSi",
"BYRHxeaEB0i"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" The additional experiments and clarifications are helpful. I maintain that this a good paper worthy of publication.",
"The paper conducts empirical and theoretical analysis on why Bayesian neural networks (BNNs) with high-fidelity inference schemes (HMC) are vulnerable to out-of-distribution data. The authors e... | [
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
7
] | [
-1,
3,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"BaxJXRv_v_W",
"nips_2021_2rR3aBnhCaP",
"fOs7mcssuOO",
"85QxbFreSky",
"85QxbFreSky",
"nips_2021_2rR3aBnhCaP",
"cMs6TnfeaH",
"nips_2021_2rR3aBnhCaP",
"EgHh3PtTi6y",
"BYRHxeaEB0i",
"Erh6NrCuxSi",
"nips_2021_2rR3aBnhCaP",
"nips_2021_2rR3aBnhCaP"
] |
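The mechanism summarized in the meta review above — no posterior contraction along input directions absent from training, so the prior leaks into OOD predictions — is exact for Bayesian linear regression, where the posterior is available in closed form. The toy below is my illustration (all dimensions and prior/noise scales are assumptions), not the paper's experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
X[:, 2] = 0.0                        # feature 3 is identically zero in training
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

alpha, sigma2 = 1.0, 0.1 ** 2        # prior precision, noise variance
S_inv = alpha * np.eye(d) + X.T @ X / sigma2
S = np.linalg.inv(S_inv)             # posterior covariance
m = S @ (X.T @ y) / sigma2           # posterior mean

print("posterior std per weight:", np.sqrt(np.diag(S)))
# stds for w1, w2 contract toward 0; std for w3 stays at the prior value 1.0

x_ood = np.array([0.0, 0.0, 5.0])    # activates the unseen direction
pred_var = sigma2 + x_ood @ S @ x_ood
print("OOD predictive std:", np.sqrt(pred_var))  # dominated by the prior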
nips_2021_TgDTMyA9Nk | Learning Equilibria in Matching Markets from Bandit Feedback | Large-scale, two-sided matching platforms must find market outcomes that align with user preferences while simultaneously learning these preferences from data. But since preferences are inherently uncertain during learning, the classical notion of stability (Gale and Shapley, 1962; Shapley and Shubik, 1971) is unattainable in these settings. To bridge this gap, we develop a framework and algorithms for learning stable market outcomes under uncertainty. Our primary setting is matching with transferable utilities, where the platform both matches agents and sets monetary transfers between them. We design an incentive-aware learning objective that captures the distance of a market outcome from equilibrium. Using this objective, we analyze the complexity of learning as a function of preference structure, casting learning as a stochastic multi-armed bandit problem. Algorithmically, we show that "optimism in the face of uncertainty," the principle underlying many bandit algorithms, applies to a primal-dual formulation of matching with transfers and leads to near-optimal regret bounds. Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
| accept | Reviewers unanimously supported the paper - the approach is simple and the introduced subsidy idea aligns well with platform markets' realities, the characterization of the problem is reasonably complete, exposition is good, and the results are clean. The most critical reviewer, SoA5, brings up solid points about regret and exposition/motivation around the subsidy, and in future versions of this work it would be beneficial for the authors to address these concerns directly. | val | [
"o-DGTipWek",
"zorn0OSjHT5",
"1UoXebLP8hk",
"Dc4JaTs8e17",
"yiAhdwC_znk",
"5nbS2Qv7OZa",
"0h29_iDdSMe",
"Naf6nklSjh",
"s0ln_PAjFZR",
"9OV_-39BEl",
"guXXfblZOx"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed response. I find the response satisfactory. I decide to retain my score which was not influenced much by the drawbacks.\n\nNon-uniqueness of stable outcomes: By sticking to a specific primal-dual pair in line 4, will the solution (not just the reward) converge? Such convergence (or lack of... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
9,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
5
] | [
"zorn0OSjHT5",
"guXXfblZOx",
"9OV_-39BEl",
"s0ln_PAjFZR",
"Naf6nklSjh",
"nips_2021_TgDTMyA9Nk",
"nips_2021_TgDTMyA9Nk",
"nips_2021_TgDTMyA9Nk",
"nips_2021_TgDTMyA9Nk",
"nips_2021_TgDTMyA9Nk",
"nips_2021_TgDTMyA9Nk"
] |
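A stripped-down sketch of the optimism principle the matching-markets abstract above applies: maintain upper confidence bounds on unknown pairwise match utilities and re-match each round with a max-weight matching. The paper's actual algorithm additionally sets monetary transfers via a primal-dual formulation, which this sketch omits; the market size, noise level, and bonus form are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
k = 4                                    # agents on each side of the market
mu = rng.uniform(0, 1, size=(k, k))      # true mean match utilities (unknown)

counts = np.zeros((k, k))
sums = np.zeros((k, k))
for t in range(1, 2001):
    means = sums / np.maximum(counts, 1)
    bonus = np.sqrt(2 * np.log(t) / np.maximum(counts, 1))
    ucb = np.where(counts > 0, means + bonus, 1e6)   # force initial exploration
    rows, cols = linear_sum_assignment(ucb, maximize=True)  # max-weight match
    for i, j in zip(rows, cols):
        counts[i, j] += 1
        sums[i, j] += mu[i, j] + 0.1 * rng.normal()  # noisy utility feedback

opt = mu[linear_sum_assignment(mu, maximize=True)].sum()
print(f"optimal matching value {opt:.3f}, final matching value "
      f"{mu[rows, cols].sum():.3f}")
```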
nips_2021_0OWwNh-4in1 | Towards Lower Bounds on the Depth of ReLU Neural Networks | We contribute to a better understanding of the class of functions that is represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning tasks. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). This problem has potential impact on algorithmic and statistical aspects because of the insight it provides into the class of functions represented by neural hypothesis classes. However, to the best of our knowledge, this question has not been investigated in the neural network literature. We also present upper bounds on the sizes of neural networks required to represent functions in these neural hypothesis classes.
| accept | Three reviewers recommend accept and two reviewers recommend weak reject. A main criticism is the unclear significance of the partial results and the unclear relevance to practical machine learning. After weighing the discussion and evaluating the submission, I agree with the positive reviewers that the problem formulation, the approach and the tools presented are sufficiently interesting and relevant to the theoretical understanding of neural networks. Hence I am recommending accept. I ask the authors to add the details promised in the discussion in the final version of the paper and to take the reviewers' suggestions carefully into consideration when preparing it. | train | [
"nJBarqGmGxH",
"1vaOCXZgOjD",
"WzLogsS8edw",
"VOqDTwmRQNE",
"pcmBEW8SAes",
"Pricv70pRuq",
"o6S8qCEGDn",
"kQg2C8KhTH",
"ryos_5wDbM_",
"nt_vgvy_CNv",
"FgVt6Vp89Zf",
"vsU9XC1SBFL"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the response. I still am not convinced that the results in this work are strong enough and I choose to keep my score the same.",
"EDIT: After reading authors' responses, I decided to keep my score as is.\n\nThe paper studies the problem of exact representation of continuous functions via... | [
-1,
5,
-1,
7,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"kQg2C8KhTH",
"nips_2021_0OWwNh-4in1",
"Pricv70pRuq",
"nips_2021_0OWwNh-4in1",
"VOqDTwmRQNE",
"nt_vgvy_CNv",
"vsU9XC1SBFL",
"FgVt6Vp89Zf",
"1vaOCXZgOjD",
"nips_2021_0OWwNh-4in1",
"nips_2021_0OWwNh-4in1",
"nips_2021_0OWwNh-4in1"
] |
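A concrete member of the function classes discussed in the record above: max(x1, ..., x4) is exactly representable by composing the identities max(a, b) = (a + b)/2 + |a - b|/2 and |z| = relu(z) + relu(-z), so each pairwise max costs one hidden ReLU layer and a max over 2^k inputs needs k layers in this construction. The paper asks whether such depth is ever necessary; the snippet below only verifies the construction.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max2(a, b):
    # One hidden ReLU layer: max(a,b) = (a+b)/2 + (relu(a-b) + relu(b-a))/2
    return 0.5 * (a + b) + 0.5 * (relu(a - b) + relu(b - a))

def max4(x1, x2, x3, x4):
    # Two ReLU layers: a binary tree of pairwise maxima
    return max2(max2(x1, x2), max2(x3, x4))

x = np.random.default_rng(0).normal(size=(1000, 4))
assert np.allclose(max4(x[:, 0], x[:, 1], x[:, 2], x[:, 3]), x.max(axis=1))
print("max of 4 inputs realized exactly with two hidden ReLU layers")
```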
nips_2021_YDepgWDUDXx | The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective | Large width limits have been a recent focus of deep learning research: modulo computational practicalities, do wider networks outperform narrower ones? Answering this question has been challenging, as conventional networks gain representational power with width, potentially masking any negative effects. Our analysis in this paper decouples capacity and width via the generalization of neural networks to Deep Gaussian Processes (Deep GP), a class of nonparametric hierarchical models that subsume neural nets. In doing so, we aim to understand how width affects (standard) neural networks once they have sufficient capacity for a given modeling task. Our theoretical and empirical results on Deep GP suggest that large width can be detrimental to hierarchical models. Surprisingly, we prove that even nonparametric Deep GP converge to Gaussian processes, effectively becoming shallower without any increase in representational power. The posterior, which corresponds to a mixture of data-adaptable basis functions, becomes less data-dependent with width. Our tail analysis demonstrates that width and depth have opposite effects: depth accentuates a model’s non-Gaussianity, while width makes models increasingly Gaussian. We find there is a “sweet spot” that maximizes test performance before the limiting GP behavior prevents adaptability, occurring at width = 1 or width = 2 for nonparametric Deep GP. These results make strong predictions about the same phenomenon in conventional neural networks trained with L2 regularization (analogous to a Gaussian prior on parameters): we show that such neural networks may need up to 500 − 1000 hidden units for sufficient capacity - depending on the dataset - but further width degrades performance.
| accept | This paper leverages an analysis of deep Gaussian processes to argue that an excess increase in the width of a neural network can degrade performance. The question of whether or not neural networks benefit from increased width is an important and unresolved question in the literature, and the reviewers and I agree that this paper provides an additional and important perspective on the topic that will be of interest to the NeurIPS community. While some reviewers found the experimental results unconvincing and the conclusions somewhat speculative, others found the framework to be illuminating and to provide new perspectives on this important problem. Overall, I do not expect this or any paper to fully and unambiguously resolve all questions relating to the benefits/drawbacks of large width networks, and I believe this paper provides novel and useful insights that shed light on this important problem. Therefore, I recommend acceptance. | test | [
"7E2fPJYDaRZ",
"rNigy3aJwC",
"iOI1xI3NNLN",
"6doyyG3eJTZ",
"DnEdr-GRyG",
"blj46zBpj_-",
"bBsqhrzPtdv",
"EQET4Czz5-U",
"LuYgk-fnH7A",
"CQuOk5Ui_gS",
"abTc3haQJ2o",
"3ho-eA-5TDK",
"CODNUOj1Vi",
"jODJraveInH",
"sL1PozogBAg",
"3aSVDjN6_Kc"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors show that in the infinite width limit, deep GPs becomes a single-layer GP, generalizing the result that deep neural networks at initialization become a GP in the infinite width limit. The authors show that increasing depth of the GP makes the distribution of the final layer of deep GPs deviate from bei... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_YDepgWDUDXx",
"iOI1xI3NNLN",
"abTc3haQJ2o",
"DnEdr-GRyG",
"blj46zBpj_-",
"bBsqhrzPtdv",
"EQET4Czz5-U",
"CODNUOj1Vi",
"nips_2021_YDepgWDUDXx",
"LuYgk-fnH7A",
"3aSVDjN6_Kc",
"sL1PozogBAg",
"7E2fPJYDaRZ",
"nips_2021_YDepgWDUDXx",
"nips_2021_YDepgWDUDXx",
"nips_2021_YDepgWDUDXx"... |
nips_2021_MxE7xFzv0N8 | Exact marginal prior distributions of finite Bayesian neural networks | Jacob Zavatone-Veth, Cengiz Pehlevan | accept | This paper derives an exact expression for the function-space prior (marginally, for single outputs) induced by an independent Gaussian prior on the weights for deep linear and ReLU finite fully-connected feedforward NNs without biases. This can be viewed as Bayesian NN priors or NNs at initialisation for finite-width networks. This prior is expressed using the Meijer G-function for deep linear networks, and weighted sums of these functions for deep ReLU networks. This makes clear that at finite widths, the prior is increasingly heavy-tailed with increasing depth. Moreover, finite-width corrections are unable to capture this heavy-tailedness as it is not a perturbative effect.
All reviewers point out that the paper is well-written and clear. The work presented in the paper is also solid, and no noticeable issues or mistakes were identified. Reviewers mention that `this is the first exact characterization of the function-space prior of finite-width NN` (`jHrx`). It has been noted that the technical methodology advanced using the connection to Meijer G-functions is novel in the deep learning theory community.
A few weaknesses were pointed out. One significant one was the limitation to single inputs rather than multiple inputs. In this regard, reviewers `XUQc` and `yWcY` raised the concern that the current title is too broad, and the authors agreed to modify it to more precisely describe the contribution, settling on "Exact marginal prior distributions of finite Bayesian neural networks". Another limitation is the requirement of a particular architecture (deep linear, ReLU) and weight distribution (`ZUme`, `yWcY`, `XUQc`).
Overall, the AC believes this paper is a timely, worthwhile and significant step towards characterizing the prior distributions of finite networks despite some restrictive settings. The AC's opinion is that the limitations are outweighed by the interesting theoretical outcomes (voiced also by reviewers `yWcY` and `jHrx`), which open up new effects (e.g. heavy-tailedness) that are not capturable by perturbative corrections. There are many interesting questions and much follow-up work that this paper could ignite; thus it is worth sharing with the broad NeurIPS audience.
| train | [
"9AT1nweT8TH",
"TeDuj-Ezkjp",
"VPji4t5f4vE",
"MmIu6ckmK9M",
"_mEmce5_Fwj",
"sVgzjZIqjfy",
"gvH2c3Rsttr",
"hkcAANahLuB",
"RLZBl6gCSzt",
"IB4SVqF4Z_l",
"Dc9CWTfwGQT",
"iA2rsgaaPK8"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for their reply.\n\n--The proposed title change \"Exact marginal prior distributions of finite Bayesian neural networks\" sounds good to me.\n\n--Thank you for including the results from the deep GP literature in the revision; I have no further suggestions here.\n",
" Thanks for the res... | [
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"hkcAANahLuB",
"_mEmce5_Fwj",
"nips_2021_MxE7xFzv0N8",
"gvH2c3Rsttr",
"iA2rsgaaPK8",
"Dc9CWTfwGQT",
"VPji4t5f4vE",
"IB4SVqF4Z_l",
"nips_2021_MxE7xFzv0N8",
"nips_2021_MxE7xFzv0N8",
"nips_2021_MxE7xFzv0N8",
"nips_2021_MxE7xFzv0N8"
] |
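The companion qualitative claim in the record above — depth makes the single-input marginal prior heavier-tailed — can be checked by Monte Carlo on deep linear networks with iid Gaussian weights and no biases (the paper's setting; the width, depths, and sample count below are illustrative), without touching the Meijer G-function:

```python
import numpy as np

rng = np.random.default_rng(1)

def deep_linear_output(n_hidden, width=3, n_samples=100000):
    """Prior samples of f(x) for a deep linear net at scalar input x = 1."""
    z = np.ones((n_samples, 1))
    dims = [1] + [width] * n_hidden + [1]
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        W = rng.normal(size=(n_samples, d_out, d_in)) / np.sqrt(d_in)
        z = np.einsum("soi,si->so", W, z)    # one weight layer per sample
    return z[:, 0]

for n_hidden in [0, 1, 3, 6]:
    s = deep_linear_output(n_hidden)
    tail = np.mean(np.abs(s) > 3 * s.std())  # a Gaussian would give ~0.0027
    print(f"{n_hidden} hidden layers: P(|f| > 3*std) = {tail:.4f}")
```

The 3-sigma tail mass grows with depth — the non-perturbative heavy-tailedness that the paper's exact densities capture.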
nips_2021_Alr5_kKmLBX | Spatiotemporal Joint Filter Decomposition in 3D Convolutional Neural Networks | In this paper, we introduce spatiotemporal joint filter decomposition to decouple spatial and temporal learning, while preserving spatiotemporal dependency in a video. A 3D convolutional filter is now jointly decomposed over a set of spatial and temporal filter atoms respectively. In this way, a 3D convolutional layer becomes three: a temporal atom layer, a spatial atom layer, and a joint coefficient layer, all three remaining convolutional. One obvious arithmetic manipulation allowed in our joint decomposition is to swap spatial or temporal atoms with a set of atoms that have the same number but different sizes, while keeping the remaining unchanged. For example, as shown later, we can now achieve tempo-invariance by simply dilating temporal atoms only. To illustrate this useful atom-swapping property, we further demonstrate how such a decomposition permits the direct learning of 3D CNNs with full-size videos through iterations of two consecutive sub-stages of learning: In the temporal stage, full-temporal downsampled-spatial data are used to learn temporal atoms and joint coefficients while fixing spatial atoms. In the spatial stage, full-spatial downsampled-temporal data are used for spatial atoms and joint coefficients while fixing temporal atoms. We show empirically on multiple action recognition datasets that, the decoupled spatiotemporal learning significantly reduces the model memory footprints, and allows deep 3D CNNs to model high-spatial long-temporal dependency with limited computational resources while delivering comparable performance.
| accept | All three reviewers recommend acceptance (1 rating of 7, 2 ratings of 6).
Reviewer mC1w raises the initial rating from 6 to 7 because of the convincing additional results included in the author response. However, the Reviewer still recommends revising a claim and clarifying the contribution in the abstract and title. The experimental setup should be corrected to make the objective of the work clearer.
Reviewer Taqs asked several clarifying questions, which were well addressed in the author response. The rating was thus increased to 6.
Finally, based on the authors' feedback, Reviewer 3Etu acknowledges the memory reduction as an important improvement and raises the rating to 6.
The ACs concur with the acceptance recommendation made by the reviewers. | test | [
"zaXpUpUy_i",
"Q-7B7UTl_C6",
"LMZ59mUwnUY",
"7GAHFEWJj3-",
"MpCdEC_koJT",
"SjiSd6A6rbP",
"Pi-0_DtNTR2",
"NWcOMrJ-5v3",
"4fGzqGAKynE",
"b1UTFQW6jV",
"EJZHTXESM0e",
"NfaAvsvFUwV",
"ux-O3yXg_nW",
"ITuh3_Y9kL1",
"rlxOxfX4NXI"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" We are glad our response alleviates your concerns. We will include all the additional experiments in responses in the revision.\n\nThank you for your support for our response!",
"This paper presents a spatiotemporal joint filter decomposition for 3D CNN. Compared with the common 2D+1D decomposition, which is re... | [
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
4,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"LMZ59mUwnUY",
"nips_2021_Alr5_kKmLBX",
"NfaAvsvFUwV",
"Pi-0_DtNTR2",
"b1UTFQW6jV",
"nips_2021_Alr5_kKmLBX",
"NWcOMrJ-5v3",
"EJZHTXESM0e",
"nips_2021_Alr5_kKmLBX",
"ITuh3_Y9kL1",
"ux-O3yXg_nW",
"Q-7B7UTl_C6",
"SjiSd6A6rbP",
"4fGzqGAKynE",
"nips_2021_Alr5_kKmLBX"
] |
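The decomposition in the abstract above writes each 3D convolutional filter as a coefficient-weighted combination of shared temporal atoms and spatial atoms. A shape-level sketch (atom counts and kernel sizes are my assumptions), including the atom-swapping trick of dilating only the temporal atoms:

```python
import numpy as np

T, H, W = 3, 3, 3      # 3D kernel size (time, height, width)
m, k = 4, 6            # number of temporal atoms, spatial atoms

temporal_atoms = np.random.randn(m, T)        # shared temporal atoms
spatial_atoms = np.random.randn(k, H, W)      # shared spatial atoms
coeff = np.random.randn(m, k)                 # joint coefficients (per filter)

# W3d[t,h,w] = sum_{i,j} coeff[i,j] * temporal_atoms[i,t] * spatial_atoms[j,h,w]
W3d = np.einsum("ij,it,jhw->thw", coeff, temporal_atoms, spatial_atoms)
print("reconstructed 3D filter:", W3d.shape)          # (3, 3, 3)
print("dense params:", T * H * W, "| decomposed per-filter params:", m * k)

# Atom swapping: dilate temporal atoms only (zero insertion), leaving the
# spatial atoms and joint coefficients untouched
dilated = np.zeros((m, 2 * T - 1))
dilated[:, ::2] = temporal_atoms
W3d_slow = np.einsum("ij,it,jhw->thw", coeff, dilated, spatial_atoms)
print("tempo-dilated filter:", W3d_slow.shape)        # (5, 3, 3)
```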
nips_2021_1z2T01DKEaE | Pooling by Sliced-Wasserstein Embedding | Learning representations from sets has become increasingly important with many applications in point cloud processing, graph learning, image/video recognition, and object detection. We introduce a geometrically-interpretable and generic pooling mechanism for aggregating a set of features into a fixed-dimensional representation. In particular, we treat elements of a set as samples from a probability distribution and propose an end-to-end trainable Euclidean embedding for sliced-Wasserstein distance to learn from set-structured data effectively. We evaluate our proposed pooling method on a wide variety of set-structured data, including point-cloud, graph, and image classification tasks, and demonstrate that our proposed method provides superior performance over existing set representation learning approaches. Our code is available at https://github.com/navid-naderi/PSWE.
| accept | All the reviewers are positive about the work and also appreciated the rebuttal, where extra numerical experiments were promised, which I strongly encourage the authors to add. An in-depth discussion of the variations in relative performance with respect to GAP also seems important. As a small side note, the notion of SW was introduced in [38] and not [36, 37] as the end of Section 3.1 seems to imply (in particular, [38] is not concerned with generalized SW). | train | [
"yK8yGL4-ctl",
"hKZ36GT-9Iu",
"nGg5VANhqIX",
"b4qRI315yv4",
"DBvDISi6m5z",
"pqLun9tCgRq",
"PfzHhBUPDTE",
"qU7FPd4hRMJ",
"sje6OvdxUz8",
"U4wLyqGmqhS",
"Arowkga4_Xw",
"0cQN8l6cv8g",
"pDAWZXsfUJI",
"DgWclJYAYi",
"fqahzeX3DPL",
"3SlEB45fWIW",
"oRQpxzAzd0",
"zOKxBp-KlD"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to follow up on our discussion to resolve any remaining concerns you might have about our work, and we would appreciate it if you could please increase your rating as you indicated.",
" Thank you very much for your positive evaluation of our work and our responses to your comments.",
" Thank you... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"U4wLyqGmqhS",
"b4qRI315yv4",
"DBvDISi6m5z",
"DgWclJYAYi",
"fqahzeX3DPL",
"sje6OvdxUz8",
"U4wLyqGmqhS",
"nips_2021_1z2T01DKEaE",
"pDAWZXsfUJI",
"zOKxBp-KlD",
"0cQN8l6cv8g",
"zOKxBp-KlD",
"qU7FPd4hRMJ",
"oRQpxzAzd0",
"3SlEB45fWIW",
"nips_2021_1z2T01DKEaE",
"nips_2021_1z2T01DKEaE",
"... |
nips_2021_-uFBxNwRHa2 | On the Theory of Reinforcement Learning with Once-per-Episode Feedback | We study a theory of reinforcement learning (RL) in which the learner receives binary feedback only once at the end of an episode. While this is an extreme test case for theory, it is also arguably more representative of real-world applications than the traditional requirement in RL practice that the learner receive feedback at every time step. Indeed, in many real-world applications of reinforcement learning, such as self-driving cars and robotics, it is easier to evaluate whether a learner's complete trajectory was either "good" or "bad," but harder to provide a reward signal at each step. To show that learning is possible in this more challenging setting, we study the case where trajectory labels are generated by an unknown parametric model, and provide a statistically and computationally efficient algorithm that achieves sublinear regret.
| accept | The reviewers were overall ambivalent about this paper, even after the rebuttal and discussion.
The reviewers agreed that the proposed model is new, interesting, and that the authors provide the first regret bound for this formulation.
However, they were also concerned by a number of aspects of this work:
* several assumptions seem unnatural and are hard to verify without an extensive discussion of their meaning (e.g., the kappa parameter or the decomposability assumption)
* some choices in the formulation seem unmotivated (beyond being required by the analysis) and were not properly explained in the rebuttal (e.g., the nature of the reward)
* Alg 1 is intractable, the complexity of Alg 2 is still unclear after the rebuttal (while the regret dependence on the number of episodes is quite poor), and the algorithm in the experimental section bears little resemblance to the previous ones.
I think that a revision addressing these points would require another round of review, and therefore recommend rejecting this paper. | train | [
"ySAIDAVGDaW",
"eQN26rRZy6A",
"JlQXVumy8O",
"QW_J_tQtGxM",
"Cq8_ab2J1F7",
"yug64iW_tWi",
"2lOXvqavcY-",
"DTR68Q4FuCp",
"nrCFMi6bTSY",
"0cT-VUwxY_3",
"--cjOn2cyHS",
"Pi1AIlwhl9r",
"5JtERBZjx4",
"gOLmmh4Jvm",
"FZ_H0s8pVMX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper tackles RL problems where feedback is received only at the end of each episode. Specifically, they extend the trajectory-feedback model of Efroni et al.[10] in two aspects: first, the reward is logistic, and not the sum of state-action rewards, and second, they allow per-trajectory feature representation... | [
5,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_-uFBxNwRHa2",
"nips_2021_-uFBxNwRHa2",
"--cjOn2cyHS",
"Pi1AIlwhl9r",
"5JtERBZjx4",
"2lOXvqavcY-",
"DTR68Q4FuCp",
"nrCFMi6bTSY",
"0cT-VUwxY_3",
"FZ_H0s8pVMX",
"gOLmmh4Jvm",
"ySAIDAVGDaW",
"eQN26rRZy6A",
"nips_2021_-uFBxNwRHa2",
"nips_2021_-uFBxNwRHa2"
] |
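One natural instantiation of the once-per-episode feedback model in the abstract above: a binary label per trajectory drawn from a logistic model of a trajectory feature vector, so that estimating the unknown parameter reduces to logistic regression over completed episodes. The feature map, dimensions, and optimizer settings here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_episodes = 5, 5000
theta_star = rng.normal(size=d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one feature vector per trajectory (e.g., summed state-action features)
Phi = rng.normal(size=(n_episodes, d))
labels = rng.random(n_episodes) < sigmoid(Phi @ theta_star)  # end-of-episode bit

# maximum-likelihood estimate of theta via gradient ascent
theta = np.zeros(d)
for _ in range(500):
    grad = Phi.T @ (labels - sigmoid(Phi @ theta)) / n_episodes
    theta += 1.0 * grad
print("||theta_hat - theta*|| =", np.linalg.norm(theta - theta_star))
```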
nips_2021_IROqhpEha8 | ResNEsts and DenseNEsts: Block-based DNN Models with Improved Representation Guarantees | Models recently used in the literature proving residual networks (ResNets) are better than linear predictors are actually different from standard ResNets that have been widely used in computer vision. In addition to the assumptions such as scalar-valued output or single residual block, the models fundamentally considered in the literature have no nonlinearities at the final residual representation that feeds into the final affine layer. To codify such a difference in nonlinearities and reveal a linear estimation property, we define ResNEsts, i.e., Residual Nonlinear Estimators, by simply dropping nonlinearities at the last residual representation from standard ResNets. We show that wide ResNEsts with bottleneck blocks can always guarantee a very desirable training property that standard ResNets aim to achieve, i.e., adding more blocks does not decrease performance given the same set of basis elements. To prove that, we first recognize ResNEsts are basis function models that are limited by a coupling problem in basis learning and linear prediction. Then, to decouple prediction weights from basis learning, we construct a special architecture termed augmented ResNEst (A-ResNEst) that always guarantees no worse performance with the addition of a block. As a result, such an A-ResNEst establishes empirical risk lower bounds for a ResNEst using corresponding bases. Our results demonstrate ResNEsts indeed have a problem of diminishing feature reuse; however, it can be avoided by sufficiently expanding or widening the input space, leading to the above-mentioned desirable property. Inspired by the densely connected networks (DenseNets) that have been shown to outperform ResNets, we also propose a corresponding new model called Densely connected Nonlinear Estimator (DenseNEst). We show that any DenseNEst can be represented as a wide ResNEst with bottleneck blocks. Unlike ResNEsts, DenseNEsts exhibit the desirable property without any special architectural re-design.
| accept | This work studies the representational power of ResNets and aims to answer the question of when adding more residual blocks in a ResNet can be guaranteed to not lead to a degradation in performance. All 5 reviewers find the paper interesting and think its contribution is a valuable addition to the literature. All reviewers were active in discussion with the authors and with the committee. No serious concerns surfaced that would invalidate the contribution made by the paper. After committee discussion the reviewers reached a consensus recommendation to accept the paper.
Multiple reviewers indicated that the author response was especially useful and served to clear up some minor concerns that existed after the first round of reviews. Authors, please integrate this discussion into the camera ready version of the paper. | train | [
"kt7m64X3PN",
"ZLOSjm828Mi",
"tj7_svyveuF",
"nglS_GZ5Ken",
"IMjIf3sF-hA",
"RByGg7lPn8o",
"KF4ZuPU2F7o",
"Lclu9l_aCgs",
"Tds4IjXae2I",
"3A47vmF7eri",
"91FThl4a9Fr",
"src9-hj8dln",
"Fg9aImtr7_T",
"d25OoSJ4oK8",
"UnQQz7yJSi2",
"sRqRmGfO1ge",
"A1k6JcclNNE",
"TZhcTivMyU_",
"GiEdq3zJ3m... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"... | [
" We thank the reviewer for the positive feedback. The experiments and their results will be described and included in the supplementary material (Appendix). In the main text, we will point out these empirical findings and refer the reader to the Appendix.\n\nWe thank the reviewer for the additional comment. We wil... | [
-1,
8,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
-1,
2,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"tj7_svyveuF",
"nips_2021_IROqhpEha8",
"8hUFsF_giS",
"nips_2021_IROqhpEha8",
"w1Qmauz9oDD",
"ZLOSjm828Mi",
"Lclu9l_aCgs",
"sRqRmGfO1ge",
"d25OoSJ4oK8",
"UnQQz7yJSi2",
"HZ9e15mS8SI",
"nips_2021_IROqhpEha8",
"nips_2021_IROqhpEha8",
"A1k6JcclNNE",
"GiEdq3zJ3mx",
"nglS_GZ5Ken",
"Fg9aImtr... |
nips_2021_KfC0i9Hjvl2 | Locally private online change point detection | Tom Berrett, Yi Yu | accept | This paper studies change point detection under the constraint of local differential privacy (LDP). Specifically, at each time point t, a new user arrives with data of the form (X_t, Y_t), where X_t is a feature vector and Y_t is a response variable. The goal is to detect changes in the regression function as soon as the change occurs. To achieve LDP, the proposed method first uses a locally private mechanism to transform the data point (x,y) into another random pair and then uses an existing change point detection algorithm to detect the change. The reviewers agree that while this is a solid paper, it just misses the bar for NeurIPS. | train | [
"RKAJ3N76MhV",
"loDOcve1OHe",
"8bJ5XBzQS36",
"PUDK35Ug8Dw",
"SjEbsOcDaiY",
"gmeu76QUUBd",
"mNSiwpt1_yc",
"vXfmHi2XWgh",
"DBubsg75R8p",
"_BpLwPb3mT9"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for considering our rebuttal and engaging with the discussion.\n \nWe agree that the need to choose tuning parameters is a drawback of our method, but this is a ubiquitous problem in both nonparametric statistics and all online change point detection problems. The focus of our ... | [
-1,
6,
-1,
5,
-1,
-1,
-1,
-1,
3,
7
] | [
-1,
2,
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"PUDK35Ug8Dw",
"nips_2021_KfC0i9Hjvl2",
"mNSiwpt1_yc",
"nips_2021_KfC0i9Hjvl2",
"loDOcve1OHe",
"PUDK35Ug8Dw",
"_BpLwPb3mT9",
"DBubsg75R8p",
"nips_2021_KfC0i9Hjvl2",
"nips_2021_KfC0i9Hjvl2"
] |
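The meta review above describes a two-stage pipeline: privatize each arriving pair with a local mechanism, then run a change point detector on the privatized stream. A minimal stand-in with a Laplace mechanism on bounded responses and a two-window scan statistic follows; the paper's procedure is online with calibrated thresholds, whereas the offline scan, budget, and window size here are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1.0                        # per-user local privacy budget
T, cp = 1000, 600                # stream length and true change point

def privatize(y, eps, rng):
    """Laplace mechanism for a response y in [0, 1] (sensitivity 1)."""
    return y + rng.laplace(scale=1.0 / eps)

raw = (rng.random(T) < np.where(np.arange(T) < cp, 0.2, 0.8)).astype(float)
priv = np.array([privatize(y, eps, rng) for y in raw])

# the server sees only `priv`; a simple two-window scan statistic still
# localizes the mean shift, at a cost in resolution due to privacy noise
w = 150
scores = [abs(priv[t - w:t].mean() - priv[t:t + w].mean())
          for t in range(w, T - w)]
print("estimated change point:", w + int(np.argmax(scores)), "truth:", cp)
```

The Laplace noise inflates each observation's variance by 2/eps^2, which is the kind of cost the paper's theory quantifies as extra detection delay.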
nips_2021_jlchsFOLfeF | Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization | The invariance principle from causality is at the heart of notable approaches such as invariant risk minimization (IRM) that seek to address out-of-distribution (OOD) generalization failures. Despite the promising theory, invariance principle-based approaches fail in common classification tasks, where invariant (causal) features capture all the information about the label. Are these failures due to the methods failing to capture the invariance? Or is the invariance principle itself insufficient? To answer these questions, we revisit the fundamental assumptions in linear regression tasks, where invariance-based approaches were shown to provably generalize OOD. In contrast to the linear regression tasks, we show that for linear classification tasks we need much stronger restrictions on the distribution shifts, or otherwise OOD generalization is impossible. Furthermore, even with appropriate restrictions on distribution shifts in place, we show that the invariance principle alone is insufficient. We prove that a form of the information bottleneck constraint along with invariance helps address the key failures when invariant features capture all the information about the label and also retains the existing success when they do not. We propose an approach that incorporates both of these principles and demonstrate its effectiveness in several experiments.
| accept | After substantial discussion, including additions of new experimental results from the authors, reviewers have come to a consensus that this paper provides interesting and relevant theoretical results, as well as sufficient empirical evidence supporting the theory, to merit acceptance. We expect that this work will appeal to the ERM, IRM, and Information Bottleneck communities. We particularly think that the exponential speedup of convergence of the IB-based methods will be of theoretical and practical interest. We look forward to reading the updated version of the paper with the additions and revisions the authors have promised. | train | [
"-yRL4-jnlJY",
"NEfPz68ozQ1",
"9I6K7CSSa1o",
"7D9U1jRUB7I",
"WBx1vDx6Z_D",
"-pGMUFsud51",
"hveQAx6A4Yy",
"9aWS9R2p0X",
"88nlWlJ-gBN",
"QCbT-ULTO1y",
"E7ONXjXU3F",
"JcBwakz1nur",
"qpIdzPgokgv",
"NHuVaq2F5Tv",
"-mEBYpydpmi",
"3m3I0KM4sfV",
"KgFdI-cX1St",
"FRUTzrMmS3f",
"A8A8pb5bdJD... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Thank you for appreciating our efforts and raising your score. We will incorporate the suggested changes in the manuscript. ",
" Hi \n\n\nThank you for your question. There are four environments in Terra incognita, which are called L100, L38, L43, L46. Three environments can be used for training and fourth is u... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"WBx1vDx6Z_D",
"9I6K7CSSa1o",
"hveQAx6A4Yy",
"nips_2021_jlchsFOLfeF",
"hveQAx6A4Yy",
"7D9U1jRUB7I",
"7D9U1jRUB7I",
"QCbT-ULTO1y",
"JcBwakz1nur",
"-mEBYpydpmi",
"nips_2021_jlchsFOLfeF",
"KgFdI-cX1St",
"NHuVaq2F5Tv",
"3m3I0KM4sfV",
"E7ONXjXU3F",
"7D9U1jRUB7I",
"hMOWFBNKfC",
"A8A8pb5b... |
nips_2021_LAKplpLMbP8 | Repulsive Deep Ensembles are Bayesian | Deep ensembles have recently gained popularity in the deep learning community for their conceptual simplicity and efficiency. However, maintaining functional diversity between ensemble members that are independently trained with gradient descent is challenging. This can lead to pathologies when adding more ensemble members, such as a saturation of the ensemble performance, which converges to the performance of a single model. Moreover, this does not only affect the quality of its predictions, but even more so the uncertainty estimates of the ensemble, and thus its performance on out-of-distribution data. We hypothesize that this limitation can be overcome by discouraging different ensemble members from collapsing to the same function. To this end, we introduce a kernelized repulsive term in the update rule of the deep ensembles. We show that this simple modification not only enforces and maintains diversity among the members but, even more importantly, transforms the maximum a posteriori inference into proper Bayesian inference. Namely, we show that the training dynamics of our proposed repulsive ensembles follow a Wasserstein gradient flow of the KL divergence with the true posterior. We study repulsive terms in weight and function space and empirically compare their performance to standard ensembles and Bayesian baselines on synthetic and real-world prediction tasks.
| accept | In this paper the authors proposed the addition of a repulsive term to the training objective of deep ensembles, based on Stein variational inference. They show that this turns the standard deep ensemble training into an approximation to the Bayesian posterior. The reviewers found the proposed method compelling and found the paper to be of high writing and technical quality. They also found the work timely and that it addresses an important open problem. Thus the recommendation is to accept.
| train | [
"n78KXNJHaSo",
"Q0_V7xFnTCr",
"6FOVFjdNGO",
"Dw5-n9ZAyaO",
"Sw3iDXfsx2",
"avnJFL1Oxrl",
"HnAG5qxCZAu",
"UYhNFvn9fk3",
"z4xo8dzyQGn",
"ud9WHvBjbP",
"zk2jwu5sN7d"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Maintaining functional diversity between ensemble members are a large, and still not completely solved problem. The authors are introducing a kernelized repulsive term to the update rule of an ensemble. They show empirically that this increases the functional diversity between ensemble members. The authors also de... | [
7,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_LAKplpLMbP8",
"HnAG5qxCZAu",
"nips_2021_LAKplpLMbP8",
"avnJFL1Oxrl",
"z4xo8dzyQGn",
"6FOVFjdNGO",
"n78KXNJHaSo",
"ud9WHvBjbP",
"zk2jwu5sN7d",
"nips_2021_LAKplpLMbP8",
"nips_2021_LAKplpLMbP8"
] |
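The abstract of the record above describes adding a kernelized repulsive term to the deep-ensemble update rule so that members are pushed apart while still following the posterior gradient. Below is a weight-space sketch of one such update with an RBF kernel, where each member's repulsion is normalized by the local kernel density; the fixed bandwidth `h`, the learning rate, and the toy target are assumptions, and the paper's precise drift comes from its Wasserstein gradient-flow derivation rather than this sketch.

```python
import numpy as np

def rbf_kernel_grads(W, h=1.0):
    """Pairwise RBF kernel values and gradients for n weight vectors.

    W: (n, d) array of flattened ensemble-member weights. Returns
    K (n, n) and dK (n, n, d) with dK[i, j] = grad_{w_i} k(w_i, w_j).
    """
    diff = W[:, None, :] - W[None, :, :]             # (n, n, d)
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))  # (n, n)
    dK = -diff / h ** 2 * K[..., None]               # (n, n, d)
    return K, dK

def repulsive_step(W, grad_log_post, lr=1e-2, h=1.0):
    """One kernelized repulsive update in weight space.

    Each member follows the gradient of the log posterior plus a
    repulsion away from the other members, normalized by the local
    kernel density so that crowded members are pushed apart harder.
    """
    K, dK = rbf_kernel_grads(W, h)
    repulsion = dK.sum(axis=1) / K.sum(axis=1, keepdims=True)  # (n, d)
    return W + lr * (grad_log_post(W) - repulsion)

# Toy usage: 10 two-dimensional "members" targeting a standard normal,
# whose log-density gradient is simply -w.
W = np.random.default_rng(1).normal(size=(10, 2))
for _ in range(500):
    W = repulsive_step(W, grad_log_post=lambda w: -w)
```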
nips_2021_aSjbPcve-b | BayesIMP: Uncertainty Quantification for Causal Data Fusion | While causal models are becoming one of the mainstays of machine learning, the problem of uncertainty quantification in causal inference remains challenging. In this paper, we study the causal data fusion problem, where data arising from multiple causal graphs are combined to estimate the average treatment effect of a target variable. As data arises from multiple sources and can vary in quality and sample size, principled uncertainty quantification becomes essential. To that end, we introduce \emph{Bayesian Causal Mean Processes}, the framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space, while taking into account the uncertainty within each causal graph. To demonstrate the informativeness of our uncertainty estimation, we apply our method to the Causal Bayesian Optimisation task and show improvements over state-of-the-art methods.
| accept | All four reviewers advocate acceptance. I also recommend accepting the paper for its contributions to the emerging field of Bayesian causal inference. | test | [
"KnuiXJN2qqV",
"5vWulFqk3Xs",
"2nRYxfxolo",
"1SFZjSZGLNu",
"Uqzl3C1bTF7",
"k77JA8f0_gT",
"iL_J1C97Fd9",
"m8hTqPsSrJ",
"jMxWNLaPaDf",
"kxPEMJQOBT",
"duy3C_kYdDT",
"AWmz5uNEe3A"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a Bayesian approach for estimating the treatment effect by combining two data sources. The main advantage of the Bayesian approach over the frequentist counterpart [17] is its ability to quantify uncertainty. The main machinery was GP which replaces KRR used in existing works. The paper is ver... | [
6,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_aSjbPcve-b",
"jMxWNLaPaDf",
"k77JA8f0_gT",
"nips_2021_aSjbPcve-b",
"KnuiXJN2qqV",
"iL_J1C97Fd9",
"1SFZjSZGLNu",
"AWmz5uNEe3A",
"duy3C_kYdDT",
"nips_2021_aSjbPcve-b",
"nips_2021_aSjbPcve-b",
"nips_2021_aSjbPcve-b"
] |
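The abstract of the record above combines data from multiple causal graphs and asks for principled uncertainty on the resulting treatment effect. As a rough intuition pump only, the sketch below composes two independent GP regressions, one per simulated data source, and propagates both layers of predictive uncertainty by Monte Carlo; the simulated graphs, kernels, and noise levels are invented for illustration, and this is not the paper's kernel-mean-embedding construction (Bayesian Causal Mean Processes).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Two simulated sources of different size/quality: source 1 links the
# treatment X to a mediator Z; source 2 links Z to the outcome Y.
X1 = rng.uniform(-2, 2, (100, 1))
Z1 = np.sin(X1).ravel() + 0.1 * rng.normal(size=100)
Z2 = rng.uniform(-1.5, 1.5, (40, 1))
Y2 = (Z2 ** 2).ravel() + 0.2 * rng.normal(size=40)

gp_z = GaussianProcessRegressor(RBF(), alpha=1e-2).fit(X1, Z1)
gp_y = GaussianProcessRegressor(RBF(), alpha=4e-2).fit(Z2, Y2)

def ate_draws(x, n_samples=500):
    """Monte-Carlo draws of E[Y | do(X = x)] by composing the two GPs,
    propagating predictive uncertainty from both data sources."""
    z = gp_z.sample_y(np.atleast_2d(x), n_samples, random_state=0).ravel()
    m, s = gp_y.predict(z.reshape(-1, 1), return_std=True)
    return m + s * rng.normal(size=m.shape)

draws = ate_draws(1.0)
print(draws.mean(), draws.std())  # point estimate and its uncertainty
```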