paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_h37KyWDDC6B | Finding Optimal Arms in Non-stochastic Combinatorial Bandits with Semi-bandit Feedback and Finite Budget | We consider the combinatorial bandits problem with semi-bandit feedback under finite sampling budget constraints, in which the learner can carry out its action only for a limited number of times specified by an overall budget. The action is to choose a set of arms, whereupon feedback for each arm in the chosen set is received. Unlike existing works, we study this problem in a non-stochastic setting with subset-dependent feedback, i.e., the semi-bandit feedback received could be generated by an oblivious adversary and also might depend on the chosen set of arms. In addition, we consider a general feedback scenario covering both the numerical-based as well as preference-based case and introduce a sound theoretical framework for this setting guaranteeing sensible notions of optimal arms, which a learner seeks to find. We suggest a generic algorithm suitable to cover the full spectrum of conceivable arm elimination strategies from aggressive to conservative. Theoretical questions about the sufficient and necessary budget of the algorithm to find the best arm are answered and complemented by deriving lower bounds for any learning algorithm for this problem scenario. | Accept | The reviewers came to consensus that this paper has a good contribution to the study of pure exploration for the combinatorial bandits. On the other hand, several minor concerns such as the motivation, experiments and algorithmic novelty, are raised. I agree that these concerns are reasonable and please polish the manuscript by addressing these points in the final version. | train | [
"50ylT4_Suy",
"Jfn6lEHMzwP",
"TT_j0CBAKMD",
"QKyzti-esPE",
"hsxp1I3MUNl",
"6u4AggRODs4",
"7QZsqC8gVZv"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comments. I will keep my score. ",
" Thank you for your helpful and valuable comments. We are very pleased that you appreciate the impact of our theoretical results and that you find our paper well-organized. We will address your open questions and concerns in the following.\n\n### Bad format... | [
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
2,
3
] | [
"Jfn6lEHMzwP",
"7QZsqC8gVZv",
"6u4AggRODs4",
"hsxp1I3MUNl",
"nips_2022_h37KyWDDC6B",
"nips_2022_h37KyWDDC6B",
"nips_2022_h37KyWDDC6B"
] |
nips_2022_APXedc0hgdT | Rate-Distortion Theoretic Bounds on Generalization Error for Distributed Learning | In this paper, we use tools from rate-distortion theory to establish new upper bounds on the generalization error of statistical distributed learning algorithms. Specifically, there are $K$ clients whose individually chosen models are aggregated by a central server. The bounds depend on the compressibility of each client's algorithm while keeping other clients' algorithms un-compressed, and leveraging the fact that small changes in each local model change the aggregated model by a factor of only $1/K$. Adopting a recently proposed approach by Sefidgaran et al., and extending it suitably to the distributed setting, enables smaller rate-distortion terms which are shown to translate into tighter generalization bounds. The bounds are then applied to the distributed support vector machines (SVM), suggesting that the generalization error of the distributed setting decays faster than that of the centralized one with a factor of $\mathcal{O}(\sqrt{\log(K)/K})$. This finding is validated also experimentally. A similar conclusion is obtained for a multiple-round federated learning setup where each client uses stochastic gradient Langevin dynamics (SGLD). | Accept | The paper investigates generalization error bounds of distributed learning algorithms via rate-distortion theory, and compares the theoretical results with existing information-theoretical generalization bounds. An interesting implication of the authors' theory is the advantage of distributed learning with multiple clients as opposed to a centralized algorithm. This applies, in particular, to distributed SVM and distributed SGD, under some variance regimes. The paper strengthens the literature in several respects by removing or relaxing assumptions.
All in all, an interesting piece of work that has improved during the discussion period. The authors are committed to re-organizing their submission according to what they proposed during the discussion.
| train | [
"wFL_Beepty",
"P_0LeSg1U5I2",
"eUjUDOc60dt",
"-PwCOfnuOl4",
"8DFlEOPUkT",
"VDRlwMUYNNg",
"__QwJoDg2Pv",
"aiWJyw5Z62",
"-raf7bS5J4s",
"dMwko3ksAq",
"hpMHSozquvM",
"Tdu7vrdg4OH"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Once again we thank the reviewers for their useful comments which helped improve the quality of our paper. We have uploaded a revised version of the paper in which we addressed all the concerns and suggestions of the reviewers. In particular, as suggested by Reviewer 8roe we have added a new 3-page length section... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2022_APXedc0hgdT",
"nips_2022_APXedc0hgdT",
"Tdu7vrdg4OH",
"8DFlEOPUkT",
"hpMHSozquvM",
"__QwJoDg2Pv",
"aiWJyw5Z62",
"dMwko3ksAq",
"nips_2022_APXedc0hgdT",
"nips_2022_APXedc0hgdT",
"nips_2022_APXedc0hgdT",
"nips_2022_APXedc0hgdT"
] |
nips_2022_81LQV4k7a7X | ReFactor GNNs: Revisiting Factorisation-based Models from a Message-Passing Perspective | Factorisation-based Models (FMs), such as DistMult, have enjoyed enduring success for Knowledge Graph Completion (KGC) tasks, often outperforming Graph Neural Networks (GNNs). However, unlike GNNs, FMs struggle to incorporate node features and generalise to unseen nodes in inductive settings. Our work bridges the gap between FMs and GNNs by proposing ReFactor GNNs. This new architecture draws upon $\textit{both}$ modelling paradigms, which previously were largely thought of as disjoint. Concretely, using a message-passing formalism, we show how FMs can be cast as GNNs by reformulating the gradient descent procedure as message-passing operations, which forms the basis of our ReFactor GNNs. Across a multitude of well-established KGC benchmarks, our ReFactor GNNs achieve comparable transductive performance to FMs, and state-of-the-art inductive performance while using an order of magnitude fewer parameters. | Accept | the paper proposes ReFactorGNNs, a unified view on factorization-based models (FMs) and graph neural networks (GNNs). It shows how to how FMs can be cast as GNNs by reformulating the gradient descent
procedure as message-passing operations. The reviewers agree that the idea is "intuitive and yet very clever at the same time", as one of the reviewers puts it. Connecting FMs such as DistMult with GNNs is great, and the paper is well written and presented. The one more negative reviewer expresses low confidence and provides an informed outsider view on the paper, which is actually rather positive.
I fully agree though I would also like to provide some more links to related work:
https://arxiv.org/abs/1509.08535
https://forum.stanford.edu/events/posterslides/MessagePassingforMatrixFactorization.pdf
http://pfister.ee.duke.edu/papers/Kim-istc10.pdf
Moreover, another indication is that transformers are quite expressive, and transformers are GNNs. I would like to encourage the authors to provide a more high-level view on the problem, and then argue that the presented approach makes this intuition very concrete. In addition, it would be great to discuss more future work. For instance, it seems possible now to learn new FM approaches automatically from data. Kind of "FM discovery" or "Learning to Factorize". This is all very exciting.
"yfUv3pchFA",
"wbMGzrPhiva",
"6oQtQ3m3s7H",
"t6sH2ub68Y9",
"kODdvkAmID4",
"oTRUKSDpGQH",
"tz0jrqOA6AQ",
"hpvHqmevhw",
"59bmoIdM3ps",
"Wq6OdjoOkCf",
"WmRp_xpJ9B5",
"5sBQUJtE8hrX",
"VY229GDTEnp",
"IDW6TTXB5Ep",
"hsVbIdRCaJA",
"WVvFgfIodP2"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their time, and we really appreciate the increment in the rating. If there is anything that the reviewer feels is still a gap, we are happy to discuss further!",
" We thank the reviewer for their time, and we are very grateful for the increment in the confidence score.",
" I would li... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1
] | [
"6oQtQ3m3s7H",
"t6sH2ub68Y9",
"Wq6OdjoOkCf",
"5sBQUJtE8hrX",
"nips_2022_81LQV4k7a7X",
"WmRp_xpJ9B5",
"VY229GDTEnp",
"59bmoIdM3ps",
"WVvFgfIodP2",
"WmRp_xpJ9B5",
"hsVbIdRCaJA",
"VY229GDTEnp",
"IDW6TTXB5Ep",
"nips_2022_81LQV4k7a7X",
"nips_2022_81LQV4k7a7X",
"nips_2022_81LQV4k7a7X"
] |
nips_2022_MloVsjTjlUY | Local Metric Learning for Off-Policy Evaluation in Contextual Bandits with Continuous Actions | We consider local kernel metric learning for off-policy evaluation (OPE) of deterministic policies in contextual bandits with continuous action spaces. Our work is motivated by practical scenarios where the target policy needs to be deterministic due to domain requirements, such as prescription of treatment dosage and duration in medicine. Although importance sampling (IS) provides a basic principle for OPE, it is ill-posed for the deterministic target policy with continuous actions. Our main idea is to relax the target policy and pose the problem as kernel-based estimation, where we learn the kernel metric in order to minimize the overall mean squared error (MSE). We present an analytic solution for the optimal metric, based on the analysis of bias and variance. Whereas prior work has been limited to scalar action spaces or kernel bandwidth selection, our work takes a step further being capable of vector action spaces and metric optimization. We show that our estimator is consistent, and significantly reduces the MSE compared to baseline OPE methods through experiments on various domains. | Accept | Reviewers and AC are satisfied by the authors' response and think the paper should be accepted. | train | [
"ZLi0xExddiN",
"rs5Kq3pl1Gd",
"cPR2ihFbXr",
"jNG7aT3A26P",
"5rcKzx0oZQ5",
"z40u_cKglye",
"Bt1dHnSjZh",
"ToUvorblexw",
"HFDphaU68P7",
"CgD3FYSoQB",
"R4mKrYJSBVp",
"jQMVhwaDTVV",
"7vhjkZrh3Bg",
"9B6wcSLvCOF",
"CdMJOppoutc",
"8ys2nL3v4i"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers, we derived tighter error bound and corresponding convergence speed of our estimator when there is an error in the reward regressor used for a Hessian estimation in our algorithm and revised Appendix A.10 \"Effect of Error in a Reward Regression (Hessian Estimator for KMIS) on the Estimation Error ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2,
4
] | [
"nips_2022_MloVsjTjlUY",
"HFDphaU68P7",
"8ys2nL3v4i",
"CdMJOppoutc",
"8ys2nL3v4i",
"8ys2nL3v4i",
"7vhjkZrh3Bg",
"CdMJOppoutc",
"9B6wcSLvCOF",
"jQMVhwaDTVV",
"nips_2022_MloVsjTjlUY",
"nips_2022_MloVsjTjlUY",
"nips_2022_MloVsjTjlUY",
"nips_2022_MloVsjTjlUY",
"nips_2022_MloVsjTjlUY",
"nip... |
nips_2022_edkno3SvKo | Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning | We study distributed optimization methods based on the {\em local training (LT)} paradigm, i.e., methods which achieve communication efficiency by performing richer local gradient-based training on the clients before (expensive) parameter averaging is allowed to take place. While these methods were first proposed about a decade ago, and form the algorithmic backbone of federated learning, there is an enormous gap between their practical performance, and our theoretical understanding. Looking back at the progress of the field, we {\em identify 5 generations of LT methods}: 1) heuristic, 2) homogeneous, 3) sublinear, 4) linear, and 5) accelerated. The 5${}^{\rm th}$ generation was initiated by the ProxSkip method of Mishchenko et al. (2022), whose analysis provided the first theoretical confirmation that LT is a communication acceleration mechanism. Inspired by this recent progress, we contribute to the 5${}^{\rm th}$ generation of LT methods by showing that it is possible to enhance ProxSkip further using {\em variance reduction}. While all previous theoretical results for LT methods ignore the cost of local work altogether, and are framed purely in terms of the number of communication rounds, we construct a method that can be substantially faster in terms of the {\em total training time} than the state-of-the-art method ProxSkip in theory and practice in the regime when local computation is sufficiently expensive. We characterize this threshold theoretically, and confirm our theoretical predictions with empirical results. Our treatment of variance reduction is generic, and can work with a large number of variance reduction techniques, which may lead to future applications in the future. Finally, we corroborate our theoretical results with carefully engineered proof-of-concept experiments. | Accept | The paper proposes a new optimization algorithm for distributed learning with applications to federated learning. The results are interesting and I recommend acceptance. This paper benefited considerably during the author rebuttal phase and I strongly urge the authors to incorporate all the reviewer suggestions and the author clarifications in the final version of the paper. | train | [
"XJbiQOl6OVg",
"2gCPFguqzug",
"yOqiLrrWMl2",
"W99xv01GBKpY",
"7S97ne0-Xfm",
"TOqhvFUEQQm",
"6Fx0ZQdxrk9",
"qfa1xwzmEE",
"hKnIiGNs-N",
"J-gn1qZHq-V",
"MsRDwrJ2nKt",
"wTSfuw1qwZ4",
"J483cOEn4S5",
"VSgT4lpz3BR",
"XZAQmdQhMsE",
"HrToUA3ib1S",
"aPbjmm2uzi",
"5TzUauloo-",
"4VRxqIXQ7sU"... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Thanks for your detailed response and discussion, and I have raised my score.",
" Thanks a lot for your time and efforts and for checking this again!\n\nauthors",
" Thanks for clarifying this. My mistake for not carefully check this. Now when I read the code again, you first initialize the loss with $\\ell2=0... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"JoysD2aCTJC",
"yOqiLrrWMl2",
"CH6b8r7CMwS",
"x1rdpL4zwYg",
"J483cOEn4S5",
"MsRDwrJ2nKt",
"MsRDwrJ2nKt",
"wTSfuw1qwZ4",
"J-gn1qZHq-V",
"bqRBhmVCBqq",
"sdyYHxTbySX",
"wYG5fk4yARx",
"7OgFzljOA6t",
"bqRBhmVCBqq",
"nips_2022_edkno3SvKo",
"sdyYHxTbySX",
"MXPo747XfFO",
"JoysD2aCTJC",
"... |
nips_2022_D21DRzkZbSB | Learning Neural Acoustic Fields | Our environment is filled with rich and dynamic acoustic information. When we walk into a cathedral, the reverberations as much as appearance inform us of the sanctuary's wide open space. Similarly, as an object moves around us, we expect the sound emitted to also exhibit this movement. While recent advances in learned implicit functions have led to increasingly higher quality representations of the visual world, there have not been commensurate advances in learning spatial auditory representations. To address this gap, we introduce Neural Acoustic Fields (NAFs), an implicit representation that captures how sounds propagate in a physical scene. By modeling acoustic propagation in a scene as a linear time-invariant system, NAFs learn to continuously map all emitter and listener location pairs to a neural impulse response function that can then be applied to arbitrary sounds. We demonstrate NAFs on both synthetic and real data, and show that the continuous nature of NAFs enables us to render spatial acoustics for a listener at arbitrary locations. We further show that the representation learned by NAFs can help improve visual learning with sparse views. Finally we show that a representation informative of scene structure emerges during the learning of NAFs. | Accept | The authors present Neural Acoustic Fields (NAF), which render sounds for arbitrary emitter and listener positions in a scene. Overall, the reviewers are very positive (8-8-8-5). The authors addressed many of the reviewers' questions about previous related work and rendering spatial binaural audio. | train | [
"o9dEaB-rAdI",
"azt6ELtXHzq",
"ER3M6NHqBz7",
"lrn4Bmi3PE",
"M1u42E7jvGDV",
"uwzBcURSD5s",
"Y9cLqkzjNoh",
"2caX5lu3Rl1",
"jKB894sxuFt",
"14nR50Cfzwt",
"ZSKBtvC1af",
"8H_T55iVf8",
"eAfSj92IbMc",
"bI9r6wVyHA",
"n6GqNiQkgOO"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We genuinely thank all reviewers for their constructive comments which have contributed to the improvements in our paper. We sincerely appreciate the positive 8-8-8-5 evaluation from reviewers u3Tz, e4mi, ossn, and jrJA. \n\nHere is a summary of our response.\n### Contributions\nWe would like to first emphasize t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"nips_2022_D21DRzkZbSB",
"Y9cLqkzjNoh",
"2caX5lu3Rl1",
"M1u42E7jvGDV",
"nips_2022_D21DRzkZbSB",
"n6GqNiQkgOO",
"bI9r6wVyHA",
"jKB894sxuFt",
"eAfSj92IbMc",
"ZSKBtvC1af",
"8H_T55iVf8",
"nips_2022_D21DRzkZbSB",
"nips_2022_D21DRzkZbSB",
"nips_2022_D21DRzkZbSB",
"nips_2022_D21DRzkZbSB"
] |
nips_2022_8cUGfg-zUnh | Partial Identification of Treatment Effects with Implicit Generative Models | We consider the problem of partial identification, the estimation of bounds on the treatment effects from observational data. Although studied using discrete treatment variables or in specific causal graphs (e.g., instrumental variables), partial identification has been recently explored using tools from deep generative modeling. We propose a new method for partial identification of average treatment effects (ATEs) in general causal graphs using implicit generative models comprising continuous and discrete random variables. Since ATE with continuous treatment is generally non-regular, we leverage the partial derivatives of response functions to define a regular approximation of ATE, a quantity we call uniform average treatment derivative (UATD). We prove that our algorithm converges to tight bounds on ATE in linear structural causal models (SCMs). For nonlinear SCMs, we empirically show that using UATD leads to tighter and more stable bounds than methods that directly optimize the ATE. | Accept | All reviewers agreed that this paper should be accepted because of the strong author response during the rebuttal phase. Specifically the reviewers appreciated the presentation of the paper, the inventive use of existing work, the simplicity and soundness of the method, and the strong theoretical guarantees and empirical results. Authors: please carefully revise the manuscript based on the suggestions by the reviewers: they made many careful suggestions to improve the work and stressed that the paper should only be accepted once these changes are implemented. Once these are done the paper will be a nice addition to the conference! | train | [
"mBP30G02liu",
"0qH9KKw_zf",
"58JITw--9zS",
"Pcoucm5br4y",
"-oGhAWtTAY",
"BpQTpRkEQhv",
"eoBVwrHHSI5",
"sBZ9_zAbFR5",
"UnkoC3kR_vA",
"Rqhx9p02L2r",
"yF8cer7PK_J",
"pLInDDhqiEY",
"pW9CfPNUcLE",
"LEURQjW4puO",
"j-T0HTFBfY9",
"9BFHG4HG3O",
"1XkH1wxd4dV",
"rUFsQcExnUp",
"yHR79TjZioI"... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"... | [
" Thanks. The evaluation has changed in my initial review.",
" Thanks for the feedback and the engagement! The crux of this question is whether one needs to know the fully unrolled causal graph. \n\nWe emphasize that our definition of the \"optimal\" bound depends on the \"given\" causal graph $\\mathcal{G}$ (whi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
4,
4
] | [
"Pcoucm5br4y",
"58JITw--9zS",
"-oGhAWtTAY",
"UnkoC3kR_vA",
"BpQTpRkEQhv",
"LEURQjW4puO",
"yF8cer7PK_J",
"nips_2022_8cUGfg-zUnh",
"pW9CfPNUcLE",
"pLInDDhqiEY",
"cXckxguYwF",
"yHR79TjZioI",
"rUFsQcExnUp",
"1XkH1wxd4dV",
"9BFHG4HG3O",
"nips_2022_8cUGfg-zUnh",
"nips_2022_8cUGfg-zUnh",
... |
nips_2022_JokpPqA294 | ESCADA: Efficient Safety and Context Aware Dose Allocation for Precision Medicine | Finding an optimal individualized treatment regimen is considered one of the most challenging precision medicine problems. Various patient characteristics influence the response to the treatment, and hence, there is no one-size-fits-all regimen. Moreover, the administration of an unsafe dose during the treatment can have adverse effects on health. Therefore, a treatment model must ensure patient \emph{safety} while \emph{efficiently} optimizing the course of therapy. We study a prevalent medical problem where the treatment aims to keep a physiological variable in a safe range and preferably close to a target level, which we refer to as \emph{leveling}. Such a task may be relevant in numerous other domains as well. We propose ESCADA, a novel and generic multi-armed bandit (MAB) algorithm tailored for the leveling task, to make safe, personalized, and context-aware dose recommendations. We derive high probability upper bounds on its cumulative regret and safety guarantees. Following ESCADA's design, we also describe its Thompson sampling-based counterpart. We discuss why the straightforward adaptations of the classical MAB algorithms such as GP-UCB may not be a good fit for the leveling task. Finally, we make \emph{in silico} experiments on the bolus-insulin dose allocation problem in type-1 diabetes mellitus disease and compare our algorithms against the famous GP-UCB algorithm, the rule-based dose calculators, and a clinician. | Accept | I have read all comments and responses carefully.
The reviewers recognized that the problem was a challenging one, and that the paper provides both practical and novel tools and theoretical analysis. However, the reviewers pointed to the lack of numerical studies in the paper (for example, more details about the human clinicians and the patients). That being said, the authors have addressed most of the constructive comments given by the reviewers.
Overall, reviewers agree that this is an important and yet underexplored problem and the authors have provided useful contributions. I, therefore, have decided to recommend the acceptance of the paper. | train | [
"epvAZ8Ycog2",
"nOMkKaemqO",
"-3JO-fFbsok",
"y110J2-J71W",
"oX0KsSuE5A8",
"7NkrdaJXZAg",
"qhOVJFeHdH5",
"26D_it3MsTM",
"-5pLX2bAGW",
"GznNczXBguo",
"_F2vkP79m2",
"5FIoSv3S0Ej",
"2_r-STa0Jzi",
"4FLqRfTD6j5",
"6Fuj4-GhfXj",
"AbH52qQHAbB",
"cbeZZFA6ao0",
"Z3tUlohG4K_"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are delighted to hear that you were satisfied with our response! We are also grateful that you took your valuable time to read the other reviews & responses and let us know about your post rebuttal thoughts.\n\nBest regards,\nAuthors",
" We are delighted to hear that you were satisfied with our response! We ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"y110J2-J71W",
"-3JO-fFbsok",
"26D_it3MsTM",
"_F2vkP79m2",
"Z3tUlohG4K_",
"cbeZZFA6ao0",
"AbH52qQHAbB",
"6Fuj4-GhfXj",
"nips_2022_JokpPqA294",
"nips_2022_JokpPqA294",
"Z3tUlohG4K_",
"cbeZZFA6ao0",
"AbH52qQHAbB",
"6Fuj4-GhfXj",
"nips_2022_JokpPqA294",
"nips_2022_JokpPqA294",
"nips_202... |
nips_2022_ekQ_xrVWwQp | A Data-Augmentation Is Worth A Thousand Samples: Analytical Moments And Sampling-Free Training | Data-Augmentation (DA) is known to improve performance across tasks and datasets. We propose a method to theoretically analyze the effect of DA and study questions such as: how many augmented samples are needed to correctly estimate the information encoded by that DA? How does the augmentation policy impact the final parameters of a model? We derive several quantities in close-form, such as the expectation and variance of an image, loss, and model's output under a given DA distribution. Up to our knowledge, we obtain the first explicit regularizer that corresponds to using DA during training for non-trivial transformations such as affine transformations, color jittering, or Gaussian blur. Those derivations open new avenues to quantify the benefits and limitations of DA. For example, given a loss at hand, we find that common DAs require tens of thousands of samples for the loss to be correctly estimated and for the model training to converge. We then show that for a training loss to have reduced variance under DA sampling, the model's saliency map (gradient of the loss with respect to the model's input) must align with the smallest eigenvector of the sample's covariance matrix under the considered DA augmentation; this is exactly the quantity estimated and regularized by TangentProp. Those findings also hint at a possible explanation on why models tend to shift their focus from edges to textures when specific DAs are employed. | Accept | This paper studies data augmentation through the lens of an explicit regularizer, deriving a closed form regularization term corresponding to the effect of data augmentation in the case of linear models and MSE, and analyzes its properties with respect to convergence, sample efficiency and stability. Data augmentation is a core technique in deep learning that is poorly understood and has not, to my knowledge, been the subject of much rigorous analysis, so this work has the potential to be quite influential in our understanding of a fundamental deep learning practice.
Reviewers were generally positive about the approach and its theoretical underpinnings. Several were concerned about generalization of the approach to nonlinear models, which the authors explain proceeds by means of a Taylor expansion. The main shortcoming, according to reviewers, was that the experiments were limited to MNIST, which the authors remedied by extending their experiments to additional datasets (EMNIST and Fashion MNIST, to be precise).
With scores uniformly deep within acceptance territory, with the exception of yaYp, who promises to "accept this paper with flying colours" if the new results are incorporated into the main paper, this seems like an obvious candidate for acceptance.
P.S. The technique of deriving a closed-form penalty is reminiscent of marginalized dropout (Wager et al., 2013), a work I would suggest deserves citation in the camera-ready version.
"T6tp_rR5mYx",
"Drley5eUUPw",
"O3KxuLy5YiG",
"oEIuV0sYbD",
"vYT8b0XBJ7",
"Wd8M6Pihtrj",
"CFgOSFyqAH",
"qdlKeo3OA18",
"ZQy6Y7GiUjx",
"2z5EwPe-38s",
"QOz0FjJgpQb",
"QUHGDGecjUC",
"56lJScSjIrB",
"fxqdQm9rZ5f",
"kgKWZ6SzarF",
"o6dWtlpoji",
"WGzFbhLq84",
"imIGdvu7j65",
"3idf9RJ68vp",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" We are thankful to the reviewer for their answer, and for their appreciation of our submission.\nWe also entirely agree with the reviewer on the value added to our submission by running those experiments, and we are glad that those updates and our comment have addressed the reviewer's original concerns.",
" Tha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"Drley5eUUPw",
"56lJScSjIrB",
"oEIuV0sYbD",
"CFgOSFyqAH",
"imIGdvu7j65",
"ZQy6Y7GiUjx",
"2z5EwPe-38s",
"nips_2022_ekQ_xrVWwQp",
"QUHGDGecjUC",
"kgKWZ6SzarF",
"nips_2022_ekQ_xrVWwQp",
"3idf9RJ68vp",
"s0nNEaLn8ii",
"imIGdvu7j65",
"WGzFbhLq84",
"nips_2022_ekQ_xrVWwQp",
"nips_2022_ekQ_xr... |
nips_2022_VOyYhoN_yg | Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers | A common lens to theoretically study neural net architectures is to analyze the functions they can approximate. However, the constructions from approximation theory often have unrealistic aspects, for example, reliance on infinite precision to memorize target function values. To address this issue, we propose a formal definition of statistically meaningful approximation which requires the approximating network to exhibit good statistical learnability. We present case studies on statistically meaningful approximation for two classes of functions: boolean circuits and Turing machines. We show that overparameterized feedforward neural nets can statistically meaningfully approximate boolean circuits with sample complexity depending only polynomially on the circuit size, not the size of the approximating network. In addition, we show that transformers can statistically meaningfully approximate Turing machines with computation time bounded by T, requiring sample complexity polynomial in the alphabet size, state space size, and log(T). Our analysis introduces new tools for generalization bounds that provide much tighter sample complexity guarantees than the typical VC-dimension or norm-based bounds, which may be of independent interest. | Accept | Reviewers agree on the merits of sharing the paper with the community.The authors are highly encouraged to incorporate the many constructive suggestions offered. | train | [
"iLoKz7hv7kL",
"ysn9jhItzz",
"-ny2xwA08qy",
"rFPoao76wL5",
"7s4Yi_cF1P9",
"iA2s67WObnQ",
"RE54TJvqPA",
"4j82CjhKks",
"m5eLf0au2J4",
"Q1BEzBEofoQ",
"YzLvzsv5bao"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for clarifying my concerns! I have raised my score to reflect my positive opinion of the paper, given that the authors incorporate the above clarifications.",
" Thanks for the comment. For example, the streaming lower bound on approximating frequency moments in [1] could be directly plugged into our e... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
4
] | [
"7s4Yi_cF1P9",
"-ny2xwA08qy",
"RE54TJvqPA",
"YzLvzsv5bao",
"Q1BEzBEofoQ",
"m5eLf0au2J4",
"4j82CjhKks",
"nips_2022_VOyYhoN_yg",
"nips_2022_VOyYhoN_yg",
"nips_2022_VOyYhoN_yg",
"nips_2022_VOyYhoN_yg"
] |
nips_2022_yRhbHp_Vh8e | Grounded Video Situation Recognition | Dense video understanding requires answering several questions such as who is doing what to whom, with what, how, why, and where. Recently, Video Situation Recognition (VidSitu) is framed as a task for structured prediction of multiple events, their relationships, and actions and various verb-role pairs attached to descriptive entities. This task poses several challenges in identifying, disambiguating, and co-referencing entities across multiple verb-role pairs, but also faces some challenges of evaluation. In this work, we propose the addition of spatio-temporal grounding as an essential component of the structured prediction task in a weakly supervised setting, and present a novel three stage Transformer model, VideoWhisperer, that is empowered to make joint predictions. In stage one, we learn contextualised embeddings for video features in parallel with key objects that appear in the video clips to enable fine-grained spatio-temporal reasoning. The second stage sees verb-role queries attend and pool information from object embeddings, localising answers to questions posed about the action. The final stage generates these answers as captions to describe each verb-role pair present in the video. Our model operates on a group of events (clips) simultaneously and predicts verbs, verb-role pairs, their nouns, and their grounding on-the-fly. When evaluated on a grounding-augmented version of the VidSitu dataset, we observe a large improvement in entity captioning accuracy, as well as the ability to localize verb-roles without grounding annotations at training time. | Accept | All four reviewers agreed that this paper tackles an important problem, and the proposed approach is novel and well supported by sufficient empirical evidence. They also acknowledged the new data annotation contribution and that that the paper is generally written well. There were several nice suggestions to improve the paper. The authors are strongly encouraged to incorporate them in the final version. | train | [
"M-sOS_qd38",
"zALn3CR18E",
"NJSzuOq0gJc",
"6gOCxdLS8Ki",
"ISQSCW9dJHj",
"g4n8y3R8m9G",
"-an5rpvJvm3V",
"VXKEUJefC_6",
"hLr05IHMo9",
"dEwSZTAiE3H",
"zafWwKug73Q",
"n_9Fxu45Kd",
"CElP8wrhwCH"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their response and apologize for the misunderstanding. We are glad to see that the reviewer acknowledges the non-trivial nature of the problem definition. We agree with the reviewer that addressing the long-tail in a challenging setup such as GVSR is an important aspect of improving perf... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"NJSzuOq0gJc",
"6gOCxdLS8Ki",
"g4n8y3R8m9G",
"ISQSCW9dJHj",
"CElP8wrhwCH",
"n_9Fxu45Kd",
"zafWwKug73Q",
"dEwSZTAiE3H",
"nips_2022_yRhbHp_Vh8e",
"nips_2022_yRhbHp_Vh8e",
"nips_2022_yRhbHp_Vh8e",
"nips_2022_yRhbHp_Vh8e",
"nips_2022_yRhbHp_Vh8e"
] |
nips_2022_1tCuRbPts3J | Do Residual Neural Networks discretize Neural Ordinary Differential Equations? | Neural Ordinary Differential Equations (Neural ODEs) are the continuous analog of Residual Neural Networks (ResNets). We investigate whether the discrete dynamics defined by a ResNet are close to the continuous one of a Neural ODE. We first quantify the distance between the ResNet's hidden state trajectory and the solution of its corresponding Neural ODE. Our bound is tight and, on the negative side, does not go to $0$ with depth $N$ if the residual functions are not smooth with depth. On the positive side, we show that this smoothness is preserved by gradient descent for a ResNet with linear residual functions and small enough initial loss. It ensures an implicit regularization towards a limit Neural ODE at rate $\frac1N$, uniformly with depth and optimization time. As a byproduct of our analysis, we consider the use of a memory-free discrete adjoint method to train a ResNet by recovering the activations on the fly through a backward pass of the network, and show that this method theoretically succeeds at large depth if the residual functions are Lipschitz with the input. We then show that Heun's method, a second order ODE integration scheme, allows for better gradient estimation with the adjoint method when the residual functions are smooth with depth. We experimentally validate that our adjoint method succeeds at large depth, and that Heun’s method needs fewer layers to succeed. We finally use the adjoint method successfully for fine-tuning very deep ResNets without memory consumption in the residual layers. | Accept | The paper provided an error bound for the approximation between the hidden trajectory of ResNet and its Neural ODE. The trajectory value of ResNet is different for each layer, which is more close to the practice. Authors find a case (linear residual functions, small initial loss) where the error bound can converge to zero. By leveraging the connection in the special case, an adjoin method is designed to estimate the gradient with a memory-free manner. This work can push our understanding about the connection between deep NN models (i.e., ResNet) and Neural ODE. The designed algorithm also called for our attention that, the connection can facilitate "memory-free" gradient estimation. | train | [
"B252uI5I9Dg",
"ThNm1mul3i4",
"DnrEnRvuiTw",
"R2ngrl4kxyT",
"vuS2ceisGh7",
"FpsN9TWZus",
"FznJNvQvNrB",
"IgNvuSL97Vn",
"rEOEr6-Yw-Y",
"B7s9hTF8h_S",
"B0OShvy8jDN"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing these last comments as well!\n\nI also want to mention that I have read Reviewer's rbYF concerns regarding the fact that the results are \"far from the practice\". I agree this is true to a certain extent, but I believe this should not stop us from studying these models in simplified sett... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"ThNm1mul3i4",
"DnrEnRvuiTw",
"R2ngrl4kxyT",
"B0OShvy8jDN",
"B7s9hTF8h_S",
"FznJNvQvNrB",
"rEOEr6-Yw-Y",
"nips_2022_1tCuRbPts3J",
"nips_2022_1tCuRbPts3J",
"nips_2022_1tCuRbPts3J",
"nips_2022_1tCuRbPts3J"
] |
nips_2022_qk1qpCN-k6 | Information bottleneck theory of high-dimensional regression: relevancy, efficiency and optimality | Avoiding overfitting is a central challenge in machine learning, yet many large neural networks readily achieve zero training loss. This puzzling contradiction necessitates new approaches to the study of overfitting. Here we quantify overfitting via residual information, defined as the bits in fitted models that encode noise in training data. Information efficient learning algorithms minimize residual information while maximizing the relevant bits, which are predictive of the unknown generative models. We solve this optimization to obtain the information content of optimal algorithms for a linear regression problem and compare it to that of randomized ridge regression. Our results demonstrate the fundamental trade-off between residual and relevant information and characterize the relative information efficiency of randomized regression with respect to optimal algorithms. Finally, using results from random matrix theory, we reveal the information complexity of learning a linear map in high dimensions and unveil information-theoretic analogs of double and multiple descent phenomena. | Accept | The technical concerns of the reviewers were cleared in the discussion and based on the final reviews and my own reading the paper seems technically sound. While the considered model is very simple and hence relevance to practice is hard to foresee, the contribution of analyzing the IB method in a simple setting is considered valuable and of interest to the community. We hence recommend acceptance of the paper. The reviews and the subsequent discussion provide many suggestions that should help the authors to improve the presentation of their results. In particular, adding the motivation for the information efficiency and plots suggested by reviewer jTSy seem of interest. | train | [
"OWC66JEQHzv",
"HT7ojkXDH4a",
"9jc8Rub-ImR",
"uPhv099xWj7",
"OHbabZUUr80",
"-H_ioHvkzB8",
"aYgEur0dXo",
"pjdOHetLat",
"TyFhPKb11oD",
"FjdFoyTYXsy",
"cmvyK-U4_V",
"UjQxz2xVahK",
"fWvOqFx7MUN"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the clarification. We appreciate your time and effort in helping us improve our work. We have carefully considered and addressed all of your comments below. We hope our response helps clear up any possible misunderstanding. Please, let us know if we misunderstood any of your comments. (The following... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"HT7ojkXDH4a",
"9jc8Rub-ImR",
"uPhv099xWj7",
"pjdOHetLat",
"fWvOqFx7MUN",
"fWvOqFx7MUN",
"UjQxz2xVahK",
"UjQxz2xVahK",
"cmvyK-U4_V",
"cmvyK-U4_V",
"nips_2022_qk1qpCN-k6",
"nips_2022_qk1qpCN-k6",
"nips_2022_qk1qpCN-k6"
] |
nips_2022_mWaYC6CZf5 | On the Representation Collapse of Sparse Mixture of Experts | Sparse mixture of experts provides larger model capacity while requiring a constant computational overhead. It employs the routing mechanism to distribute input tokens to the best-matched experts according to their hidden representations. However, learning such a routing mechanism encourages token clustering around expert centroids, implying a trend toward representation collapse. In this work, we propose to estimate the routing scores between tokens and experts on a low-dimensional hypersphere. We conduct extensive experiments on cross-lingual language model pre-training and fine-tuning on downstream tasks. Experimental results across seven multilingual benchmarks show that our method achieves consistent gains. We also present a comprehensive analysis on the representation and routing behaviors of our models. Our method alleviates the representation collapse issue and achieves more consistent routing than the baseline mixture-of-experts methods. | Accept | This paper highlights that contemporary sparse Mixture-of-Experts (sMoE) models suffer from representation collapse in the gating mechanism. The paper then proposes a simple fix reducing the dimensionality of the gating representations and using cosine similarity. The paper shows qualitatively that the new representation suffers less from collapse and that model trained with the new algorithm exhibit small but consistent improvements when evaluated on cross-lingual language understanding and machine translation.
All of the reviewers had a concern that the method is tested only in the top-1 setting vs. the most common practice of using top-2 or more experts. The paper added experiments on top of GShard that alleviate this concern.
Another concern was the limited scope of evaluation. The authors added MT results as well as in-language results, which showed the same trend of small but consistent improvements.
As a result of the discussion, two of the three reviewers were happy with how their concerns were addressed and increased their scores to recommend that the paper be accepted. The third reviewer was not active in the discussion.
"24s4dK9cxXt",
"sjcwaMaxbHh",
"SiYMKlFp33Q",
"8xrtrhVEqK",
"W8kk6AmC55z",
"2uB4xO84VIm",
"WHlrJhG5MGo",
"WtNw70qXqm",
"ZqjRHUk0a3p",
"ZmzqdC92n1s",
"Lj9CpQHM-5G",
"c8LESAhiy8O",
"6fvwyEPtAu-",
"wjtdW0Gsv1q",
"HH2WkBS2QuG"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for promptly adding in these experiments! This clarifies many points for me, and I also think the \"spread\" experiment should be moved into the main section. ",
" Dear Reviewer JmFj,\n\nWe appreciate your constructive comments for helping us improve our paper in many aspects. We have provided detaile... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"W8kk6AmC55z",
"6fvwyEPtAu-",
"2uB4xO84VIm",
"6fvwyEPtAu-",
"HH2WkBS2QuG",
"WHlrJhG5MGo",
"WtNw70qXqm",
"ZqjRHUk0a3p",
"wjtdW0Gsv1q",
"Lj9CpQHM-5G",
"c8LESAhiy8O",
"6fvwyEPtAu-",
"nips_2022_mWaYC6CZf5",
"nips_2022_mWaYC6CZf5",
"nips_2022_mWaYC6CZf5"
] |
nips_2022_qcRgqCXv1o2 | Certifying Robust Graph Classification under Orthogonal Gromov-Wasserstein Threats | Graph classifiers are vulnerable to topological attacks. Although certificates of robustness have been recently developed, their threat model only counts local and global edge perturbations, which effectively ignores important graph structures such as isomorphism. To address this issue, we propose measuring the perturbation with the orthogonal Gromov-Wasserstein discrepancy, and building its Fenchel biconjugate to facilitate convex optimization. Our key insight is drawn from the matching loss whose root connects two variables via a monotone operator, and it yields a tight outer convex approximation for resistance distance on graph nodes. When applied to graph classification by graph convolutional networks, both our certificate and attack algorithm are demonstrated effective. | Accept | This paper proposes a robustness certificate for graph classifiers under orthogonal Gromov-Wasserstein (OGW) threat models. OGW considers symmetries/isometries and does not rely on a fixed node ordering. The computation of the certificate is based on convex relations. The certificate is demonstrated on a single-layer GCN graph classifier. The paper addresses the interesting and important problem of certifying a graph classifier. The reviewers found the paper original and of high quality. During the rebuttal period, the authors provided an insightful discussion on their work and addressed most of the questions and concerns raised by the reviewers. | train | [
"AZF-RS78APc",
"zBu92BDXoP8",
"YMBE4wtXW0H",
"CrKgNs6b_yM",
"8F29Xb9mtHk",
"nw3bAjOvQNA",
"MPal7edw17c",
"fHV8gU7q7q8",
"gLox7fF1eHu",
"-H9ZFfVih7Q",
"XmSLhU_y8g9",
"qh7mAUch5TA",
"yZQNgODC9Rp",
"GpKhxUKBbL7",
"sw5zDWZ7D3",
"FOTQ5pYZ-PV",
"nnpOKy2muQx"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for including the additional baseline. Given how close the perfectly-robust MLP is to the (new) Robust-GCN in terms of accuracy, the Robust-GCN might not be the most useful model in practice since the accuracy vs. robustness tradeoff might not be most favorable.\n\nNonetheless, I think your work and the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
5
] | [
"zBu92BDXoP8",
"-H9ZFfVih7Q",
"8F29Xb9mtHk",
"nw3bAjOvQNA",
"fHV8gU7q7q8",
"MPal7edw17c",
"gLox7fF1eHu",
"yZQNgODC9Rp",
"XmSLhU_y8g9",
"qh7mAUch5TA",
"GpKhxUKBbL7",
"nnpOKy2muQx",
"FOTQ5pYZ-PV",
"sw5zDWZ7D3",
"nips_2022_qcRgqCXv1o2",
"nips_2022_qcRgqCXv1o2",
"nips_2022_qcRgqCXv1o2"
] |
nips_2022_UHoGOaGjEq | Decentralized Training of Foundation Models in Heterogeneous Environments | Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained in specialized clusters featuring fast, homogeneous interconnects and using carefully designed software systems that support both data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much greater amount of decentralized, heterogeneous, and lower-bandwidth interconnected compute? Previous works examining the heterogeneous, decentralized setting focus on relatively small models that can be trained in a purely data parallel manner. State-of-the-art schemes for model parallel foundation model training, such as Megatron and Deepspeed, only consider the homogeneous data center setting. In this paper, we present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network. Our key technical contribution is a scheduling algorithm that allocates different computational “tasklets” in the training of foundation models to a group of decentralized GPU devices connected by a slow heterogeneous network. We provide a formal cost model and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct extensive experiments that represent different scenarios for learning over geo-distributed devices simulated using real-world network measurements. In the most extreme case, across 8 different cities spanning 3 continents, our approach is 4.8× faster than prior state-of-the-art training systems. | Accept | All of the reviewers felt that this is a strong submission. The paper gives a new novel approach for scheduling decentralized training tasks. This will be of general interest to the community. | train | [
"BTPuzrbpwZJ",
"y6ToE5Y-8rb",
"QyOOiVvev--",
"A5hnlCmkYmf",
"_0ACVMcibBn",
"1PLGsRTKrDM",
"RzJUnj6DEeq",
"RXMWmioDUSa",
"6GO9ZYbezh",
"r_Ru4M0ZV1i"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response and additional experiments. I will update the score. ",
" Thank you! The responses here and in your other comment go above and beyond to address my concerns. I'm bumping my rating up one slot.",
" We thank all the reviewers for their insightful comments and suggestions, which will... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"A5hnlCmkYmf",
"_0ACVMcibBn",
"nips_2022_UHoGOaGjEq",
"r_Ru4M0ZV1i",
"RXMWmioDUSa",
"6GO9ZYbezh",
"RXMWmioDUSa",
"nips_2022_UHoGOaGjEq",
"nips_2022_UHoGOaGjEq",
"nips_2022_UHoGOaGjEq"
] |
nips_2022_MeYI0QcOIRg | Explicit Tradeoffs between Adversarial and Natural Distributional Robustness | Several existing works study either adversarial or natural distributional robustness of deep neural networks separately. In practice, however, models need to enjoy both types of robustness to ensure reliability. In this work, we bridge this gap and show that in fact, {\it explicit tradeoffs} exist between adversarial and natural distributional robustness. We first consider a simple linear regression setting on Gaussian data with disjoint sets of \emph{core} and \emph{spurious} features. In this setting, through theoretical and empirical analysis, we show that (i) adversarial training with $\ell_1$ and $\ell_2$ norms increases the model reliance on spurious features; (ii) For $\ell_\infty$ adversarial training, spurious reliance only occurs when the scale of the spurious features is larger than that of the core features; (iii)
adversarial training can have {\it an unintended consequence} in reducing distributional robustness, specifically when spurious correlations are changed in the new test domain. Next, we present extensive empirical evidence, using a test suite of twenty adversarially trained models evaluated on five benchmark datasets (ObjectNet, RIVAL10, Salient ImageNet-1M, ImageNet-9, Waterbirds), that adversarially trained classifiers rely on backgrounds more than their standardly trained counterparts, validating our theoretical results. We also show that spurious correlations in training data (when preserved in the test domain) can {\it improve} adversarial robustness, revealing that previous claims that adversarial vulnerability is rooted in spurious correlations are incomplete. | Accept | All reviewers recommend accepting the paper, so I will follow their suggestion - congratulations!
However, I should add that I do not find the arguments in the paper fully convincing. Earlier work has shown that adversarially trained models behave like "standard" ImageNet models (i.e., models trained without a robustness-enhancing method) on ObjectNet, which is one of the datasets in this paper - see https://arxiv.org/abs/2007.00644 . In these results, there is no trade-off between adversarial robustness and robustness on ObjectNet (on the other hand, adversarial robustness also doesn't help on ObjectNet). I encourage the authors to engage with the prior work on out-of-distribution generalization more deeply because this will help the reader understand how the new results here relate to other findings in out-of-distribution generalization. | train | [
"xzjg22l78qF",
"vE72DgCpTU",
"yX3XgW8pbeQ",
"V36SSYojU3",
"xyLOjIlKUKn",
"zB9t-Un8sPv",
"S2ZPelvX6Ur",
"N9bV8JO74u",
"joUD14NZ1a",
"YrP5EWOwWd",
"6dSQHyZ_Mx4",
"piwVwJHVZXV",
"fIRcIJKNOD",
"ITi4OnlAQDX",
"4Hxn181EOL6"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' response. Though this work does not characterize the tradeoffs with precise theorems, it indeed shows explicit tradeoffs via thorough discussions. Moreover, this highlights the importance of the tradeoffs, which would be of interest to the community. Thus, I would like to retain my score... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"joUD14NZ1a",
"N9bV8JO74u",
"xyLOjIlKUKn",
"zB9t-Un8sPv",
"YrP5EWOwWd",
"S2ZPelvX6Ur",
"4Hxn181EOL6",
"ITi4OnlAQDX",
"fIRcIJKNOD",
"6dSQHyZ_Mx4",
"piwVwJHVZXV",
"nips_2022_MeYI0QcOIRg",
"nips_2022_MeYI0QcOIRg",
"nips_2022_MeYI0QcOIRg",
"nips_2022_MeYI0QcOIRg"
] |
nips_2022_9DYKrsFSU2 | Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data | Real-world datasets exhibit imbalances of varying types and degrees. Several techniques based on re-weighting and margin adjustment of loss are often used to enhance the performance of neural networks, particularly on minority classes. In this work, we analyze the class-imbalanced learning problem by examining the loss landscape of neural networks trained with re-weighting and margin based techniques. Specifically, we examine the spectral density of Hessian of class-wise loss, through which we observe that the network weights converges to a saddle point in the loss landscapes of minority classes. Following this observation, we also find that optimization methods designed to escape from saddle points can be effectively used to improve generalization on minority classes. We further theoretically and empirically demonstrate that Sharpness-Aware Minimization (SAM), a recent technique that encourages convergence to a flat minima, can be effectively used to escape saddle points for minority classes. Using SAM results in a 6.2\% increase in accuracy on the minority classes over the state-of-the-art Vector Scaling Loss, leading to an overall average increase of 4\% across imbalanced datasets. The code is available at https://github.com/val-iisc/Saddle-LongTail. | Accept | The paper studies the problem of saddle point escape for class imbalanced datasets and mostly makes two contributions from my perspective:
1) Analysis of the spectral density of the Hessian for class-imbalanced datasets. This observation is novel as far as I know.
2) A short analysis of SAM demonstrating it escapes saddle points
While the first point seems novel and of interest, I have some limited reservations regarding the second contribution. Theoretically, the authors provide a theorem demonstrating that the CNC condition derived in Daneshmand et al. holds with a larger constant. It is however unclear whether this is the reason for the superior performance of SAM in unbalanced datasets. Saddle points are often not prevalent in the loss landscapes of modern neural networks. The paper does not directly show that better performance is linked to saddles. I would like to encourage the authors to more directly highlight the importance of the CNC condition used in the analysis.
Overall, the reviewers are still rather positive about the paper and despite its shortcomings, it has the potential to encourage more research in this field. I, therefore, recommend acceptance and invite the authors to add a discussion of the shortcomings that should be addressed in future work.
Finally, I note that there is some recent work analyzing the dynamics of gradient descent under class imbalance:
Characterizing the Effect of Class Imbalance on the Learning Dynamics, Francazi et al.
The findings do not seem to be directly related but it's probably worth checking.
| train | [
"V8CdYchkeJE",
"2dcx-N4wGR",
"16efSZnQOlb",
"16MeGNkoEnm",
"BzWH9uk55HZ",
"KMgS8hT2bn-",
"wb9752KAVC5",
"SXfgVZRQPR8",
"cv9z1o3yI6y",
"BK3RpcnzWvA",
"09aP3I7VEz",
"GRuOg8K1geS",
"mTY16gHSkF",
"LNoELmrLw1n",
"NsDNT-a5hQw",
"QOQiZ85hQFR"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate your response, where most of my concerns are resolved.",
" Thank you for the response.\nI hope the authors to add the comparison results and other reviewers' responses in the final version.",
" Thank you, authors, for your elaborate rebuttal. I have read the other reviewer's comments and the auth... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
3
] | [
"BK3RpcnzWvA",
"09aP3I7VEz",
"cv9z1o3yI6y",
"nips_2022_9DYKrsFSU2",
"mTY16gHSkF",
"wb9752KAVC5",
"QOQiZ85hQFR",
"cv9z1o3yI6y",
"mTY16gHSkF",
"NsDNT-a5hQw",
"LNoELmrLw1n",
"nips_2022_9DYKrsFSU2",
"nips_2022_9DYKrsFSU2",
"nips_2022_9DYKrsFSU2",
"nips_2022_9DYKrsFSU2",
"nips_2022_9DYKrsFS... |
nips_2022_BCnZSP-Ryyp | Randomized Sketches for Clustering: Fast and Optimal Kernel $k$-Means | Kernel $k$-means is arguably one of the most common approaches to clustering. In this paper, we investigate the efficiency of kernel $k$-means combined with randomized sketches in terms of both statistical analysis and computational requirements. More precisely, we propose a unified randomized sketching framework for kernel $k$-means and investigate its excess risk bounds, obtaining the state-of-the-art risk bound with only a fraction of the computations. Indeed, we prove that it suffices to choose the sketch dimension $\Omega(\sqrt{n})$ to obtain the same accuracy as exact kernel $k$-means while greatly reducing the computational costs, for sub-Gaussian sketches, the randomized orthogonal system (ROS) sketches, and Nystr\"{o}m kernel $k$-means, where $n$ is the number of samples. To the best of our knowledge, this is the first result of this kind for unsupervised learning. Finally, the numerical experiments on simulated data and real-world datasets validate our theoretical analysis. | Accept | The paper analyses the performance of a class of sampling-based sketches for the kernel k-means clustering problem. The main contribution is a proof that the excess clustering risk of these sketches is optimal when a smaller sketch is used than in previous state-of-the-art approaches to sketching kernel k-means. Clear comparisons with the prior state-of-the-art results show that the method greatly reduces the computational complexity of sketched kernel k-means, and experimental validation shows that the method is competitive in terms of accuracy. | train | [
"Qq-9zFEgiS5",
"la8XryDXQrf",
"lyTC6fyq3Zj",
"paFJ0BSkSS8",
"tJKohmolQmp",
"Dpv3m0fdrzQ",
"j2KNAamuk_AY",
"bDYIREwZNC",
"D8FvDEHeXoyq",
"YnxouXKPJ8",
"VXVBkx63Sbe",
"rBP7mIQc1mJ",
"hpIr4wyY1LT",
"NyygCou_50g"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The author-response phase closes today. Please acknowledge the author rebuttal and state if your position has changed. Thanks!",
" Dear reviewer b1Eg, the discussion period is coming to an end. Do my answers address your concerns? If you don't think so, please point out and let me know. I'd be happy to answer a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"D8FvDEHeXoyq",
"D8FvDEHeXoyq",
"j2KNAamuk_AY",
"nips_2022_BCnZSP-Ryyp",
"VXVBkx63Sbe",
"hpIr4wyY1LT",
"hpIr4wyY1LT",
"rBP7mIQc1mJ",
"rBP7mIQc1mJ",
"nips_2022_BCnZSP-Ryyp",
"NyygCou_50g",
"nips_2022_BCnZSP-Ryyp",
"nips_2022_BCnZSP-Ryyp",
"nips_2022_BCnZSP-Ryyp"
] |
nips_2022_nOdfIbo3A-F | Learning Articulated Rigid Body Dynamics with Lagrangian Graph Neural Network | Lagrangian and Hamiltonian neural networks (LNNs and HNNs, respectively) encode strong inductive biases that allow them to outperform other models of physical systems significantly. However, these models have, thus far, mostly been limited to simple systems such as pendulums and springs or a single rigid body such as a gyroscope or a rigid rotor. Here, we present a Lagrangian graph neural network (LGNN) that can learn the dynamics of articulated rigid bodies by exploiting their topology. We demonstrate the performance of LGNN by learning the dynamics of ropes, chains, and trusses with the bars modeled as rigid bodies. LGNN also exhibits generalizability---an LGNN trained on chains with a few segments can simulate a chain with a large number of links and arbitrary link length. We also show that the LGNN can simulate unseen hybrid systems including bars and chains, on which it has not been trained. Specifically, we show that the LGNN can be used to model the dynamics of complex real-world structures such as the stability of tensegrity structures. Finally, we discuss the non-diagonal nature of the mass matrix and its ability to generalize in complex systems. | Accept | The authors propose the use of two separate networks to learn the kinetic and potential energy of objects made of chains of rigid bodies. Their neural net architecture uses knowledge of constraints in the system, and improves on previous GNS work.
Reviewers pointed out that experiments were too simple with a small number of DoFs where traditional methods already perform very well. The authors replied that their method can learn from trajectories only, and include complex settings, for example with dissipative drag forces. They also added a number of much more complex experiments in a revised version.
Overall, this is a borderline paper given the high standard required for NeurIPS publication. I lean towards recommending this paper for acceptance, with one condition: as pointed out by reviewers VUdP and EADF, the title and abstract should be changed to reflect the fact that a) this paper is not solving "rigid dynamics", but only works on "articulated rigid body"; b) the graph net doesn't actually perform message passing.
"yTAgG4L_aCb",
"4c74kKxHGKB",
"GRWAOtQXfRB",
"Mu9-u9_FW2W",
"irDKDGUX68-",
"6_EpS8HgjTV",
"f8rWus8nyI6S",
"JBFvqoT97uq",
"iB2KKJC8oxy",
"Awa6K2zVDf0z",
"91-rKr8guq4",
"3YfQTBUaWk2",
"myKJAQzMzuc",
"aoPXYxuphwN",
"2LRrkyyVsEJ",
"eWcs3J85Xz",
"buR7jc422RQ",
"KxyJS9CzguH",
"Iu1XG8k9... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" We thank you for raising the additional concern regarding more complex systems. We would like to highlight that the main contribution of the present work is to **learn the dynamics** of articulated rigid bodies **directly from their trajectory**. Traditional approaches such as the ones mentioned by the reviewer r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"6_EpS8HgjTV",
"irDKDGUX68-",
"Mu9-u9_FW2W",
"eWcs3J85Xz",
"aoPXYxuphwN",
"JBFvqoT97uq",
"iB2KKJC8oxy",
"Ro_Ji2s6AYn",
"Awa6K2zVDf0z",
"91-rKr8guq4",
"mOxoaEzc0Oa",
"myKJAQzMzuc",
"Ro_Ji2s6AYn",
"2LRrkyyVsEJ",
"Iu1XG8k9uX",
"KxyJS9CzguH",
"nips_2022_nOdfIbo3A-F",
"nips_2022_nOdfIbo... |
nips_2022_FjqBs4XKe87 | Prompt Injection: Parameterization of Fixed Inputs | Recent works have shown that attaching prompts to the input is effective at conditioning Language Models (LM) to perform specific tasks. However, prompts are always included in the input text during inference, thus incurring substantial computational and memory overhead. Also, there is currently no straightforward method of utilizing prompts that are longer than the maximum input length of the LMs without incurring additional costs during inference. We propose Prompt Injection (PI), a novel formulation of injecting the prompt into the parameters of an LM to be an efficient alternative to attaching fixed prompts to the input. We show that in scenarios with long fixed prompts, PI can be up to 280 times more efficient in terms of total FLOPs than previous approaches. We further explore methodologies for PI and show promising results in persona-dependent conversation, semantic parsing, and zero-shot learning with task instructions. Through these explorations, we show that PI can be a promising direction for conditioning language models, especially in scenarios with long and fixed prompts. | Reject | The paper proposes a method for distilling prompts into the parameters of a model. Reviewers liked that the method can improve the efficiency of inference by avoiding having to attend over prompts, and the evaluation on the PersonaChat dataset is a good use case for this approach. However, several important concerns were raised. As pointed out by reviewer y47p, similar ideas have been explored in previous work, and claims of novelty need to be toned down. As acknowledged in the author response, most of the experiments are on tasks with short inputs, so gain little benefit from the approach. The difficulty in finding suitable tasks where the approach has a clear benefit might suggest the method has limited applicability. The additional experiment on MSC is a nice addition in the author response, although results here are a bit underwhelming. Overall, this is borderline, leaning reject.
| val | [
"PzvBPoIU4lJ",
"-8B4ldNQy7S",
"hkQT1ewBvy",
"lIkYK6Rzw-3",
"TFuAiByZzFZ",
"YPRXiMOHaTc",
"bM0ObTlW3Q4",
"XUFO-VjWEKW",
"e7x1lJO16Ld",
"SbZG6-bGTzp",
"9DpOJxKvDuG",
"2bLCYKYSN5"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hello Reviewer y47p, \n\nThank you for your interest in our work.\n\nWe increased the temperature from 1 to 2 in preliminary experiments, and the performance decreased. We deemed that it was because the quality of generated inputs decreased due to the generation of sentences that don’t match the syntax and make n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"-8B4ldNQy7S",
"hkQT1ewBvy",
"TFuAiByZzFZ",
"XUFO-VjWEKW",
"YPRXiMOHaTc",
"2bLCYKYSN5",
"9DpOJxKvDuG",
"SbZG6-bGTzp",
"nips_2022_FjqBs4XKe87",
"nips_2022_FjqBs4XKe87",
"nips_2022_FjqBs4XKe87",
"nips_2022_FjqBs4XKe87"
] |
nips_2022_jvFTMD5QTq | A composable machine-learning approach for steady-state simulations on high-resolution grids | In this paper, we show that our Machine Learning (ML) approach, CoMLSim (Composable Machine Learning Simulator), can simulate PDEs on highly-resolved grids with higher accuracy and generalization to out-of-distribution source terms and geometries than traditional ML baselines. Our unique approach combines key principles of traditional PDE solvers with local-learning and low-dimensional manifold techniques to iteratively simulate PDEs on large computational domains. The proposed approach is validated on more than 5 steady-state PDEs across different PDE conditions on highly-resolved grids, and comparisons are made with the commercial solver Ansys Fluent as well as 4 other state-of-the-art ML methods. The numerical experiments show that our approach outperforms ML baselines in terms of 1) accuracy across quantitative metrics and 2) generalization to out-of-distribution conditions as well as domain sizes. Additionally, we provide results for a large number of ablation experiments conducted to highlight components of our approach that strongly influence the results. We conclude that our local-learning and iterative-inferencing approach reduces the challenge of generalization that most ML models face. | Accept | This paper proposes a new method to predict accurate and generalizable PDE solutions on high-resolution grids. All reviewers found the paper interesting and are positive about it. Please address the remaining concerns in the next version. | train | [
"UwhpUuPMrU7",
"fe3ZD4L8aMo",
"bE8lFHF-b9",
"caBd8Fr_zsy",
"2f8b9cD2LrK",
"bwFPVgBaFZR",
"iuEOG21I6uYq",
"e-zrUnNFQ2U",
"EdlV-QILZY_U",
"_9PMNi_-P2l",
"a9j7pHNMwaI",
"_iNwb94YUu7",
"lVeiV_hUK9S",
"njWSQRm88oI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nWe are entering the discussion phase, where the authors will be not involved in the discussion.\n\nI would like to request you to confirm that you have already read the rebuttal from the authors.\n\nBest\n\nAC\n",
" Thank you very much for the careful revision. Many of my questions have been ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_jvFTMD5QTq",
"bE8lFHF-b9",
"_9PMNi_-P2l",
"2f8b9cD2LrK",
"bwFPVgBaFZR",
"iuEOG21I6uYq",
"e-zrUnNFQ2U",
"njWSQRm88oI",
"lVeiV_hUK9S",
"a9j7pHNMwaI",
"_iNwb94YUu7",
"nips_2022_jvFTMD5QTq",
"nips_2022_jvFTMD5QTq",
"nips_2022_jvFTMD5QTq"
] |
nips_2022_d9usspxbWmk | Graph Learning Assisted Multi-Objective Integer Programming | Objective-space decomposition algorithms (ODAs) are widely studied for solving multi-objective integer programs. However, they often encounter difficulties in handling scalarized problems, which could cause infeasibility or repetitive nondominated points and thus induce redundant runtime. To mitigate the issue, we present a graph neural network (GNN) based method to learn the reduction rule in the ODA. We formulate the algorithmic procedure of generic ODAs as a Markov decision process, and parameterize the policy (reduction rule) with a novel two-stage GNN to fuse information from variables, constraints and especially objectives for better state representation. We train our model with imitation learning and deploy it on a state-of-the-art ODA. Results show that our method significantly improves the solving efficiency of the ODA. The learned policy generalizes fairly well to larger problems or more objectives, and the proposed GNN outperforms existing ones for integer programming in terms of test and generalization accuracy. | Accept |
In this paper, the authors exploit imitation learning with a two-stage GNN to learn the reduction rule for the ODA to accelerate the solver and reduce unnecessary computation. The authors evaluate the performance of the proposed method and demonstrate its advantages.
In sum, this paper considers an interesting application of machine learning for optimization and provides a promising solution. All reviewers provide relatively positive feedback on this submission.
Please consider the reviewers' suggestions to improve the submission:
- Justify the MDP modeling with a concrete definition of the state and action for the reduction rule in the ODA.
- Specify the dataset construction and justify the generalization ability of the imitation learning.
- Provide a comprehensive comparison, especially with PMOCO.
| val | [
"-zoLhEF9eaD",
"on4YRuuUzTX",
"RPdITWSEJc",
"Be39hpFnfBR",
"MlVnIZGrauL",
"fK-ct5uYxIb",
"_EaCSejnpeO",
"vQ8qnvI7wwW",
"kV9_NrFk27X",
"h_lrS3N5IlaP",
"matG9i9czhS",
"s4yrim-Zw9",
"cFDK9GgG3i",
"Rulmxj6Utqf",
"s5p19ImSVW7",
"mgX-kuOiRFn",
"7q6fqxXv7VI",
"MhMU0XoMNp",
"izOcDG-LNa5"... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" We greatly appreciate the reviewer for the acknowledgement of our idea and the experimental results. Here, we try to explain a bit more about the new comments.\n\n**Q1) Regarding the optimal policy in our paper.**\n\nWe would like to note that our MDP and the policy is defined on top of an ODA. Except the action ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"on4YRuuUzTX",
"Be39hpFnfBR",
"MlVnIZGrauL",
"MlVnIZGrauL",
"mgX-kuOiRFn",
"nips_2022_d9usspxbWmk",
"kV9_NrFk27X",
"h_lrS3N5IlaP",
"cFDK9GgG3i",
"7q6fqxXv7VI",
"bLgVJmOTTHz",
"bLgVJmOTTHz",
"bLgVJmOTTHz",
"Ky-BOkYVHgk",
"Ky-BOkYVHgk",
"Ky-BOkYVHgk",
"icDO_3Mo70G",
"icDO_3Mo70G",
... |
nips_2022_dbigt69sBqe | Disentangling the Predictive Variance of Deep Ensembles through the Neural Tangent Kernel | Identifying unfamiliar inputs, also known as out-of-distribution (OOD) detection, is a crucial property of any decision making process. A simple and empirically validated technique is based on deep ensembles where the variance of predictions over different neural networks acts as a substitute for input uncertainty. Nevertheless, a theoretical understanding of the inductive biases leading to the performance of deep ensemble's uncertainty estimation is missing. To improve our description of their behavior, we study deep ensembles with large layer widths operating in simplified linear training regimes, in which the functions trained with gradient descent can be described by the neural tangent kernel. We identify two sources of noise, each inducing a distinct inductive bias in the predictive variance at initialization. We further show theoretically and empirically that both noise sources affect the predictive variance of non-linear deep ensembles in toy models and realistic settings after training. Finally, we propose practical ways to eliminate part of these noise sources leading to significant changes and improved OOD detection in trained deep ensembles. | Accept | The paper proposes to decompose the predictive variance of deep ensembles, into different sources of noise, by studying linearly trained finite width neural networks.
The reviewers found the paper well-written, the theoretical analysis interesting, and the experimental evaluation adequate to justify the key claims.
I also appreciate the detailed author rebuttal to the reviewers’ questions. During the discussion, all the reviewers leaned towards acceptance. I encourage authors to address remaining comments in the final manuscript.
I recommend accept. Nice work! | train | [
"qLhqi-RJ87",
"Vm1WR--4Zb",
"PzEVjfCZSl6",
"G07CxNr7dp",
"rlTegUFqLL4",
"HDWPm9STe13",
"2gAJkjtvvbU",
"7o8WeVB_Li0",
"0OlRTKew58I",
"Q6-gSVhziru",
"2enD7VBKr2",
"KFldVxxlEL_",
"WIfHqjjwRGG",
"X2wenvV7bRq",
"2sjAzYRDtf",
"9K5uObrgbbC",
"8cRBO1puzT",
"Ke0Up1cWiR",
"6VZ5KtS8_fr",
... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",... | [
" I thank the authors for their in-depth reply to the questions I raised; their response has addressed the concerns I had and clarified any remaining misunderstandings. \n\nI will adjust my score accordingly; thanks again for the extensive discussion in a short amount of time.",
" Dear reviewer e4Rz, \nthank you ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
2,
3,
3
] | [
"9K5uObrgbbC",
"PzEVjfCZSl6",
"Q6-gSVhziru",
"kLLB4KTIfJN",
"7KcNJS6LTi1",
"X2wenvV7bRq",
"nips_2022_dbigt69sBqe",
"WIfHqjjwRGG",
"6zbkwgEg1uZ",
"2enD7VBKr2",
"KFldVxxlEL_",
"kLLB4KTIfJN",
"tF11q4vAphu",
"2sjAzYRDtf",
"aBIBHx46NQ",
"8cRBO1puzT",
"Ke0Up1cWiR",
"7KcNJS6LTi1",
"ub60... |
nips_2022_nLGRGuzjtoR | Probing Classifiers are Unreliable for Concept Removal and Detection | Neural network models trained on text data have been found to encode undesirable linguistic or sensitive concepts in their representation. Removing such concepts is non-trivial because of a complex relationship between the concept, text input, and the learnt representation. Recent work has proposed post-hoc and adversarial methods to remove such unwanted concepts from a model's representation. Through an extensive theoretical and empirical analysis, we show that these methods can be counter-productive: they are unable to remove the concepts entirely, and in the worst case may end up destroying all task-relevant features. The reason is the methods' reliance on a probing classifier as a proxy for the concept. Even under the most favorable conditions for learning a probing classifier, when a concept's relevant features in representation space alone can provide 100% accuracy, we prove that a probing classifier is likely to use non-concept features and thus post-hoc or adversarial methods will fail to remove the concept correctly. These theoretical implications are confirmed by experiments on models trained on synthetic, Multi-NLI, and Twitter datasets. For sensitive applications of concept removal such as fairness, we recommend caution against using these methods and propose a spuriousness metric to gauge the quality of the final classifier. | Accept | This paper analyzes failure modes of methods that aim to remove spurious features from the representation. The key finding is that since the spurious features are correlated with the core features, such methods will inevitably also remove core features during the process, thus hurting performance. Both the theoretical results and the empirical findings are important for understanding concept-removal methods, which are widely used in domain adaptation and robust learning. All reviewers agree that the contribution is significant. The authors may strengthen the paper by discussing ways forward. | train | [
"BQYAOahmgw1",
"JUE8CgJhuD",
"BVmV916KvR",
"DQvuybfWmOZ",
"tZw9u7aoq_9",
"oMPmEgLKpN4",
"u_56XpBIMFl",
"qdlAkq3hS80",
"O2bMYByoySH",
"Gc1VjnIMGB",
"PdwrCJpz3BA",
"82EpzgD_ZFw",
"-8osGNyNws",
"YDmV_pdCoy8",
"_fjh4h1NNb1"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all the reviewers for engaging with our rebuttal. We believe we have answered all the concerns. Please let us know if any comments or question remains. ",
" Thanks for the response and updates! I like the added discussions in Appendices I and H. I also like the clarification provided in **A1** which ma... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
4
] | [
"nips_2022_nLGRGuzjtoR",
"O2bMYByoySH",
"u_56XpBIMFl",
"Gc1VjnIMGB",
"_fjh4h1NNb1",
"_fjh4h1NNb1",
"YDmV_pdCoy8",
"-8osGNyNws",
"-8osGNyNws",
"82EpzgD_ZFw",
"nips_2022_nLGRGuzjtoR",
"nips_2022_nLGRGuzjtoR",
"nips_2022_nLGRGuzjtoR",
"nips_2022_nLGRGuzjtoR",
"nips_2022_nLGRGuzjtoR"
] |
nips_2022_V5rlSPsHpkf | Learning to Scaffold: Optimizing Model Explanations for Teaching | Modern machine learning models are opaque, and as a result there is a burgeoning academic subfield on methods that explain these models' behavior. However, what is the precise goal of providing such explanations, and how can we demonstrate that explanations achieve this goal? Some research argues that explanations should help teach a student (either human or machine) to simulate the model being explained, and that the quality of explanations can be measured by the simulation accuracy of students on unexplained examples. In this work, leveraging meta-learning techniques, we extend this idea to improve the quality of the explanations themselves, specifically by optimizing explanations such that student models more effectively learn to simulate the original model. We train models on three natural language processing and computer vision tasks, and find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods. Through human annotations and a user study, we further find that these learned explanations more closely align with how humans would explain the required decisions in these tasks. Our code is available at https://anonymous.4open.science/r/learning-scaffold-5BEB | Accept | This paper proposed a novel “Scaffold-Maximizing Training” framework to optimize model explanations, by leveraging a student model simulating the teacher model using the teacher’s explanations. The idea of constructing explanations such that student models can better simulate the teacher is interesting and appears to be an original contribution.
All reviewers agree that this paper made a solid contribution to the explainability of ML models, the experiments are comprehensive and thorough, and the paper is well written. Overall the paper is well-rounded with clear justifications of what each algorithmic component does, and convincing empirical support (the proposed algorithm has been demonstrated to be effective on a variety of tasks). After viewing the authors’ feedback, there is a broad consensus on accepting the paper.
| train | [
"fuhd0zPPlRR",
"Fyhbam1Z-LL",
"U_pdylbVz2u",
"nD37fBUZzD",
"HNdEVmrM_DZ",
"Rf1IjK8WHsK",
"SX0xZEibsqV",
"Z5RgdwGWkg3",
"ppcEd3F0ETd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my question. I look forward to your results in these further experiments.",
" Sorry for the delay in getting back to you! After your excellent rebuttal came out, I wanted to take a closer look at the prior work, and the assumptions made throughout those works (as well as revisiting this... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"U_pdylbVz2u",
"HNdEVmrM_DZ",
"ppcEd3F0ETd",
"Z5RgdwGWkg3",
"SX0xZEibsqV",
"nips_2022_V5rlSPsHpkf",
"nips_2022_V5rlSPsHpkf",
"nips_2022_V5rlSPsHpkf",
"nips_2022_V5rlSPsHpkf"
] |
nips_2022_xbhsFMxORxV | Public Wisdom Matters! Discourse-Aware Hyperbolic Fourier Co-Attention for Social Text Classification | Social media has become the fulcrum of all forms of communication. Classifying social texts such as fake news, rumour, sarcasm, etc. has gained significant attention. The surface-level signals expressed by a social-text itself may not be adequate for such tasks; therefore, recent methods attempted to incorporate other intrinsic signals such as user behavior and the underlying graph structure. Oftentimes, the "public wisdom" expressed through the comments/replies to a social-text acts as a surrogate of the crowd-sourced view and may provide us with complementary signals. State-of-the-art methods on social-text classification tend to ignore such a rich hierarchical signal. Here, we propose Hyphen, a discourse-aware hyperbolic spectral co-attention network. Hyphen is a fusion of hyperbolic graph representation learning with a novel Fourier co-attention mechanism in an attempt to generalise the social-text classification tasks by incorporating public discourse. We parse public discourse as an Abstract Meaning Representation (AMR) graph and use the powerful hyperbolic geometric representation to model graphs with hierarchical structure. Finally, we equip it with a novel Fourier co-attention mechanism to capture the correlation between the source post and public discourse. Extensive experiments on four different social-text classification tasks, namely detecting fake news, hate speech, rumour, and sarcasm, show that Hyphen generalises well, and achieves state-of-the-art results on ten benchmark datasets. We also employ a sentence-level fact-checked and annotated dataset to evaluate how Hyphen is capable of producing explanations as analogous evidence to the final prediction. | Accept | This paper proposed a discourse-aware hyperbolic spectral co-attention network for social text classification, using public discourse and its hierarchy. Reviewers all agreed that this work presents an extensive amount of experiments/evaluations, with impressive performance gains and interpretability. Thus, we recommend acceptance.
| train | [
"ycXzD3tUZ9",
"GXj4bmArDgO",
"8uFreiJMKp",
"zSeoQ3n9JUw",
"s7x0nsu-j9N",
"HzIS8Y03Sv",
"3jGxl4EB6x",
"R28tycFM21n",
"s2YrM7KBIQu",
"3xLsO8br2-f2",
"pwb3vyudxEL",
"ySc2mZNLmY",
"CqjtOKa9Dd",
"Cw2NNDnj2Yc",
"0LO8ygVIBnv",
"M_YtvqpxF9w",
"Hz7SiiYutll",
"t-MiqB7steh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot to the authors for their detailed statements! That confirms my scoring.",
" Thank you for your detailed response. My concerns are adequately addressed and I have updated my score to reflect that. I am especially impressed by the authors providing results for many PLMs on 10 datasets. You do have mi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"s2YrM7KBIQu",
"s7x0nsu-j9N",
"t-MiqB7steh",
"Hz7SiiYutll",
"0LO8ygVIBnv",
"pwb3vyudxEL",
"t-MiqB7steh",
"t-MiqB7steh",
"Hz7SiiYutll",
"M_YtvqpxF9w",
"M_YtvqpxF9w",
"0LO8ygVIBnv",
"0LO8ygVIBnv",
"0LO8ygVIBnv",
"nips_2022_xbhsFMxORxV",
"nips_2022_xbhsFMxORxV",
"nips_2022_xbhsFMxORxV",... |
nips_2022_B3hDVlw95r | Positive-Unlabeled Learning using Random Forests via Recursive Greedy Risk Minimization | The need to learn from positive and unlabeled data, or PU learning, arises in many applications and has attracted increasing interest. While random forests are known to perform well on many tasks with positive and negative data, recent PU algorithms are generally based on deep neural networks, and the potential of tree-based PU learning is under-explored. In this paper, we propose new random forest algorithms for PU learning. Key to our approach is a new interpretation of decision tree algorithms for positive and negative data as \emph{recursive greedy risk minimization algorithms}. We extend this perspective to the PU setting to develop new decision tree learning algorithms that directly minimize PU-data-based estimators for the expected risk. This allows us to develop an efficient PU random forest algorithm, PU extra trees. Our approach features three desirable properties: it is robust to the choice of the loss function in the sense that various loss functions lead to the same decision trees; it requires little hyperparameter tuning as compared to neural network based PU learning; it supports a feature importance that directly measures a feature's contribution to risk minimization. Our algorithms demonstrate strong performance on several datasets. Our code is available at \url{https://github.com/puetpaper/PUExtraTrees}. | Accept | This paper proposes a decision tree learning approach with only positive and unlabeled examples. The reviewers are concerned that the novelty of the proposed approach is not high. However, they appreciate that the paper is clearly written, the idea of the proposed approach is reasonable, and comprehensive experiments are conducted. Therefore, I recommend accepting the paper. | train | [
"tg0BHoeSzzY",
"D8X3C4oi2G",
"JpEjz1SCg0",
"1yGldID9rl3",
"oz6EQdzH_Ed",
"iTRnjtwEIEm",
"doF618FlvKa",
"nFgDuRlWmaqv",
"3RlXSa_PZGqz",
"6KUNTOTB7As",
"KgOmzizOI6ZJ",
"RwsuQiJiKpB",
"OrIGr0egJdp",
"hZEpgUB4O-A",
"2VQsThWtaDim",
"TezVR3T713w",
"Hd_O0jnmH5",
"Be3_pEU2JJ",
"xhXsADCmB... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Dear Reviewer J3hT, thank you for your helpful comments and discussion so far. We believe we've addressed your remaining concerns on the novelty and significance of our work by highlighting the non-trivial aspects of our work, the importance of filling in a gap of effective PU tree-based methods, the usefulness o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"6KUNTOTB7As",
"JpEjz1SCg0",
"1yGldID9rl3",
"BHPHqG0AtJ",
"X3BhwOWafHp",
"hZEpgUB4O-A",
"nFgDuRlWmaqv",
"TezVR3T713w",
"6KUNTOTB7As",
"KgOmzizOI6ZJ",
"ACVjXFY8oGe",
"ACVjXFY8oGe",
"BHPHqG0AtJ",
"BHPHqG0AtJ",
"X3BhwOWafHp",
"xhXsADCmBRp",
"xhXsADCmBRp",
"nips_2022_B3hDVlw95r",
"ni... |
nips_2022_xpR25Tsem9C | Missing Data Imputation and Acquisition with Deep Hierarchical Models and Hamiltonian Monte Carlo | Variational Autoencoders (VAEs) have recently been highly successful at imputing and acquiring heterogeneous missing data. However, within this specific application domain, existing VAE methods are restricted by using only one layer of latent variables and strictly Gaussian posterior approximations. To address these limitations, we present HH-VAEM, a Hierarchical VAE model for mixed-type incomplete data that uses Hamiltonian Monte Carlo with automatic hyper-parameter tuning for improved approximate inference. Our experiments show that HH-VAEM outperforms existing baselines in the tasks of missing data imputation and supervised learning with missing features. Finally, we also present a sampling-based approach for efficiently computing the information gain when missing features are to be acquired with HH-VAEM. Our experiments show that this sampling-based approach is superior to alternatives based on Gaussian approximations. | Accept | Thanks to the authors for this submission. The reviewers agreed that this work presented an interesting and novel combination of techniques to achieve good imputation results. The reviewers also agree that the author-response and subsequent revisions have improved the submission and addressed the vast majority of reviewer concerns — in fact most reviewers increased their scores throughout the process. I believe this work is technically sound and of interest to the broader community. | train | [
"sxD5KfNFR6A",
"t3cMPSAX07",
"DJUMsa81CAZ",
"A8nz3pzGQdH",
"COchSIGCfE",
"aN1mx46SAXY",
"3uddW4OPlWR",
"UJDmu8DOys",
"b17L6xwqqEP",
"Yh8fmMpJ1UH",
"inRJWTtrPes",
"0oMG00HvyJ4",
"bIotwjH47lJ",
"_EZ8vMBySB",
"KRKMitX6Za",
"tGhxyOgXsGo",
"9ScX_HtYJ_R",
"grs6zIQiDss",
"PWjbbJ6ISVd",
... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",... | [
" Yes, it resolves my concern. Now I would like to recommend the paper for acceptance.\n\nThank you so much for addressing all of my concerns.",
" We would like to thank you again for reviewing our paper. We hope that after **we have added the requested experiments on image imputation with CelebA, demonstrating t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3,
4
] | [
"COchSIGCfE",
"I_D05aNQ85o",
"h1atDa_06Ic",
"bIotwjH47lJ",
"3uddW4OPlWR",
"pNoEBryaPKM",
"UJDmu8DOys",
"b17L6xwqqEP",
"KRKMitX6Za",
"0oMG00HvyJ4",
"PWjbbJ6ISVd",
"bIotwjH47lJ",
"nips_2022_xpR25Tsem9C",
"7rnrD0h7ZJj",
"h1atDa_06Ic",
"pNoEBryaPKM",
"40ujMTDhcAl",
"PWjbbJ6ISVd",
"dk... |
nips_2022_73h4EZYtSht | RényiCL: Contrastive Representation Learning with Skew Rényi Divergence | Contrastive representation learning seeks to acquire useful representations by estimating the shared information between multiple views of data. Here, the quality of learned representations is sensitive to the choice of data augmentation: as harder data augmentations are applied, the views share more task-relevant information, but also more task-irrelevant information that can hinder the generalization capability of the representation. Motivated by this, we present a new robust contrastive learning scheme, coined RényiCL, which can effectively manage harder augmentations by utilizing Rényi divergence. Our method is built upon the variational lower bound of a Rényi divergence, but a naive usage of a variational method exhibits unstable training due to the large variance. To tackle this challenge, we propose a novel contrastive objective that conducts variational estimation of a skew Rényi divergence and provides a theoretical guarantee on how variational estimation of skew divergence leads to stable training. We show that Rényi contrastive learning objectives perform innate hard negative sampling and easy positive sampling simultaneously so that they can selectively learn useful features and ignore nuisance features. Through experiments on ImageNet, we show that Rényi contrastive learning with stronger augmentations outperforms other self-supervised methods without extra regularization or computational overhead. Also, we validate our method on various domains such as graph and tabular datasets, showing empirical gains over original contrastive methods. | Accept | The reviewers reached a consensus that this paper is a nice addition to NeurIPS. Please refer to the reviews and the authors' responses for the reviewers' opinions on the strengths and weaknesses of the paper.
| train | [
"VqrN4yFgjQ",
"tCYxMvyenvP",
"Bb1lPgsslO",
"8TWrw4NM95",
"kDOttSplV9",
"SDkYYVLERJ",
"9IDDpKlq2Ze",
"LudN3p5Lgmh",
"swFLpbFzI2",
"cWoXFCqh9Nc",
"4d7ellaUGma",
"VpVYH4Kyvqi",
"dmkX-GVSsHC",
"3wqqSpxr_mF",
"EdvwAiDxwro",
"gxPSjRQdbEU",
"fuf7V2jgbcL",
"COBMt-z0zPp",
"dS-UEKvmZ9",
... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Great! We sincerely appreciate the valuable discussions throughout the rebuttal period, it really helped strengthening our paper. \n\nMeanwhile, we are still open to discussion about any aspect of our paper. Please let us know if there is any further questions. \n\nThank you, \n\nPaper 10007 Authors",
" Thanks ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5,
4
] | [
"tCYxMvyenvP",
"9IDDpKlq2Ze",
"SDkYYVLERJ",
"kDOttSplV9",
"gxPSjRQdbEU",
"3wqqSpxr_mF",
"LudN3p5Lgmh",
"swFLpbFzI2",
"cWoXFCqh9Nc",
"dmkX-GVSsHC",
"nips_2022_73h4EZYtSht",
"UXMX1mHF84",
"UXMX1mHF84",
"fPkxpzXNKkD",
"dS-UEKvmZ9",
"dS-UEKvmZ9",
"COBMt-z0zPp",
"nips_2022_73h4EZYtSht",... |
nips_2022_QDPonrGtl1 | Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees | Communication compression is a crucial technique for modern distributed learning systems to alleviate their communication bottlenecks over slower networks. Despite recent intensive studies of gradient compression for data parallel-style training, compressing the activations for models trained with pipeline parallelism is still an open problem. In this paper, we propose AQ-SGD, a novel activation compression algorithm for communication-efficient pipeline parallelism training over slow networks. Different from previous efforts in activation compression, instead of compressing activation values directly, AQ-SGD compresses the changes of the activations. This allows us to show, to the best of our knowledge for the first time, that one can still achieve $O(1/\sqrt{T})$ convergence rate for non-convex objectives under activation compression, without making assumptions on gradient unbiasedness that do not hold for deep learning models with non-linear activation functions. We then show that AQ-SGD can be optimized and implemented efficiently, without additional end-to-end runtime overhead. We evaluated AQ-SGD to fine-tune language models with up to 1.5 billion parameters, compressing activation to 2-4 bits. AQ-SGD provides up to $4.3\times$ end-to-end speed-up in slower networks, without sacrificing model quality. Moreover, we also show that AQ-SGD can be combined with state-of-the-art gradient compression algorithms to enable end-to-end communication compression: All communications between machines, including model gradients, forward activations, and backward gradients are compressed into lower precision. This provides up to $4.9\times$ end-to-end speed-up, without sacrificing model quality. | Accept | In this paper, authors propose to speed up the fine tuning of large models over slow networks by compressing *deltas* of activations (vs activations themselves), so as to reduce the computation cost.
Original reviews were mixed, but at the end of the discussion period, all reviewers are leaning towards acceptance. The main issues that were raised are:
* The motivation for training very large models over slow networks
* The limited amount of metrics to validate the quality and robustness of the optimization process
* Concerns about the scalability of the method (storage requirements) and its applicability to the online setting
I consider that these concerns have been mostly addressed during the discussion period by the authors, who also remained honest about some of the limitations of their method.
In my opinion, the pros of this work (a practically useful idea) outweigh the cons (it may only be useful in somewhat niche settings), and I thus recommend acceptance. | train | [
"zLESpKmJcxD",
"bnWKcCdMmhD",
"CDS3R_KSKv",
"VYaTbnN6wDNa",
"_foHzBUovQz",
"wL0Uldy3o7V",
"S2pxAnJEB_J",
"wYDpvSeKHy-",
"VA_RUS1pV5Xo",
"S_v0J08QfHC",
"Aa40zGdcyNf",
"QwNp9ps06Dn",
"BdxZIshVqSZ",
"L5QJRzvU-z5",
"5XtAFPegncw"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed responses and revisions. As I respect the authors' efforts and other reviewers' opinions, I will raise my point to 5. But, I still have a concern on the assumption (i.e., distributed computing system with very slow network + fine-tuning LM). Since PLM is big enough to represent the fin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4,
3
] | [
"wYDpvSeKHy-",
"CDS3R_KSKv",
"_foHzBUovQz",
"5XtAFPegncw",
"L5QJRzvU-z5",
"BdxZIshVqSZ",
"BdxZIshVqSZ",
"QwNp9ps06Dn",
"QwNp9ps06Dn",
"QwNp9ps06Dn",
"nips_2022_QDPonrGtl1",
"nips_2022_QDPonrGtl1",
"nips_2022_QDPonrGtl1",
"nips_2022_QDPonrGtl1",
"nips_2022_QDPonrGtl1"
] |
nips_2022_bI1XXtO-hs2 | Benefits of Permutation-Equivariance in Auction Mechanisms | Designing an incentive-compatible auction mechanism that maximizes the auctioneer's revenue while minimizing the bidders' ex-post regret is an important yet intricate problem in economics. Remarkable progress has been achieved through learning the optimal auction mechanism by neural networks. In this paper, we consider the popular additive valuation and symmetric valuation setting; i.e., the valuation for a set of items is defined as the sum of all items' valuations in the set, and the valuation distribution is invariant when the bidders and/or the items are permuted. We prove that permutation-equivariant neural networks have significant advantages: the permutation-equivariance decreases the expected ex-post regret and improves the model generalizability, while keeping the expected revenue invariant. This implies that the permutation-equivariance helps approach the theoretically optimal dominant strategy incentive compatible condition, and reduces the required sample complexity for desired generalization. Extensive experiments fully support our theory. To the best of our knowledge, this is the first work towards understanding the benefits of permutation-equivariance in auction mechanisms. | Accept | Most reviews are in the positive direction. The reviewers gave comments on having better discussions of related work and improving the presentation by making definitions clearer. I think the authors can further improve the paper based on these comments. | train | [
"r6jkcI-GN8",
"MPQrJgDAdD9",
"19E3PnLb2VP",
"PRNfZhStqE8",
"6_dOiJowuo-",
"oQwrU5o4nLW",
"iqFCELuRhs0",
"Kw2nRFRoyFfg",
"JFgyDMH6y0L",
"KMnwJqrEh4sz",
"00VRBZYgmsw",
"r1xM9svpaiC",
"mvgL5gHPk_cv",
"PVaLIcd0_zqv",
"VnU8SRZhA4I",
"a_G79GsTc9",
"OPTVxbozXDS",
"eFDC1Q9QvCL",
"xRFeRzj... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
" Thank you very much for your suggestions and support! The typo has been addressed.",
" Thank you very much for making those changes. The context as well as your contributions are much clearer to me now. I've updated my score.\n\nI see one minor typo in your revision: \"full-connected\" --> \"fully-connected\"."... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
2,
4
] | [
"MPQrJgDAdD9",
"JFgyDMH6y0L",
"PRNfZhStqE8",
"unW_8EhBQir",
"unW_8EhBQir",
"woo44flUFVy",
"Kw2nRFRoyFfg",
"B3UUHpYcwVa",
"00VRBZYgmsw",
"00VRBZYgmsw",
"VnU8SRZhA4I",
"5cnYWIrUJSn",
"5cnYWIrUJSn",
"5cnYWIrUJSn",
"5cnYWIrUJSn",
"unW_8EhBQir",
"unW_8EhBQir",
"unW_8EhBQir",
"pbkF6LVE... |
nips_2022_5Ap96waLr8A | Efficient Methods for Non-stationary Online Learning | Non-stationary online learning has drawn much attention in recent years. In particular, \emph{dynamic regret} and \emph{adaptive regret} are proposed as two principled performance measures for online convex optimization in non-stationary environments. To optimize them, a two-layer online ensemble is usually deployed due to the inherent uncertainty of the non-stationarity, in which a group of base-learners are maintained and a meta-algorithm is employed to track the best one on the fly. However, the two-layer structure raises the concern about the computational complexity--those methods typically maintain $O(\log T)$ base-learners simultaneously for a $T$-round online game and thus perform multiple projections onto the feasible domain per round, which becomes the computational bottleneck when the domain is complicated. In this paper, we present efficient methods for optimizing dynamic regret and adaptive regret, which reduce the number of projections per round from $O(\log T)$ to $1$. Moreover, our obtained algorithms require only one gradient query and one function evaluation at each round. Our technique hinges on the reduction mechanism developed in parameter-free online learning and requires non-trivial twists on non-stationary online methods. Empirical studies verify our theoretical findings.
| Accept | The authors have successfully developed a powerful surrogate-based reduction framework for greatly reducing the projection complexity of online learning algorithms with low adaptive and dynamic regret while preserving regret bounds. They further demonstrate how to use this approach for several recent algorithms with the best-known regret bounds. An additional result is the first algorithm that enjoys a small-loss type bound for the interval dynamic regret (for convex, smooth functions), and the algorithm achieving this also requires only one projection onto the feasible set per round. The reviewers are unanimous in the high quality of this work. This is an impressive and welcome contribution to the proceedings. Congratulations on your fine work! | train | [
"9RNTZNN_fve",
"jv0Nh_OXqzk",
"2cOWelYabVx",
"0aUt5UabjQR",
"VFCq8TyVRfw",
"NPs3XVs1zZk",
"83kkkeDi0YJ",
"KQVoZPfqNTf",
"aYrP4ThSKSD",
"WzSPTdHaoEA",
"9pAX8pBE7KB",
"HKeR58nf-h7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I continue to think this is a good paper we should accept.",
" Thank you for addressing the reviewers' issues --- and thank you to the other reviewers for their comments. \n\nThis paper both promises impact on the community, and is very clearly written. I will continue to recommend acceptance.",
" I thank the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"83kkkeDi0YJ",
"KQVoZPfqNTf",
"0aUt5UabjQR",
"VFCq8TyVRfw",
"HKeR58nf-h7",
"9pAX8pBE7KB",
"WzSPTdHaoEA",
"aYrP4ThSKSD",
"nips_2022_5Ap96waLr8A",
"nips_2022_5Ap96waLr8A",
"nips_2022_5Ap96waLr8A",
"nips_2022_5Ap96waLr8A"
] |
nips_2022_Adl-fs-8OzL | Continuous MDP Homomorphisms and Homomorphic Policy Gradient | Abstraction has been widely studied as a way to improve the efficiency and generalization of reinforcement learning algorithms. In this paper, we study abstraction in the continuous-control setting. We extend the definition of MDP homomorphisms to encompass continuous actions in continuous state spaces. We derive a policy gradient theorem on the abstract MDP, which allows us to leverage approximate symmetries of the environment for policy optimization. Based on this theorem, we propose an actor-critic algorithm that is able to learn the policy and the MDP homomorphism map simultaneously, using the lax bisimulation metric. We demonstrate the effectiveness of our method on benchmark tasks in the DeepMind Control Suite. Our method's ability to utilize MDP homomorphisms for representation learning leads to improved performance when learning from pixel observations. | Accept | The paper proposes a new algorithm for representation learning in RL based on MDP homomorphisms. The paper presents a nice theory, the algorithm is well motivated, and the experiments are convincing. The concerns of the reviewers, such as comparisons with DBC or the DM Control Suite with distractions, have been addressed adequately by the authors. All reviewers evaluated the paper very positively and I join their decision. | test | [
"WLikaND4aq",
"N1W2igpVZJ",
"3DdPyBdkaZJ",
"izroVBBahMX",
"IS_mhRcsfJ8",
"x6eiuKbcTi9",
"ALpmENVfjWIK",
"YMlYlMDZHSW",
"U_B3Sl68I_E",
"HqI0dBOskyz",
"4jIlmjLOwk-",
"y1Y13CW1c6p",
"EtM8GorxOD",
"Z6yzKOjxkm"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply and acknowledging that extending the theoretical results to approximate MDP homomorphism can be a future step. ",
" Thank you for acknowledging our theoretical contributions and re-adjusting your rating! We agree on the comparison with DrQ-v2, as we have used all of the hyperparameters ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"izroVBBahMX",
"3DdPyBdkaZJ",
"IS_mhRcsfJ8",
"U_B3Sl68I_E",
"x6eiuKbcTi9",
"YMlYlMDZHSW",
"4jIlmjLOwk-",
"Z6yzKOjxkm",
"EtM8GorxOD",
"y1Y13CW1c6p",
"nips_2022_Adl-fs-8OzL",
"nips_2022_Adl-fs-8OzL",
"nips_2022_Adl-fs-8OzL",
"nips_2022_Adl-fs-8OzL"
] |
nips_2022_zyrBT58h_J | Sustainable Online Reinforcement Learning for Auto-bidding | Recently, the auto-bidding technique has become an essential tool to increase the revenue of advertisers. Facing the complex and ever-changing bidding environments in the real-world advertising system (RAS), state-of-the-art auto-bidding policies usually leverage reinforcement learning (RL) algorithms to generate real-time bids on behalf of the advertisers. Due to safety concerns, it was believed that the RL training process can only be carried out in an offline virtual advertising system (VAS) that is built based on the historical data generated in the RAS. In this paper, we argue that there exist significant gaps between the VAS and RAS, making the RL training process suffer from the problem of inconsistency between online and offline (IBOO). Firstly, we formally define the IBOO and systematically analyze its causes and influences. Then, to avoid the IBOO, we propose a sustainable online RL (SORL) framework that trains the auto-bidding policy by directly interacting with the RAS, instead of learning in the VAS. Specifically, based on our proof of the Lipschitz smooth property of the Q function, we design a safe and efficient online exploration (SER) policy for continuously collecting data from the RAS. Meanwhile, we derive the theoretical lower bound on the safety of the SER policy. We also develop a variance-suppressed conservative Q-learning (V-CQL) method to effectively and stably learn the auto-bidding policy with the collected data. Finally, extensive simulated and real-world experiments validate the superiority of our approach over the state-of-the-art auto-bidding algorithm. | Accept | The authors address the issue of inconsistencies between modeled and training data for auto-bidding RL policies and demonstrate the efficacy of their approach both analytically and experimentally. | val | [
"JgvipMBHei",
"EXv29x0Kn6Z",
"Av63bZvWt3",
"pfniFCAjRy",
"qFxnUcsXnS0",
"BvsZTnUpX8u",
"z07-mIeU43q",
"93Q_vREV9H",
"8k6eYnj_YYZ",
"NTplbAQMtaRo",
"9bogcglB8jRD",
"2-P9OcXJDm3",
"WXxfrxyb4WP",
"jKHZGWOR_kx"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your reply and the final rating. We really appreciate the time and effort that you have dedicated to our paper!",
" Thank you very much for your reply and the acknowledgement of our responses! We really appreciate your value feedbacks to our work! Thank you again, and looking forward to ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"EXv29x0Kn6Z",
"pfniFCAjRy",
"BvsZTnUpX8u",
"qFxnUcsXnS0",
"z07-mIeU43q",
"8k6eYnj_YYZ",
"93Q_vREV9H",
"jKHZGWOR_kx",
"NTplbAQMtaRo",
"WXxfrxyb4WP",
"2-P9OcXJDm3",
"nips_2022_zyrBT58h_J",
"nips_2022_zyrBT58h_J",
"nips_2022_zyrBT58h_J"
] |
nips_2022_M4OllVd70mJ | Learning to Branch with Tree MDPs | State-of-the-art Mixed Integer Linear Programming (MILP) solvers combine systematic tree search with a plethora of hard-coded heuristics, such as branching rules. While approaches to learn branching strategies have received increasing attention and have shown very promising results, most of the literature focuses on learning fast approximations of the \emph{strong branching} rule. Instead, we propose to learn branching rules from scratch with Reinforcement Learning (RL). We revisit the work of Etheve et al. (2020) and propose a generalization of Markov Decision Processes (MDPs), which we call \emph{tree MDP}, that provides a more suitable formulation of the branching problem. We derive a policy gradient theorem for tree MDPs that exhibits a better credit assignment compared to its temporal counterpart. We demonstrate through computational experiments that this new framework is suitable to tackle the learning-to-branch problem in MILP, and improves the learning convergence. | Accept | The paper studies the MILP problem by providing a tree MDP framework for a more suitable formulation of the branching problem.
The reviewers believe that this approach is relevant and novel. In the first round of the review, the reviewers identified a number of concerns, such as the applicability of the tree MDP, the comparison of baselines, and presentation clarity. The authors addressed these issues in a satisfactory way during the rebuttal phase. All the reviewers unanimously agree to accept the paper. | train | [
"6epl5Q2KKmx",
"wH4D2wqCTsm",
"uZ_H9V79fjP",
"dOaL1OayzS",
"xvQpAxJDulH",
"jxf8Ww9zMOS",
"ZsWQu67V_Ug",
"RxZ1XxU8yw3",
"cUaQnx2UQ9_",
"xHA6V2nlw_"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed responses! After my initial review and reading the responses + changes to the paper, I believe that the paper makes a valuable contribution and would like to increase my score.",
" We thank the reviewer for their valuable time and feedback. We have made modifications to the paper that... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"ZsWQu67V_Ug",
"xHA6V2nlw_",
"nips_2022_M4OllVd70mJ",
"cUaQnx2UQ9_",
"cUaQnx2UQ9_",
"RxZ1XxU8yw3",
"xHA6V2nlw_",
"nips_2022_M4OllVd70mJ",
"nips_2022_M4OllVd70mJ",
"nips_2022_M4OllVd70mJ"
] |
nips_2022_nrOLtfeiIdh | Learning Recourse on Instance Environment to Enhance Prediction Accuracy | Machine Learning models are often susceptible to poor performance on instances sampled from bad environments. For example, an image classifier could provide low accuracy on images captured under low lighting conditions. In high-stakes ML applications, such as AI-driven medical diagnostics, a better option could be to provide recourse in the form of alternative environment settings in which to recapture the instance for more reliable diagnostics. In this paper, we propose a model called {\em RecourseNet} that learns to apply recourse on the space of environments so that the recoursed instances are amenable to better predictions by the classifier. Learning to output optimal recourse is challenging because we do not assume access to the underlying physical process that generates the recoursed instances. Also, the optimal setting could be instance-dependent --- for example the best camera angle for object recognition could be a function of the object's shape. We propose a novel three-level training method that (a) Learns a classifier that is optimized for high performance under recourse, (b) Learns a recourse predictor when the training data may contain only limited instances under good environment settings, and (c) Triggers recourse selectively only when recourse is likely to improve classifier confidence. | Accept | The paper proposes a recourse approach that recommends how to improve performance on instances by modifying their environment. The paper is well motivated and provides a novel approach that is empirically demonstrated to be useful, though the empirical evaluation is limited. Reviewers agree that this paper addresses an important question that has more recently started to get attention, and that the contribution is novel, creative and significant. The quality of the write-up could be improved, and I encourage the authors to do so for the camera-ready version. | train | [
"vzeQTlE1Ds6",
"muNgr4TwMc6",
"nycKEPhQo9K",
"p7ty-tJXvtD",
"e0V07bOg8P",
"kydHyT9mspB",
"sYnBy3alRUk",
"2pDKvCQ9HQ",
"vrXgvMRbwZZ",
"QXNB8J7GyeF",
"m5JrYRomR9c",
"0X1YaoZ8tGc",
"K7otgbBetl_",
"K9MRrEzfjU",
"0OoxO2U9SZj",
"mEGQlK-SxX3"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" On using recourse term...\n\nResponse: Traditional methods attempt to recourse by finding perturbation in the input space that in turn encourages the classifier to predict the correct label. These perturbations are mostly limited to additive transformations [9, 13, 29]. We offer recourse in the environment space ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"nycKEPhQo9K",
"p7ty-tJXvtD",
"QXNB8J7GyeF",
"m5JrYRomR9c",
"kydHyT9mspB",
"sYnBy3alRUk",
"0X1YaoZ8tGc",
"nips_2022_nrOLtfeiIdh",
"mEGQlK-SxX3",
"0OoxO2U9SZj",
"K9MRrEzfjU",
"K7otgbBetl_",
"nips_2022_nrOLtfeiIdh",
"nips_2022_nrOLtfeiIdh",
"nips_2022_nrOLtfeiIdh",
"nips_2022_nrOLtfeiIdh... |
nips_2022_p0LJa6_XHM_ | Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch | As the curation of data for machine learning becomes increasingly automated, dataset tampering is a mounting threat. Backdoor attackers tamper with training data to embed a vulnerability in models that are trained on that data. This vulnerability is then activated at inference time by placing a "trigger'' into the model's input. Typical backdoor attacks insert the trigger directly into the training data, although the presence of such an attack may be visible upon inspection. In contrast, the Hidden Trigger Backdoor Attack achieves poisoning without placing a trigger into the training data at all. However, this hidden trigger attack is ineffective at poisoning neural networks trained from scratch. We develop a new hidden trigger attack, Sleeper Agent, which employs gradient matching, data selection, and target model re-training during the crafting process. Sleeper Agent is the first hidden trigger backdoor attack to be effective against neural networks trained from scratch. We demonstrate its effectiveness on ImageNet and in black-box settings. Our implementation code can be found at: https://github.com/hsouri/Sleeper-Agent. | Accept | This paper introduces a clean label poisoning attack to backdoor neural network models.
The reviewers unanimously voted to accept this paper and I agree: it is a strong
technical paper that is well written. The experiments were initially limited in some
areas, but the authors' rebuttal addressed many of these concerns.
| train | [
"uGrGNtATWZ3",
"Ho8Y-FxzyUQ",
"ZYA9P-YagNu",
"tM_TroCHRqq",
"wy2ZSQ7O6GEc",
"M3NrM4abAaB",
"25Nre5cXM6O",
"pq3_w6nMnnC",
"w6devSJ6y6Zc",
"6vFnbjxY4T",
"6wpPK1zZp0O",
"kFy0hQ-ggOE",
"h5YGgq3V2Q-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response to address my concern. I suggest you note the accuracy of each pre-trained model. I have increased my score to 5 accordingly. ",
" Thanks for the response. My concerns have been properly addressed. I am happy to increase the score to 5.",
" Thank you for the detailed responses. The au... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
4
] | [
"6wpPK1zZp0O",
"tM_TroCHRqq",
"M3NrM4abAaB",
"wy2ZSQ7O6GEc",
"h5YGgq3V2Q-",
"kFy0hQ-ggOE",
"6wpPK1zZp0O",
"6vFnbjxY4T",
"nips_2022_p0LJa6_XHM_",
"nips_2022_p0LJa6_XHM_",
"nips_2022_p0LJa6_XHM_",
"nips_2022_p0LJa6_XHM_",
"nips_2022_p0LJa6_XHM_"
] |
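
As an aside on the gradient-matching idea mentioned in the Sleeper Agent abstract above, the sketch below shows a generic gradient-alignment objective of the kind used by such attacks. It is a minimal illustration, not the paper's implementation: the `loss_fn(model, batch)` signature, the batches, and the use of cosine similarity over flattened parameter gradients are all assumptions here, and the full attack additionally relies on data selection and target-model retraining during crafting.

```python
import torch

def gradient_matching_loss(model, loss_fn, poisoned_batch, adversarial_batch):
    # Align the training gradient induced by the (visually clean) poisoned images
    # with the gradient of the attacker's objective on trigger-patched targets.
    params = [p for p in model.parameters() if p.requires_grad]
    g_poison = torch.autograd.grad(loss_fn(model, poisoned_batch), params, create_graph=True)
    g_target = torch.autograd.grad(loss_fn(model, adversarial_batch), params)
    dot = sum((gp * gt).sum() for gp, gt in zip(g_poison, g_target))
    norm = (sum(gp.pow(2).sum() for gp in g_poison).sqrt()
            * sum(gt.pow(2).sum() for gt in g_target).sqrt())
    return 1.0 - dot / norm  # 1 - cosine similarity, minimised over the poison perturbations
```

Minimising this quantity with respect to small perturbations of the poison images is what allows the trigger itself to stay out of the training data.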
nips_2022_oDoj_LKI3JZ | Sparse Gaussian Process Hyperparameters: Optimize or Integrate? | The kernel function and its hyperparameters are the central model selection choice in a Gaussian process (Rasmussen and Williams, 2006).
Typically, the hyperparameters of the kernel are chosen by maximising the marginal likelihood, an approach known as Type-II maximum likelihood (ML-II). However, ML-II does not account for hyperparameter uncertainty, and it is well-known that this can lead to severely biased estimates and an underestimation of predictive uncertainty. While there are several works which employ a fully Bayesian characterisation of GPs, relatively few propose such approaches for the sparse GP paradigm. In this work we propose an algorithm for sparse Gaussian process regression which leverages MCMC to sample from the hyperparameter posterior within the variational inducing point framework of (Titsias, 2009). This work is closely related to (Hensman et al., 2015b) but side-steps the need to sample the inducing points, thereby significantly improving sampling efficiency in the Gaussian likelihood case. We compare this scheme against natural baselines in the literature, including stochastic variational GPs (SVGPs), and provide an extensive computational analysis.
| Accept | The paper presents a new method for scalable inference in Gaussian regression based on the inducing-variable formalism. It shows that it is possible to sample covariance hyper-parameters while avoiding sampling the inducing variables, a consequence of their doubly collapsed bound. The reviewers agree that it is a technically solid paper and the authors have addressed their concerns satisfactorily, with one of the reviewers raising their scores and the authors providing additional results. I believe this work is worth presenting at NeurIPS and, therefore, recommend its acceptance. | train | [
"iv-6CHD-yU",
"ItSdUeq3bS9",
"rlXCjKFdrAc",
"Cphj7_HW3I",
"NTuAZj83vI2",
"phu_OkKcz1",
"VyQmZ2aIUTS",
"H697W0RwReq",
"C0gzKUdVxIp",
"jyeiQGlOPAG",
"kUTxbmwIGvfL",
"4oC1wpYtXDI",
"fDuZLjpCfey",
"XJIkksT_HE",
"e4O39mx9lNn",
"0Ml-hzYlko",
"8dMV3WHBYwT"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response and feedback.\n\nWe agree that the benefits of the proposed method should be contextualised along with the compute cost. Note that we distinguish between two types of compute time. 1) only the time it takes for sampling from the intractable posterior over unknowns approximated ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
2
] | [
"NTuAZj83vI2",
"phu_OkKcz1",
"Cphj7_HW3I",
"4oC1wpYtXDI",
"H697W0RwReq",
"VyQmZ2aIUTS",
"jyeiQGlOPAG",
"C0gzKUdVxIp",
"XJIkksT_HE",
"0Ml-hzYlko",
"e4O39mx9lNn",
"8dMV3WHBYwT",
"nips_2022_oDoj_LKI3JZ",
"nips_2022_oDoj_LKI3JZ",
"nips_2022_oDoj_LKI3JZ",
"nips_2022_oDoj_LKI3JZ",
"nips_20... |
nips_2022_CQaqJDWUGJ | MCL-GAN: Generative Adversarial Networks with Multiple Specialized Discriminators | We propose a framework of generative adversarial networks with multiple discriminators, which collaborate to represent a real dataset more effectively. Our approach facilitates learning a generator consistent with the underlying data distribution based on real images and thus mitigates the chronic mode collapse problem. From the inspiration of multiple choice learning, we guide each discriminator to have expertise in a subset of the entire data and allow the generator to find reasonable correspondences between the latent and real data spaces automatically without extra supervision for training examples. Despite the use of multiple discriminators, the backbone networks are shared across the discriminators and the increase in training cost is marginal. We demonstrate the effectiveness of our algorithm using multiple evaluation metrics in the standard datasets for diverse tasks. | Accept | The paper considers a GAN setting with multiple discriminators, a topic that has been studied quite a few times before. It proposes a novel approach based on multiple choice learning in order to specialize discriminators to certain modes of the data distribution, with the aim of mitigating collapse. They demonstrate this clustering effect qualitatively, and present quantitative results on a number of tasks. Parameter-sharing keeps the computational burden in check, while the number of discriminators is determined adaptively.
Reviewers generally found the problem of interest, and the method simple, reasonably motivated, not burdensome in the additional hyperparameters, and generally applicable. A common complaint was the lack of high-resolution results. While it is important not to unduly limit acceptance of papers wherein the authors are operating under a constrained compute budget, I believe that at this stage the request for some megapixel results was reasonable, and of scientific interest, as techniques which show promise at smaller scales often show little at larger scales.
The authors did provide these results in rebuttal, along with a great number of other requested comparisons and revisions that I feel improve the empirical rigour of the work. As of the end of the discussion period, two reviewers (oqkx, FAZc) lean reject; FAZc did not respond to, or acknowledge, the rebuttal, in which it appears to me their concerns were addressed, and therefore I am discounting their score in my evaluation. oqkx's concerns can be summarized as a) incrementality; b) speculation on tuning difficulty; c) unconvinced by the explanation of advantages over GMAN; d) scale of experiments. The rebuttal seems to have addressed all of oqkx's concerns except a).
I don't believe that concerns around incrementality of the approach justify rejection in this instance. The method is quite clearly differently motivated from GMAN, the appeal to multiple choice learning is to my knowledge novel and appears sensible, the empirical results (which the authors have gone the extra mile to improve during the review period) are extensive and convincing. Despite the borderline scores, I tentatively recommend acceptance. | train | [
"70HuekJUPUd",
"_Jp1-D8i-LZ",
"k_u7cL7gcin",
"KZJeHG-YSyE",
"M2NFbyl7GHa",
"ilGSrgmwcV",
"AOvlQ2IlJM",
"vHlkdh29XZD",
"s7Xu5abDZ0w",
"XySnNY1gnfzV",
"sA56ccRBN52",
"xuEK_Mv-SF1",
"st3RTT1K1SU",
"qe_h9o3BRBc",
"N-PpOwKt2E",
"hBfpZNsUWOS",
"O59sugVypd_",
"mcSfJKMI4Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I still have the concern that this extension to GMAN is incremental. Thus, I would like to keep my original rating. ",
" We summarize the additional experiments and clarifications on the concerns regarding the issues raised by reviewers. We have responded to all concerns regarding exper... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
3
] | [
"sA56ccRBN52",
"nips_2022_CQaqJDWUGJ",
"nips_2022_CQaqJDWUGJ",
"ilGSrgmwcV",
"sA56ccRBN52",
"AOvlQ2IlJM",
"xuEK_Mv-SF1",
"XySnNY1gnfzV",
"nips_2022_CQaqJDWUGJ",
"N-PpOwKt2E",
"hBfpZNsUWOS",
"O59sugVypd_",
"qe_h9o3BRBc",
"mcSfJKMI4Q",
"nips_2022_CQaqJDWUGJ",
"nips_2022_CQaqJDWUGJ",
"n... |
nips_2022_5oEk8fvJxny | Relaxing Equivariance Constraints with Non-stationary Continuous Filters | Equivariances provide useful inductive biases in neural network modeling, with the translation equivariance of convolutional neural networks being a canonical example. Equivariances can be embedded in architectures through weight-sharing and place symmetry constraints on the functions a neural network can represent. The type of symmetry is typically fixed and has to be chosen in advance. Although some tasks are inherently equivariant, many tasks do not strictly follow such symmetries. In such cases, equivariance constraints can be overly restrictive. In this work, we propose a parameter-efficient relaxation of equivariance that can effectively interpolate between (i) a non-equivariant linear product, (ii) a strict-equivariant convolution, and (iii) a strictly-invariant mapping. The proposed parameterization can be thought of as a building block to allow adjustable symmetry structure in neural networks. Compared to non-equivariant or strict-equivariant baselines, we experimentally verify that soft equivariance leads to improved performance in terms of test accuracy on CIFAR-10 and CIFAR-100 image classification tasks. | Accept | This paper initially received quite mixed reviews, but after a strong rebuttal from the authors, a number of reviewers increased their scores, leaving it with an overall borderline positive rating. The work was praised for the interesting and novel core idea, the potential significance of the work's contribution, generally clear writing, and providing good empirical results for small-scale image datasets. Concerns that remained after the authors' rebuttal included the lack of experiments on larger-scale datasets or architectures, issues in the presentation/care of some mathematical formulations and notation, and a lack of comparison to certain potential benchmark approaches.
Taking these into account, my own view of the work is that the strengths outweigh the negatives. In particular, I feel like most of the remaining concerns raised are either not reasonable or not appropriate grounds for rejection. In particular, I do not think that direct comparisons to Augerino are warranted beyond the new augmentation experiments in Appendix C (which I would consider promoting to the main paper for the final version) or that the issues with the mathematical formulations are severe enough to be reasonable grounds for rejection. Given that the paper has some clear strengths that all reviewers agree on, my recommendation is therefore that the paper should be accepted. | train | [
"QfF4m0v8giE",
"ouNh3UWiWBF",
"JniGvUfvI02",
"4TB6ygPUUm",
"i-LITE09oVN",
"yPX9JhboY1i",
"FzZBzkc9pl6",
"bylr1nV3I18",
"szZpONwZ7pD",
"G6DvlTygVmN",
"Y9G35LNIwjH",
"HquGjY1HmWo",
"hpbP3I-oI12",
"Ky7JzLdoV96",
"FSVWwgDW8"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for engaging in the rebuttal discussion. We are glad that the earlier response has partially addressed concerns. We are happy to answer follow-up questions on any remaining issues.\n\n> 1\n\nThank you for pointing this out. Parameter-sharing approaches rely on representing weights in a fully... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"ouNh3UWiWBF",
"JniGvUfvI02",
"szZpONwZ7pD",
"FzZBzkc9pl6",
"yPX9JhboY1i",
"G6DvlTygVmN",
"bylr1nV3I18",
"FSVWwgDW8",
"Ky7JzLdoV96",
"hpbP3I-oI12",
"HquGjY1HmWo",
"nips_2022_5oEk8fvJxny",
"nips_2022_5oEk8fvJxny",
"nips_2022_5oEk8fvJxny",
"nips_2022_5oEk8fvJxny"
] |
nips_2022_tfkeJG9yAX | A general approximation lower bound in $L^p$ norm, with applications to feed-forward neural networks | We study the fundamental limits to the expressive power of neural networks. Given two sets $F$, $G$ of real-valued functions, we first prove a general lower bound on how well functions in $F$ can be approximated in $L^p(\mu)$ norm by functions in $G$, for any $p \geq 1$ and any probability measure $\mu$. The lower bound depends on the packing number of $F$, the range of $F$, and the fat-shattering dimension of $G$. We then instantiate this bound to the case where $G$ corresponds to a piecewise-polynomial feedforward neural network, and describe in detail the application to two sets $F$: Hölder balls and multivariate monotonic functions. Besides matching (known or new) upper bounds up to log factors, our lower bounds shed some light on the similarities or differences between approximation in $L^p$ norm or in sup norm, solving an open question by DeVore et al. (2021). Our proof strategy differs from the sup norm case and uses a key probability result of Mendelson (2002). | Accept | The paper provides novel lower bounds on function approximation, relating $L^p$ norm approximation error to combinatorial complexity measures of both the approximating and approximated function classes. These bounds are instantiated for approximation via piecewise-polynomial neural networks.
There is a consensus among the reviewers that the results of the paper are novel, solve an open problem, and follow from deep technical insights. Consequently, I concur with the majority of reviewers and recommend acceptance of the paper. | test | [
"S2qxmZtbHzO",
"G9v6gN585ft",
"zVEIApdM-QX",
"n-krFgiZ5sI",
"amqprmbjMDZ",
"4ffS_5RMx7s",
"D8z9N5bogDq",
"VdnAsWhEtM__",
"64j6HtAqFGA",
"eV_-nuq7CE",
"iFCmux5p9no",
"8LDtZ1CEXE"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply! I am not totally convinced by the practicality of the suggested applications, so I will keep my score for now and possibly discuss the matter with the other reviewers.",
" I have read the response. The practical examples provided appear artificial and reverse-engineering the theoretical r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"4ffS_5RMx7s",
"amqprmbjMDZ",
"D8z9N5bogDq",
"nips_2022_tfkeJG9yAX",
"8LDtZ1CEXE",
"iFCmux5p9no",
"eV_-nuq7CE",
"64j6HtAqFGA",
"nips_2022_tfkeJG9yAX",
"nips_2022_tfkeJG9yAX",
"nips_2022_tfkeJG9yAX",
"nips_2022_tfkeJG9yAX"
] |
nips_2022_jm_opnaGmm5 | Generalization Bounds for Gradient Methods via Discrete and Continuous Prior | Proving algorithm-dependent generalization error bounds for gradient-type optimization methods has attracted significant attention recently in learning theory. However, most existing trajectory-based analyses require either restrictive assumptions on the learning rate (e.g., fast decreasing learning rate), or continuous injected noise (such as the Gaussian noise in Langevin dynamics). In this paper, we introduce a new discrete data-dependent prior to the PAC-Bayesian framework, and prove a high probability generalization bound of order $O(\frac{1}{n}\cdot \sum_{t=1}^T(\gamma_t/\varepsilon_t)^2\left\|{\mathrm{g}_t}\right\|^2)$ for Floored GD (i.e. a version of gradient descent with precision level $\varepsilon_t$), where $n$ is the number of training samples, $\gamma_t$ is the learning rate at step $t$, $\mathrm{g}_t$ is roughly the difference of the gradient computed using all samples and that using only prior samples. $\left\|{\mathrm{g}_t}\right\|$ is upper bounded by, and typically much smaller than, the gradient norm $\left\|{\nabla f(W_t)}\right\|$. We remark that our bound holds for nonconvex and nonsmooth scenarios. Moreover, our theoretical results provide numerically favorable upper bounds of testing errors (e.g., $0.037$ on MNIST). Using a similar technique, we can also obtain new generalization bounds for a certain variant of SGD. Furthermore, we study the generalization bounds for gradient Langevin Dynamics (GLD). Using the same framework with a carefully constructed continuous prior, we show a new high probability generalization bound of order $O(\frac{1}{n} + \frac{L^2}{n^2}\sum_{t=1}^T(\gamma_t/\sigma_t)^2)$ for GLD. The new $1/n^2$ rate is due to the concentration of the difference between the gradient of training samples and that of the prior. | Accept | Authors study generalization properties of gradient-based optimization algorithms via a PAC-Bayesian approach. Based on a data-dependent prior, authors establish a generalization bound for FGD, FSGD, GLD, and SGLD. The authors also provide convincing empirical studies to demonstrate that their results are not vacuous.
- Authors should better motivate the use of their seemingly synthetic algorithms.
- There are many typos in this paper beyond the ones listed by the reviewers. Although the reviewers did not raise this issue, authors should make sure their paper is ready for the camera-ready version if the paper is accepted. | train | [
"9Y_caXEPWFl",
"5Vqx1rT0Jt",
"s68cNSiNVXM",
"un-mqFYsi-O",
"N9HA2p2GubV",
"XjmuyVLZPNtM",
"B23dNi2Vf1gH",
"WBuvAigQ3TZ",
"IVTCGsaGBCy",
"G5RDjNIojQa",
"_BGiin0_iZ0",
"fx2YzXNVUKW",
"IYtJzsAih1L",
"nilnt3qRR1X",
"EGh42XmjBpQ",
"YgxY_QHxDt"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. While I am still quite positive about this paper, several reviewers flagged issues about the relevance of the FGD algorithm. While I see your point that this algorithm has nice analytical properties, I also see the point of the other reviewers. As a result, I am lowering my score to a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
4
] | [
"WBuvAigQ3TZ",
"s68cNSiNVXM",
"_BGiin0_iZ0",
"XjmuyVLZPNtM",
"B23dNi2Vf1gH",
"G5RDjNIojQa",
"IVTCGsaGBCy",
"YgxY_QHxDt",
"EGh42XmjBpQ",
"nilnt3qRR1X",
"IYtJzsAih1L",
"nips_2022_jm_opnaGmm5",
"nips_2022_jm_opnaGmm5",
"nips_2022_jm_opnaGmm5",
"nips_2022_jm_opnaGmm5",
"nips_2022_jm_opnaGm... |
nips_2022_KHoV9zn1jLE | Implicitly regularized interaction between SGD and the loss landscape geometry | We study unstable dynamics of stochastic gradient descent (SGD) and its impact on generalization in neural networks. We find that SGD induces an implicit regularization on the interaction between the gradient distribution and the loss landscape geometry. Moreover, based on the analysis of a concentration measure of the batch gradient, we propose a more accurate scaling rule, Linear and Saturation Scaling Rule (LSSR), between batch size and learning rate. | Reject | While this paper presents a series of interesting observations, e.g., self-regularization around the edge of stability and learning rate scaling, in my view it fails to communicate the scientific value of the work in a coherent way. For example, I find it puzzling that the main result of the paper "implicitly regularized interaction" is stated as a definition (Def. 4) instead of a theorem, proposition, or a hypothesis. During the discussion, I realized that the authors use the word "regularization" in a slightly non-standard way. My questions to the authors are: (1) what does it mean to regularize the interaction? and (2) how does it relate to generalization? | train | [
"YTVPXKoWk0O",
"3wXHJ2KWzm",
"B9_RUlbaOdf",
"QwY1aFLh28w",
"cVZ_d7a5wMa",
"xeKZFLZPIK3",
"981sI7T1d05",
"uIGHkQz9xtI",
"ReaYK30t2kK",
"CX6bRxAVzGG",
"KqI0iGGtgC_"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We hope to discuss more about the reviews, but unfortunately there is no enough time for discussion. From now on, we are sorry that we may not answer the follow-up reviews.",
" Thanks for the valuable comments, suggestions and efforts towards improving our manuscript. Please kindly check the responses below.",
... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"3wXHJ2KWzm",
"nips_2022_KHoV9zn1jLE",
"KqI0iGGtgC_",
"CX6bRxAVzGG",
"CX6bRxAVzGG",
"ReaYK30t2kK",
"ReaYK30t2kK",
"ReaYK30t2kK",
"nips_2022_KHoV9zn1jLE",
"nips_2022_KHoV9zn1jLE",
"nips_2022_KHoV9zn1jLE"
] |
nips_2022_DpKaP-PY8bK | Function Classes for Identifiable Nonlinear Independent Component Analysis | Unsupervised learning of latent variable models (LVMs) is widely used to represent data in machine learning. When such a model reflects the ground truth factors and the mechanisms mapping them to observations, there is reason to expect that such models allow generalisation in downstream tasks. It is however well known that such identifiability guarantees are typically not achievable without putting constraints on the model class. This is notably the case for nonlinear Independent Component Analysis, in which the LVM maps statistically independent variables to observations via a deterministic nonlinear function. Several families of spurious solutions that fit the data perfectly but do not correspond to the ground truth factors can be constructed in generic settings. However, recent work suggests that constraining the function class of such models may promote identifiability. Specifically, function classes with constraints on their partial derivatives, gathered in the Jacobian matrix, have been proposed, such as orthogonal coordinate transformations (OCT), which impose orthogonality of the Jacobian columns. In the present work, we prove that a subclass of these transformations, conformal maps, is identifiable and provide novel theoretical results suggesting that OCTs have properties that prevent families of spurious solutions from spoiling identifiability in a generic setting. | Accept | The paper studies identifiability of ICA for two families of non-linear functions: conformal maps and orthogonal coordinate transforms. For conformal maps, they prove identifiability for d > 2, improving an old '99 result for d=2 due to Hyvarinen and Pajunen. For orthogonal coord. transforms, they prove a weaker notion of "local" identifiability.
There was quite a lot of discussion on the various strengths and weaknesses of the paper.
(1) *Experiments*: Though the paper had very few experiments, the reviewers agreed that since the paper is primarily a theory paper, an extensive experimental section is not necessary.
(2) *Theory*: The reviewers found both the proofs of Theorems 2 and 3 quite interesting, involving new ideas. They're heavy on tools from complex analysis, which is not surprising given that conformal maps are natural through the lens of complex analysis; but they found the connections to PDEs interesting and potentially useful in the future. There were some potential worries about correctness, but no definite error was identified.
(3) *How strong of an assumption is conformality in practice*: The reviewers agree this is probably quite restrictive as an assumption, but identifiability of non-linear ICA is always going to require some strong conditions, and we're still very far from understanding when it's possible (when no auxiliary variables are involved). The paper shrinks the gap b/w theory and practice even if the theory has very strong assumptions. | val | [
"tgkHOSsP9jH",
"6t9BxqSh-jT",
"5CNQwXi1Nm9",
"DVWECZ6e-s",
"UZGasCLqz5f",
"TMvQZEtCKkT",
"EzO8llXROB",
"17e51kkLw19",
"R8B81NGkgB",
"aVYz-6d5D0r",
"1ucu4jSr8-z",
"ZFLemvUbbK",
"46gQIHdtYN2",
"kqG1lBYzSc8",
"lPPt1qod21J",
"U8GJ-MWyxpj",
"MTXicisdW7",
"Fc-tOaNEELw"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers, Area Chair, and Senior Area Chair,\n\nWe thank the reviewers for engaging actively in the discussion.\nHere, we gather key elements scattered across the multiple comments on this page, to provide you with an overview of the points raised by reviewers and how we addressed them.\nThere were no doubt... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"nips_2022_DpKaP-PY8bK",
"EzO8llXROB",
"DVWECZ6e-s",
"UZGasCLqz5f",
"U8GJ-MWyxpj",
"aVYz-6d5D0r",
"17e51kkLw19",
"1ucu4jSr8-z",
"46gQIHdtYN2",
"Fc-tOaNEELw",
"MTXicisdW7",
"U8GJ-MWyxpj",
"lPPt1qod21J",
"nips_2022_DpKaP-PY8bK",
"nips_2022_DpKaP-PY8bK",
"nips_2022_DpKaP-PY8bK",
"nips_2... |
nips_2022_-cBZMMTImxT | Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces | Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search, as they achieve good predictive performance with little or no manual tuning, naturally handle discrete feature spaces, and are relatively insensitive to outliers in the training data. Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function. To address both points simultaneously, we propose using the kernel interpretation of tree ensembles as a Gaussian Process prior to obtain model variance estimates, and we develop a compatible optimization formulation for the acquisition function. The latter further allows us to seamlessly integrate known constraints to improve sampling efficiency by considering domain-knowledge in engineering settings and modeling search space symmetries, e.g., hierarchical relationships in neural architecture search. Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints. | Accept | This paper presents a fairly neat take on combining tree ensembles with GPs by creating a kernel function from the tree ensembles. This not only allows for Bayesian optimization over discrete and mixed feature spaces--inheriting the usual advantage of tree ensembles--but allows for UCB/LCB to be optimized using a mixed integer programming tool.
Based on reviewer questions and responses, there are a few things that surfaced that I do think the paper would benefit from discussing further in the camera-ready version. In particular, while the setting here is certainly reasonable, I do think the paper would be overall better if it simply included an ablation of optimization performance versus the time allocated to Gurobi.
"xOWtokbRam",
"bV7SRBKhWo",
"8mcMU9OYmHq",
"9xKWxMlrglY",
"F9KNfwAg1C4",
"LIyc_1xoPvM",
"J6aiqAxoGr",
"phWwRBnvuir",
"ARDHj4lTfhO"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response and I appreciate their time and effort for addressing my comments. I increase my score to Borderline Accept. ",
" Sincere thanks to the reviewers for their constructive, thoughtful comments.\n\nAfter submitting, we realized we had missed a SMAC feature that i... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"F9KNfwAg1C4",
"nips_2022_-cBZMMTImxT",
"ARDHj4lTfhO",
"phWwRBnvuir",
"J6aiqAxoGr",
"J6aiqAxoGr",
"nips_2022_-cBZMMTImxT",
"nips_2022_-cBZMMTImxT",
"nips_2022_-cBZMMTImxT"
] |
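
For the record above, a minimal sketch of the "kernel interpretation of tree ensembles": two points are similar in proportion to how often they land in the same leaf. This generic leaf-agreement kernel is only an illustration; the paper's actual kernel construction, uncertainty estimates, and mixed-integer acquisition optimisation are more involved, and the random-forest surrogate and toy data below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def tree_ensemble_kernel(forest, Xa, Xb):
    # K(x, x') = fraction of trees in which x and x' fall into the same leaf.
    leaves_a = forest.apply(Xa)               # (n_a, n_trees) leaf indices
    leaves_b = forest.apply(Xb)               # (n_b, n_trees)
    same = leaves_a[:, None, :] == leaves_b[None, :, :]
    return same.mean(axis=-1)                 # (n_a, n_b), entries in [0, 1]

rng = np.random.default_rng(0)
X = rng.random((64, 3))
y = np.sin(2 * np.pi * X).sum(axis=1)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
K = tree_ensemble_kernel(forest, X, X)        # usable as a GP prior covariance for UCB/LCB
```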
nips_2022_dD3pwu4g8Fh | Anonymized Histograms in Intermediate Privacy Models | We study the problem of privately computing the $\mbox{\it anonymized histogram}$ (a.k.a. $\mbox{\it unattributed histogram}$), which is defined as the histogram without item labels. Previous works have provided algorithms with $\ell_1$- and $\ell_2^2$-errors of $O_\varepsilon(\sqrt{n})$ in the central model of differential privacy (DP).
In this work, we provide an algorithm with a nearly matching error guarantee of $\widetilde{O}_\varepsilon(\sqrt{n})$ in the shuffle DP and pan-private models. Our algorithm is very simple: it just post-processes the discrete Laplace-noised histogram! Using this algorithm as a subroutine, we show applications in privately estimating symmetric properties of distributions such as entropy, support coverage, and support size.
| Accept | This paper considers the release of anonymized histograms under differential privacy, and presents new (and simple) algorithms for the shuffle model and the pan-private model of differential privacy. The reviewers all agree that the problem and the results are interesting, and support accepting the paper. | train | [
"bsISxh1mSgo",
"aCOGOo7sUQU",
"WI_CUwqUQ8",
"12VzMCxCMT",
"Hm7ooKh7Zz",
"--cYWKnFwCL",
"a2cY4IZ0346",
"DWPxujk8xyM",
"UMm7equhmbk",
"tKluG3phE0R",
"XSVMO6_lGu_",
"sNu6hciAQ2",
"Hygrtr3axG3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the other reviewers' comments and the authors' responses to the time complexity analysis. Thanks to the authors for their feedback. All of my questions are addressed.\n\n",
" I'm pleased to see details for a near-linear time post-processing steps for Algorithm 1 and 2. Though I have not gotten a cha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
2
] | [
"DWPxujk8xyM",
"WI_CUwqUQ8",
"Hm7ooKh7Zz",
"UMm7equhmbk",
"--cYWKnFwCL",
"Hygrtr3axG3",
"sNu6hciAQ2",
"XSVMO6_lGu_",
"tKluG3phE0R",
"nips_2022_dD3pwu4g8Fh",
"nips_2022_dD3pwu4g8Fh",
"nips_2022_dD3pwu4g8Fh",
"nips_2022_dD3pwu4g8Fh"
] |
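
The record above says the central-model algorithm "just post-processes the discrete Laplace-noised histogram". The sketch below shows the first half of that recipe, discrete Laplace noise plus the trivial post-processing of clipping and sorting the counts, as an illustration only; the paper's actual post-processing step, which is what achieves the stated error guarantees, is more refined, and the privacy parameter and counts are made up.

```python
import numpy as np

def discrete_laplace(eps, size, rng):
    # P(k) proportional to exp(-eps * |k|), sampled as the difference of two
    # geometric variables (the +1 offsets of numpy's geometric cancel out).
    p = 1.0 - np.exp(-eps)
    return rng.geometric(p, size) - rng.geometric(p, size)

def noisy_anonymized_histogram(counts, eps, rng):
    # Labels are irrelevant for an anonymized histogram, so only the sorted
    # multiset of (noised, clipped) counts is released.
    noisy = counts + discrete_laplace(eps, counts.shape, rng)
    return np.sort(np.clip(noisy, 0, None))[::-1]

rng = np.random.default_rng(0)
print(noisy_anonymized_histogram(np.array([40, 25, 25, 7, 2, 1]), eps=1.0, rng=rng))
```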
nips_2022_lbQTJN42uea | Subquadratic Kronecker Regression with Applications to Tensor Decomposition | Kronecker regression is a highly-structured least squares problem $\min_{\mathbf{x}} \lVert \mathbf{K}\mathbf{x} - \mathbf{b} \rVert_{2}^2$, where the design matrix $\mathbf{K} = \mathbf{A}^{(1)} \otimes \cdots \otimes \mathbf{A}^{(N)}$ is a Kronecker product of factor matrices. This regression problem arises in each step of the widely-used alternating least squares (ALS) algorithm for computing the Tucker decomposition of a tensor. We present the first subquadratic-time algorithm for solving Kronecker regression to a $(1+\varepsilon)$-approximation that avoids the exponential term $O(\varepsilon^{-N})$ in the running time. Our techniques combine leverage score sampling and iterative methods. By extending our approach to block-design matrices where one block is a Kronecker product, we also achieve subquadratic-time algorithms for (1) Kronecker ridge regression and (2) updating the factor matrix of a Tucker decomposition in ALS, which is not a pure Kronecker regression problem, thereby improving the running time of all steps of Tucker ALS. We demonstrate the speed and accuracy of this Kronecker regression algorithm on synthetic data and real-world image tensors. | Accept | All reviewers felt that this paper made a solid technical contribution on algorithms for Kronecker regression and should be accepted to the conference.
| train | [
"Rij3oxKj3nr",
"71vE-JheOe9",
"7faxnOhN4Xt",
"oJDHNQpnRaJ",
"cR48brJYQpE",
"vk5lYLUOnO8a",
"XLMrKExydys",
"dR8WSPaVQIm",
"l5ENwh3vHOh",
"zszjPEZeWXc",
"Rnd-pUB1gny",
"yj3pnVjIiYF",
"x6uq2daMUxv",
"sn9TFe9_6aH"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing all my comments; I keep my score at acceptance; good luck!",
" Thank you for the second round of feedback!\n\n> In Appendix C, you replace the condition of $Nz = 0$ by adding a term $w ||Nz||^2$ to the objective for a large enough $w$ and then do unconstrained optimization. How large of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"dR8WSPaVQIm",
"7faxnOhN4Xt",
"XLMrKExydys",
"XLMrKExydys",
"nips_2022_lbQTJN42uea",
"sn9TFe9_6aH",
"sn9TFe9_6aH",
"x6uq2daMUxv",
"yj3pnVjIiYF",
"Rnd-pUB1gny",
"nips_2022_lbQTJN42uea",
"nips_2022_lbQTJN42uea",
"nips_2022_lbQTJN42uea",
"nips_2022_lbQTJN42uea"
] |
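
To illustrate the structure exploited in the record above, the snippet below contrasts a naive solve of a small Kronecker regression with one that uses the identity (A1 kron A2)^+ = A1^+ kron A2^+. This only demonstrates why the design matrix never needs to be materialised; it is not the paper's algorithm, which instead combines leverage-score sampling with iterative methods to reach subquadratic time. Shapes and data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((30, 4)), rng.standard_normal((20, 5))
b = rng.standard_normal(30 * 20)

# Naive: materialise K = A1 kron A2 (600 x 20) and call a dense least-squares solver.
K = np.kron(A1, A2)
x_naive, *_ = np.linalg.lstsq(K, b, rcond=None)

# Structured: with row-major vectorisation, K @ vec(X) = vec(A1 @ X @ A2.T),
# so the least-squares solution factorises through the per-factor pseudo-inverses.
B = b.reshape(30, 20)
x_struct = (np.linalg.pinv(A1) @ B @ np.linalg.pinv(A2).T).ravel()

assert np.allclose(x_naive, x_struct)
```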
nips_2022_jxezD-1XYr | CascadeXML: Rethinking Transformers for End-to-end Multi-resolution Training in Extreme Multi-label Classification | Extreme Multi-label Text Classification (XMC) involves learning a classifier that can assign an input with a subset of most relevant labels from millions of label choices. Recent approaches, such as XR-Transformer and LightXML, leverage a transformer instance to achieve state-of-the-art performance. However, in this process, these approaches need to make various trade-offs between performance and computational requirements. A major shortcoming, as compared to the Bi-LSTM based AttentionXML, is that they fail to keep separate feature representations for each resolution in a label tree. We thus propose CascadeXML, an end-to-end multi-resolution learning pipeline, which can harness the multi-layered architecture of a transformer model for attending to different label resolutions with separate feature representations. CascadeXML significantly outperforms all existing approaches with non-trivial gains obtained on benchmark datasets consisting of up to three million labels. Code for CascadeXML will be made publicly available at https://github.com/xmc-aalto/cascadexml. | Accept | This paper proposes CascadeXML, which is an end-to-end framework for the task of tree-based extreme multi-label text classification. It extracts the representations from different layers of a BERT model and then maps them to different levels of the hierarchical label tree (HLT).
The proposed method shows strong performance in P@k on benchmark datasets with an improved efficiency during inference compared to other state-of-the-art methods, including XR-Transformer and LightXML. Two of the reviewers pointed out the lack of ablation studies for the choice of intermediate layers of the BERT model and the mapping between those Transformer layers and the HLT layers. The problems were addressed in the updated version of the paper during rebuttal, and the reviewers increased their scores as a result.
Given that 3 out of the 4 reviewers give a score of 7, the recommendation is to accept the paper.
| train | [
"B1tjdqz9Wm6",
"TSwvYznAE4",
"uDfapg0xqFT",
"UdzarNFZz6q",
"Ff6XTMmFzvl",
"uj0-jPlXz4",
"i0VQH5prw58",
"vron1xUpD8T",
"-IaB1pSai7j",
"eACqNSUHhV-",
"iVl3UMOJNCu",
"VjTBP594sd",
"JgiXBytK66"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nmany thanks for your comments.\n\nRegarding your above question, please find our response below :\n\n**Number of HLT levels** : While constructing the HLT, the number of levels is a hyper-parameter, and it can be set to a reasonably value as desired. Larger values i.e. O(log L) resulting from bi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
4,
5
] | [
"TSwvYznAE4",
"uDfapg0xqFT",
"Ff6XTMmFzvl",
"uj0-jPlXz4",
"JgiXBytK66",
"VjTBP594sd",
"iVl3UMOJNCu",
"eACqNSUHhV-",
"nips_2022_jxezD-1XYr",
"nips_2022_jxezD-1XYr",
"nips_2022_jxezD-1XYr",
"nips_2022_jxezD-1XYr",
"nips_2022_jxezD-1XYr"
] |
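
A small sketch of the mechanism the record above relies on: reading per-layer representations out of a transformer so that different resolutions of a hierarchical label tree can attach to different depths. The model name, the choice of layers 4/8/12, and the use of the [CLS] vector are illustrative assumptions rather than CascadeXML's actual configuration.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("an example document to classify", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# out.hidden_states holds the embedding output plus one tensor per encoder layer,
# each of shape (batch, seq_len, hidden). One classification head per chosen layer
# can then score the coarse, intermediate, and leaf levels of the label tree.
coarse, mid, fine = (out.hidden_states[i][:, 0] for i in (4, 8, 12))  # [CLS] vectors
```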
nips_2022_IKcdgKKA_cs | Mathematically Modeling the Lexicon Entropy of Emergent Language | We formulate a stochastic process, FiLex, as a mathematical model of lexicon entropy in deep learning-based emergent language systems. Defining a model mathematically allows it to generate clear predictions which can be directly and decisively tested. We empirically verify across four different environments that FiLex predicts the correct correlation between hyperparameters (training steps, lexicon size, learning rate, rollout buffer size, and Gumbel-Softmax temperature) and the emergent language's entropy in $20$ out of $20$ environment-hyperparameter combinations. Furthermore, our experiments reveal that different environments show diverse relationships between their hyperparameters and entropy which demonstrates the need for a model which can make well-defined predictions at a precise level of granularity. | Reject | This paper proposes FiLex -- a mathematical model to capture lexicon entropy in emergent language systems. The paper tackles an important and interesting problem in a field (emergent language) where relatively less theory currently exists. However, the reviewers find the experiments not convincing enough (e.g. they do not evaluate actual emergent language and instead use human languages) and lacking in scale. I do think the paper has some merits and can be strengthened further by addressing the reviewer comments, but the current version unfortunately seems below bar for acceptance. | train | [
"YutFdMFgdip",
"4sIhznPYuG",
"kU1UnCwEDg9",
"fwvJ8PJuUXM",
"peGeDe5ZYYt",
"yLHMqo7i_nd",
"EMud2WvFquC",
"btNRqoeMr8m",
"JBG0VO9zL7B"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors’ responses. This clears up some of my confusions about the intuitions and experiments. However, I still think that the current manuscript cannot overlook the weakness listed above, e.g. the experiments are not enough to support the intuitions, and the paper is not clarified clearly. The pap... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"fwvJ8PJuUXM",
"peGeDe5ZYYt",
"JBG0VO9zL7B",
"btNRqoeMr8m",
"EMud2WvFquC",
"nips_2022_IKcdgKKA_cs",
"nips_2022_IKcdgKKA_cs",
"nips_2022_IKcdgKKA_cs",
"nips_2022_IKcdgKKA_cs"
] |
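
For reference alongside the record above, lexicon entropy, the quantity FiLex models, reduces to the Shannon entropy of a token-usage distribution (taken here to be empirical usage counts, which is an assumption); a tiny helper with made-up counts is shown below.

```python
import numpy as np

def lexicon_entropy(token_counts):
    # Shannon entropy (in nats) of the empirical distribution over lexicon tokens.
    p = np.asarray(token_counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

print(lexicon_entropy([120, 40, 25, 10, 5]))  # entropy of a 5-token lexicon
```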
nips_2022_aV9WSvM6N3 | Unsupervised Learning From Incomplete Measurements for Inverse Problems | In many real-world inverse problems, only incomplete measurement data are available for training which can pose a problem for learning a reconstruction function. Indeed, unsupervised learning using a fixed incomplete measurement process is impossible in general, as there is no information in the nullspace of the measurement operator. This limitation can be overcome by using measurements from multiple operators. While this idea has been successfully applied in various applications, a precise characterization of the conditions for learning is still lacking. In this paper, we fill this gap by presenting necessary and sufficient conditions for learning the underlying signal model needed for reconstruction which indicate the interplay between the number of distinct measurement operators, the number of measurements per operator, the dimension of the model and the dimension of the signals. Furthermore, we propose a novel and conceptually simple unsupervised learning loss which only requires access to incomplete measurement data and achieves a performance on par with supervised learning when the sufficient condition is verified. We validate our theoretical bounds and demonstrate the advantages of the proposed unsupervised loss compared to previous methods via a series of experiments on various imaging inverse problems, such as accelerated magnetic resonance imaging, compressed sensing and image inpainting. | Accept | This paper proposes an unsupervised learning algorithm for inverse problems using multiple incomplete measurement models. The authors presented theoretical results on the number of measurements per model and the number of models required for recovery under the assumption that the inputs have low-dimensional structures. In addition, a conceptually simple unsupervised learning loss is proposed and it only requires access to incomplete measurement data. The paper is well written, its theoretical analysis is strong, and the experimental results are convincing. | train | [
"QRe5i-Gukme",
"4o65w1bcPh",
"z9EE7xmDDe",
"rgIJ1dC024h",
"uUA85TDu0m0",
"MsxUextCXXW",
"xzq8bf7b70J",
"YPeEJcG8Lb",
"mwZl0F5PI77",
"lzruDWAtRIv"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I believe this is a strong paper and I will keep me current score. My one comment is that AmbientGAN can be applied in a similar manner as to your approach without the noise vector, I.e. according to the paper \n\nFast unsupervised mri reconstruction without fully-sampled ground truth... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"xzq8bf7b70J",
"z9EE7xmDDe",
"uUA85TDu0m0",
"lzruDWAtRIv",
"MsxUextCXXW",
"mwZl0F5PI77",
"YPeEJcG8Lb",
"nips_2022_aV9WSvM6N3",
"nips_2022_aV9WSvM6N3",
"nips_2022_aV9WSvM6N3"
] |
nips_2022_x5ysKCMXR5s | Support Recovery in Sparse PCA with Incomplete Data | We study a practical algorithm for sparse principal component analysis (PCA) of incomplete and noisy data.
Our algorithm is based on the semidefinite program (SDP) relaxation of the non-convex $l_1$-regularized PCA problem.
We provide theoretical and experimental evidence that SDP enables us to exactly recover the true support of the sparse leading eigenvector of the unknown true matrix, despite only observing an incomplete (missing uniformly at random) and noisy version of it.
We derive sufficient conditions for exact recovery, which involve matrix incoherence, the spectral gap between the largest and second-largest eigenvalues, the observation probability and the noise variance.
We validate our theoretical results with incomplete synthetic data, and show encouraging and meaningful results on a gene expression dataset. | Accept | The paper studies the sparse PCA problem — finding a sparse direction of large variance. It studies the natural convex relaxation, in which we lift the outer product xx^T to a matrix X, drop the rank constraint, and apply L1 regularization. In contrast to much existing work, here, the covariance matrix is partially observed. The proposal is to apply the convex relaxation, with the covariance matrix M^* replaced by a censored and noisy version M. The paper assumes that M^* has a sparse leading eigenvector, and asks when this approach correctly recovers the support of that vector.
Reviewers found the paper to be technically solid, with novel results on sparse PCA with missing data. The main concerns center around the presentation: the style is dense with definitions and conditions, perhaps to the detriment of the reader's insight. Arguably, a better-chosen set of sufficient conditions would lead to a more readable collection of results, leaving more space to convey what the results mean and how they were achieved.
| train | [
"pkOzRjwTr3I",
"oZ34445Nehy",
"bYgBGRwDBE",
"5d4YryeceYF",
"QWbGU2YfOkq",
"4Gb1qhhfBP",
"T5VOZYB8ybd",
"tLURBHYWux7",
"jtTGOWZ0h3b",
"xPUkTZiVF7Y",
"LXZrqFcQgTQ",
"OM65Qj2Lf1c",
"4NnGfLjkF4t",
"MBXzXkPK8H"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Our supplementary material has been updated, aiming to address all the comments from reviewers on additional experiments, but not the main text.\nWe will follow the due process, and in case the paper gets accepted, we will definitely revise the main text to incorporate suggestions from reviewers, for which we wil... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
4
] | [
"oZ34445Nehy",
"xPUkTZiVF7Y",
"4NnGfLjkF4t",
"OM65Qj2Lf1c",
"LXZrqFcQgTQ",
"MBXzXkPK8H",
"4NnGfLjkF4t",
"OM65Qj2Lf1c",
"LXZrqFcQgTQ",
"LXZrqFcQgTQ",
"nips_2022_x5ysKCMXR5s",
"nips_2022_x5ysKCMXR5s",
"nips_2022_x5ysKCMXR5s",
"nips_2022_x5ysKCMXR5s"
] |
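
For the record above, a minimal CVXPY rendering of the standard SDP relaxation of l1-regularised PCA that this line of work builds on. The matrix M would be assembled from the incomplete, noisy observations, the regularisation weight lam is a free parameter, and the support-recovery post-processing is omitted; all of these are assumptions here rather than the paper's exact procedure.

```python
import cvxpy as cp
import numpy as np

def sparse_pca_sdp(M, lam):
    # maximise <M, X> - lam * ||X||_1  subject to  trace(X) = 1, X PSD.
    # The support of the sparse leading eigenvector is read off the support of X.
    d = M.shape[0]
    X = cp.Variable((d, d), PSD=True)
    objective = cp.Maximize(cp.trace(M @ X) - lam * cp.sum(cp.abs(X)))
    cp.Problem(objective, [cp.trace(X) == 1]).solve()
    return X.value

M = np.eye(5) + 0.1 * np.random.default_rng(0).standard_normal((5, 5))
M = (M + M.T) / 2                     # symmetrise the observed matrix
X_hat = sparse_pca_sdp(M, lam=0.05)
```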
nips_2022_aVnAsHaawE3 | ULNeF: Untangled Layered Neural Fields for Mix-and-Match Virtual Try-On | Recent advances in neural models have shown great results for virtual try-on (VTO) problems, where a 3D representation of a garment is deformed to fit a target body shape. However, current solutions are limited to a single garment layer, and cannot address the combinatorial complexity of mixing different garments. Motivated by this limitation, we investigate the use of neural fields for mix-and-match VTO, and identify and solve a fundamental challenge that existing neural-field methods cannot address: the interaction between layered neural fields. To this end, we propose a neural model that untangles layered neural fields to represent collision-free garment surfaces. The key ingredient is a neural untangling projection operator that works directly on the layered neural fields, not on explicit surface representations. Algorithms to resolve object-object interaction are inherently limited by the use of explicit geometric representations, and we show how methods that work directly on neural implicit representations could bring a change of paradigm and open the door to radically different approaches. | Accept | The reviewers found this paper novel and ambitious (neural virtual try-ons), well written, with good qualitative results, and thought the mix-n-match is a good addition.
They were concerned about certain exposition parts (section 4.2), and the experimental setup that lacked quantitative evaluation.
I encourage the authors to take these comments to heart while preparing their revision.
| train | [
"duH78fyz7F",
"OUrMVmxL_mN",
"guSUmXzEmKU",
"hr-hlKn97Ij",
"ayJp1NpKENu",
"1JCgHB5iIR7",
"3ys4XdYvWEH",
"LQ-7X0JG585",
"aUzi11DlHyL",
"U-Fo418W9eY",
"Ds-hHppxTd",
"8mQZjCAb1N1"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the author for clarification! It makes more sense to me, and I'm happy to see the performance of ULNef in this setting!\n \nI'm still not fully convinced by the novelty of the paper, but I'm happy to increase my score to a boardline accept.\n\n",
" Thanks for asking this question, it is important that t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"OUrMVmxL_mN",
"guSUmXzEmKU",
"LQ-7X0JG585",
"nips_2022_aVnAsHaawE3",
"8mQZjCAb1N1",
"Ds-hHppxTd",
"U-Fo418W9eY",
"aUzi11DlHyL",
"nips_2022_aVnAsHaawE3",
"nips_2022_aVnAsHaawE3",
"nips_2022_aVnAsHaawE3",
"nips_2022_aVnAsHaawE3"
] |
nips_2022_jOYdlD4oYrn | Private Isotonic Regression | In this paper, we consider the problem of differentially private (DP) algorithms for isotonic regression. For the most general problem of isotonic regression over a partially ordered set (poset) $\mathcal{X}$ and for any Lipschitz loss function, we obtain a pure-DP algorithm that, given $n$ input points, has an expected excess empirical risk of roughly $\mathrm{width}(\mathcal{X}) \cdot \log|\mathcal{X}| / n$, where $\mathrm{width}(\mathcal{X})$ is the width of the poset. In contrast, we also obtain a near-matching lower bound of roughly $(\mathrm{width}(\mathcal{X}) + \log |\mathcal{X}|) / n$, that holds even for approximate-DP algorithms. Moreover, we show that the above bounds are essentially the best that can be obtained without utilizing any further structure of the poset.
In the special case of a totally ordered set and for $\ell_1$ and $\ell_2^2$ losses, our algorithm can be implemented in near-linear running time; we also provide extensions of this algorithm to the problem of private isotonic regression with additional structural constraints on the output function. | Accept | Most reviewers found the paper well written with no serious doubts regarding the correctness. We hope authors incorporate the comments from the reviewers in their final revision to improve the presentation. | train | [
"nyiaEGnW5j",
"95K4Oh62BI6",
"udAX2c4YitB",
"SYmrbd2eUOo",
"MODJ0PLc0Nk",
"x0sxdPUqMCu",
"tdSjmeqqADu",
"i6nQ1AK2ZMw",
"5RzvWbJLmtU",
"pCROERZUusG",
"v2CZOFT35Ec"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clear response, I don't have any further comments.",
" Thank you for the response, which has resolved my questions.",
" Also, no need to apologize for typos. They happen to us all, and all the tyme.",
" We apologize for the typos in the paper and thank you for bringing this to our attention. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"MODJ0PLc0Nk",
"tdSjmeqqADu",
"SYmrbd2eUOo",
"v2CZOFT35Ec",
"pCROERZUusG",
"5RzvWbJLmtU",
"i6nQ1AK2ZMw",
"nips_2022_jOYdlD4oYrn",
"nips_2022_jOYdlD4oYrn",
"nips_2022_jOYdlD4oYrn",
"nips_2022_jOYdlD4oYrn"
] |
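
As context for the record above, the non-private problem being privatised is ordinary isotonic regression over a totally ordered set, which scikit-learn solves with pool-adjacent-violators; the toy data below are made up, and no DP mechanism or noise addition is shown. A private variant has to release something close to this fit while bounding each sample's influence, which is where the excess-risk bounds in the record come from.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

x = np.arange(20)
y = np.log1p(x) + np.random.default_rng(0).normal(0.0, 0.3, size=20)

iso = IsotonicRegression(increasing=True)
y_fit = iso.fit_transform(x, y)            # best non-decreasing fit under squared loss
assert np.all(np.diff(y_fit) >= -1e-12)    # monotone by construction
```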
nips_2022_Jd2RfKd4Mjz | Real-Valued Backpropagation is Unsuitable for Complex-Valued Neural Networks | Recently complex-valued neural networks have received increasing attention due to successful applications in various tasks and the potential advantages of better theoretical properties and richer representational capacity. However, the training dynamics of complex networks compared to real networks remains an open problem. In this paper, we investigate the dynamics of deep complex networks during real-valued backpropagation in the infinite-width limit via neural tangent kernel (NTK). We first extend the Tensor Program to the complex domain, to show that the dynamics of any basic complex network architecture is governed by its NTK under real-valued backpropagation. Then we propose a way to investigate the comparison of training dynamics between complex and real networks by studying their NTKs. As a result, we surprisingly prove that for most complex activation functions, the commonly used real-valued backpropagation reduces the training dynamics of complex networks to that of ordinary real networks as the widths tend to infinity, thus eliminating the characteristics of complex-valued neural networks. Finally, the experiments validate our theoretical findings numerically. | Accept | This paper extends Yang's Tensor Program paradigm to the setting of complex-valued neural networks and shows that under certain conditions the infinite-width training dynamics under real-valued backpropagation are equivalent to those of real-valued networks. Numerical evidence supporting the main conclusions is presented and a short discussion argues for the necessity of backpropagation algorithms specifically designed for complex networks.
Even though the idea is not entirely novel, the proof that real-valued backpropagation is unsuitable for complex-valued neural networks will likely surprise many readers, and it is useful to present a solid theoretical explanation, even if it only holds in a certain infinite-width regime. The extension of Tensor Programs to complex networks is also a nice result in itself, if not groundbreaking. The reviewers generally appreciate these strengths of the paper.
The paper would be more impactful if further effort were devoted to the practical implications of the results, including a direct comparison with a backpropagation method designed for complex networks. It would be particularly useful to characterize the types of functions that are more readily learned with a method based on, e.g., Wirtinger calculus.
In light of the above strengths and weaknesses, this is a borderline paper. Ultimately, I believe it falls just above the threshold and I recommend acceptance. | train | [
"pH5JFaeH4it",
"ySLZzy5_yZ_",
"JSuu0b1CeSp",
"pjhVeWN6GM3",
"uXUt76413Wj",
"EDw-LSBiS4E",
"ML9JFFbBObE",
"oBacKl4sKfr"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer JGv9,\n\nMany thanks for your kind reply, acknowledging our contribution, and voting for the acceptance of our submission. We will revise our paper according to the suggestions.\n\n---\n\nAnswer: Thank you very much for your feedback. \n\nEven if NTKs have some successful practical applications (suc... | [
-1,
-1,
-1,
-1,
-1,
8,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"ySLZzy5_yZ_",
"pjhVeWN6GM3",
"oBacKl4sKfr",
"ML9JFFbBObE",
"EDw-LSBiS4E",
"nips_2022_Jd2RfKd4Mjz",
"nips_2022_Jd2RfKd4Mjz",
"nips_2022_Jd2RfKd4Mjz"
] |
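To make the notion of "real-valued backpropagation" of a complex network in the abstract above concrete, the sketch below stores the real and imaginary parts of a complex weight matrix as ordinary real tensors, so that standard (real) autograd is what trains them. This only illustrates the training convention being analyzed; it is not the paper's Tensor Program construction, and the class name and initialization are our own assumptions.

```python
import torch

class ComplexLinearRealBP(torch.nn.Module):
    """Complex linear map (Wr + i*Wi)(zr + i*zi) with the weights kept as two
    real tensors, so ordinary real-valued backpropagation updates them."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.Wr = torch.nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)
        self.Wi = torch.nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5)

    def forward(self, zr, zi):
        # real part: Wr zr - Wi zi ; imaginary part: Wr zi + Wi zr
        return zr @ self.Wr.T - zi @ self.Wi.T, zi @ self.Wr.T + zr @ self.Wi.T

layer = ComplexLinearRealBP(4, 3)
zr, zi = torch.randn(8, 4), torch.randn(8, 4)
out_r, out_i = layer(zr, zi)
loss = (out_r ** 2 + out_i ** 2).mean()  # real-valued loss
loss.backward()                          # standard real autograd
```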
nips_2022_pNHT6oBaPr8 | Learning Partial Equivariances From Data | Group Convolutional Neural Networks (G-CNNs) constrain learned features to respect the symmetries in the selected group, and lead to better generalization when these symmetries appear in the data. If this is not the case, however, equivariance leads to overly constrained models and worse performance. Frequently, transformations occurring in data can be better represented by a subset of a group than by a group as a whole, e.g., rotations in $[-90^{\circ}, 90^{\circ}]$. In such cases, a model that respects equivariance partially is better suited to represent the data. In addition, relevant transformations may differ for low and high-level features. For instance, full rotation equivariance is useful to describe edge orientations in a face, but partial rotation equivariance is better suited to describe face poses relative to the camera. In other words, the optimal level of equivariance may differ per layer. In this work, we introduce Partial G-CNNs: G-CNNs able to learn layer-wise levels of partial and full equivariance to discrete, continuous groups and combinations thereof as part of training. Partial G-CNNs retain full equivariance when beneficial, e.g., for rotated MNIST, but adjust it whenever it becomes harmful, e.g., for classification of 6/9 digits or natural images. We empirically show that partial G-CNNs pair G-CNNs when full equivariance is advantageous, and outperform them otherwise. We plan to release our code publicly at https://github.com/merlresearch/partial-gcnn. | Accept | All three reviewers lean toward acceptance. After the author-reviewer discussion, reviewers find most of their concerns clarified. Concerns about several experimental issues remain, and the authors provided reasonable responses. After careful consideration, AC recommends accepting the paper. | train | [
"VHjlyq3MX5y",
"UlRCW9AxXk-",
"Ud1khdm_fv",
"dFCt68-HuTb",
"iry8B58HFk5",
"fj4jQ0kcieK",
"aUv8IJBcmw8",
"Qbpf5HDvinG",
"s-VAm4z6jqw",
"93yVBnBcJkJ",
"TkOzOVs-Ccbd",
"Y1lYFIpLODM",
"kEqYgIwFwoe",
"wDuyLQrPbkq",
"kNFEA2bA3DA",
"kJuZqBtz0TZ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nWe thank you very much for your reply. \n\nBest regards,\n\nThe authors",
" That is right. What you describe is the left action of the group. \n\nIf we define the right action as $\\mathcal{L}_{u}(v) = (vu^{-1})$ we can also express this as a left action with some algebra. As explained -among ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"iry8B58HFk5",
"fj4jQ0kcieK",
"aUv8IJBcmw8",
"Qbpf5HDvinG",
"TkOzOVs-Ccbd",
"kEqYgIwFwoe",
"kJuZqBtz0TZ",
"wDuyLQrPbkq",
"93yVBnBcJkJ",
"wDuyLQrPbkq",
"Y1lYFIpLODM",
"kNFEA2bA3DA",
"kJuZqBtz0TZ",
"nips_2022_pNHT6oBaPr8",
"nips_2022_pNHT6oBaPr8",
"nips_2022_pNHT6oBaPr8"
] |
nips_2022_ob8tk9Q_2tN | A Variant of Anderson Mixing with Minimal Memory Size | Anderson mixing (AM) is a useful method that can accelerate fixed-point iterations by exploring the information from historical iterations. Despite its numerical success in various applications, the memory requirement in AM remains a bottleneck when solving large-scale optimization problems in a resource-limited machine. To address this problem, we propose a novel variant of AM method, called Min-AM, by storing only one vector pair, that is the minimal memory size requirement in AM. Our method forms a symmetric approximation to the inverse Hessian matrix and is proved to be equivalent to the full-memory Type-I AM for solving strongly convex quadratic optimization. Moreover, for general nonlinear optimization problems, we establish the convergence properties of Min-AM under reasonable assumptions and show that the mixing parameters can be adaptively chosen by estimating the eigenvalues of the Hessian. Finally, we extend Min-AM to solve stochastic programming problems. Experimental results on logistic regression and network training problems validate the effectiveness of the proposed Min-AM. | Accept | All reviewers were positive about this paper and the overall impression is very good - accept. | train | [
"7yw7T5UmI2",
"fpBtDJ_nEqM",
"YGq17pFpcJJ",
"BDngCWO7jOi",
"atZx6xMnRG3",
"SLZ9HAfixNU",
"bxZnj22Ii9",
"8UNB6iSOeEc",
"48jPoZvn_z1",
"0DjOtmwenhE",
"_2e_7ixw0k",
"o98T3qwoQvl",
"zCe4nZ3yrTj",
"q2Tj3eKM4bT",
"wH9qIAOBTM2",
"2BvK7cHebm3",
"vC4SYzPiHT8",
"0vhGVLsavZ",
"3iyOgR5uoIF",... | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thanks a lot for your support and comments. We compared the proposed method with the conjugate gradient method in Appendix A.2 in the revised manuscript. We will update the manuscript by further incorporating the connection between the proposed method and the conjugate gradient/residual method.",
" It is our pl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
5
] | [
"SLZ9HAfixNU",
"0DjOtmwenhE",
"48jPoZvn_z1",
"8UNB6iSOeEc",
"bxZnj22Ii9",
"o98T3qwoQvl",
"_2e_7ixw0k",
"zCe4nZ3yrTj",
"q2Tj3eKM4bT",
"wH9qIAOBTM2",
"nips_2022_ob8tk9Q_2tN",
"17TaCjGk4BQ",
"VhUzln4WAOK",
"VhUzln4WAOK",
"VhUzln4WAOK",
"3iyOgR5uoIF",
"0vhGVLsavZ",
"nips_2022_ob8tk9Q_2... |
nips_2022_TfMeY_L_l6t | A hybrid approach to seismic deblending: when physics meets self-supervision | To limit the time, cost, and environmental impact associated with the acquisition of seismic data, in recent decades considerable effort has been put into so-called simultaneous shooting acquisitions, where seismic sources are fired at short time intervals between each other. As a consequence, waves originating from consecutive shots are entangled within the seismic recordings, yielding so-called blended data. For processing and imaging purposes, the data generated by each individual shot must be retrieved. This process, called deblending, is achieved by solving an inverse problem which is heavily underdetermined. Conventional approaches rely on transformations that render the blending noise into burst-like noise, whilst preserving the signal of interest. Compressed sensing type regularization is then applied, where sparsity in some domain is assumed for the signal of interest. The domain of choice depends on the geometry of the acquisition and the properties of seismic data within the chosen domain. In this work, we introduce a new concept that consists of embedding a self-supervised denoising network into the Plug-and-Play (PnP) framework. A novel network is introduced whose design extends the blind-spot network architecture of Laine et al. (2019) for partially coherent noise (i.e., correlated in time). The network is then trained directly on the noisy input data at each step of the PnP algorithm. By leveraging both the underlying physics of the problem and the great denoising capabilities of our blind-spot network, our algorithm is shown to outperform an industry-standard method whilst being comparable in terms of computational cost. Moreover, being independent of the acquisition geometry, it can be easily applied to both marine and land data without
any significant modification. | Reject | The paper studies a seismic deblending problem. This is a problem in reflection seismology, in which multiple excitations are applied simultaneously, and then an underdetermined inverse problem is solved to recover the underlying composition of the earth. Existing approaches to this problem are mostly based on regularization — e.g., frequency domain sparsity. The paper proposes an alternative method based on plug-and-play-ADMM, with a self-supervised regularizer. The regularizer here is a “blind spot” network, which tries to predict a pixel based on its surroundings. In simulation studies (based on synthetic blending of real seismic data), the proposed algorithm outperforms the regularization approach.
Reviews of the paper were mixed: reviewers all recognized the careful, pedagogical manner in which the paper lays out its problem of interest. At the same time, several reviewers raised concerns that the exposition was overly focused on background material, at the expense of explaining the paper's technical contributions. Exposition aside, much of the discussion in the reviews and authors' response centers on the novelty and depth of the paper's technical contributions. The reviewers note that the application of self-supervised denoising within a plug-n-play framework is not a novelty of the paper (nor is it argued as one). Rather, the technical contribution lies in a combination of existing ideas (self-supervised denoising ala struct BS, plug-n-play) which is well suited to the reflection seismology application. Reviewers generally felt that the paper would be stronger if it focused more on this methodology and on the technical justification of the approach. While the paper introduces a method that has value for reflection seismology, in its current form the concerns are significant enough to place it below the bar for acceptance. | train | [
"oAdDR7OGL5X",
"moa9R3UQrTI",
"ZTZnX1GBxAD",
"CF7gHQfU668",
"0yT2GQwtZ5k",
"J-VNb-s4EJT",
"2MpshHCosF",
"mIzV5vl3gCa",
"UkkONLflRVF",
"HyUB-nL5gn",
"_eRv0hscmP"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to respond to our reply. The claim may be a bit strong, but it is nowhere in the manuscript, just in our response to your review. We do however believe that our proposed methodology may apply to a larger class of problems than RARE. The Noise2Noise approach requires two samples with ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
4
] | [
"moa9R3UQrTI",
"ZTZnX1GBxAD",
"_eRv0hscmP",
"HyUB-nL5gn",
"UkkONLflRVF",
"mIzV5vl3gCa",
"nips_2022_TfMeY_L_l6t",
"nips_2022_TfMeY_L_l6t",
"nips_2022_TfMeY_L_l6t",
"nips_2022_TfMeY_L_l6t",
"nips_2022_TfMeY_L_l6t"
] |
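The record above embeds a self-supervised denoiser into a Plug-and-Play scheme. The sketch below is only the generic PnP skeleton (a proximal-gradient loop with a pluggable denoiser standing in for the prior); it does not include the paper's blind-spot network or its per-iteration retraining, and the operator/denoiser callables are assumed inputs.

```python
import numpy as np

def pnp_pgd(A, At, y, denoiser, x0, step, iters=50):
    """Generic plug-and-play proximal gradient loop: a gradient step on the
    data-fidelity term 0.5*||A x - y||^2 followed by a denoiser acting as the
    prior.  `A`/`At` are the forward operator and its adjoint, `denoiser` is
    any callable; illustrative skeleton only."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        grad = At(A(x) - y)
        x = denoiser(x - step * grad)
    return x
```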
nips_2022_ftKnhsDquqr | Improved techniques for deterministic l2 robustness | Training convolutional neural networks (CNNs) with a strict 1-Lipschitz constraint under the l_{2} norm is useful for adversarial robustness, interpretable gradients and stable training. 1-Lipschitz CNNs are usually designed by enforcing each layer to have an orthogonal Jacobian matrix (for all inputs) to prevent the gradients from vanishing during backpropagation. However, their performance often significantly lags behind that of heuristic methods to enforce Lipschitz constraints where the resulting CNN is not provably 1-Lipschitz. In this work, we reduce this gap by introducing (a) a procedure to certify robustness of 1-Lipschitz CNNs by replacing the last linear layer with a 1-hidden layer MLP that significantly improves their performance for both standard and provably robust accuracy, (b) a method to significantly reduce the training time per epoch for Skew Orthogonal Convolution (SOC) layers (>30\% reduction for deeper networks) and (c) a class of pooling layers using the mathematical property that the l_{2} distance of an input to a manifold is 1-Lipschitz. Using these methods, we significantly advance the state-of-the-art for standard and provably robust accuracies on CIFAR-10 (gains of +1.79\% and +3.82\%) and similarly on CIFAR-100 (+3.78\% and +4.75\%) across all networks. | Accept | This paper introduces several new techniques to improve the $\ell_2$ adversarial robustness of CNNs, including approximating the gradient for Skew Orthogonal Convolutions (SOC), replacing the final linear layer with a 1-hidden-layer MLP, and introducing a new class of pooling layers. These techniques lead to improved efficiency and robust accuracies on CIFAR-10 and 100.
All reviewers agree that this paper has made interesting and solid contributions, and find the paper well-written. Therefore, I recommend it to be accepted to the conference. | train | [
"zapWr5qamK_",
"t7pmhyTYUxv",
"X2c7uqPhgo",
"B0FFT6Wq1xp",
"5iUMMP3DnGu",
"KGezJsrEsd",
"OiamgFIm0n",
"APxmR5arYFt"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your insightful comments.\n\n**Scale to larger problems**: Providing high robustness guarantees for problems with large number of classes is in general a difficult problem and remains an open direction of future research. In this paper, we focus on improving the certificates among classes with top-1... | [
-1,
-1,
-1,
-1,
5,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
2
] | [
"APxmR5arYFt",
"OiamgFIm0n",
"KGezJsrEsd",
"5iUMMP3DnGu",
"nips_2022_ftKnhsDquqr",
"nips_2022_ftKnhsDquqr",
"nips_2022_ftKnhsDquqr",
"nips_2022_ftKnhsDquqr"
] |
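One concrete piece of background for the record above: for a classifier whose logit map is 1-Lipschitz under the l2 norm, a prediction is certifiably unchanged within an l2 radius of (top-1 logit minus top-2 logit) divided by sqrt(2). The snippet below only evaluates that standard certificate from a batch of logits; it does not implement the paper's SOC layers or its last-layer MLP.

```python
import torch

def certified_l2_radius(logits):
    """Certified l2 radius for a network whose logit map is 1-Lipschitz (l2):
    radius = (top-1 logit - top-2 logit) / sqrt(2).  logits: (batch, classes)."""
    top2 = logits.topk(2, dim=-1).values
    return (top2[..., 0] - top2[..., 1]) / (2.0 ** 0.5)
```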
nips_2022_7pNV4PCjbQy | Augmented RBMLE-UCB Approach for Adaptive Control of Linear Quadratic Systems | We consider the problem of controlling an unknown stochastic linear system with quadratic costs -- called the adaptive LQ control problem. We re-examine an approach called ``Reward-Biased Maximum Likelihood Estimate'' (RBMLE) that was proposed more than forty years ago, and which predates the ``Upper Confidence Bound'' (UCB) method, as well as the definition of ``regret'' for bandit problems. It simply added a term favoring parameters with larger rewards to the criterion for parameter estimation. We show how the RBMLE and UCB methods can be reconciled, and thereby propose an Augmented RBMLE-UCB algorithm that combines the penalty of the RBMLE method with the constraints of the UCB method, uniting the two approaches to optimism in the face of uncertainty. We establish that theoretically, this method retains ${\mathcal{O}}(\sqrt{T})$ regret, the best known so far. We further compare the empirical performance of the proposed Augmented RBMLE-UCB and the standard RBMLE (without the augmentation) with UCB, Thompson Sampling, Input Perturbation, Randomized Certainty Equivalence and StabL on many real-world examples including flight control of Boeing 747 and Unmanned Aerial Vehicle. We perform extensive simulation studies showing that the Augmented RBMLE consistently outperforms UCB, Thompson Sampling and StabL by a huge margin, while it is marginally better than Input Perturbation and moderately better than Randomized Certainty Equivalence. | Accept | Reward-Biased Maximum Likelihood Estimate (RBMLE) is an approach to balance exploration-exploitation that is based on biasing models with smaller cost functions. The paper considers an augmented version of it (ARBMLE) that confines the search of the model to the confidence set used by an Upper Confidence Bound (UCB)-like algorithm. The paper considers the Linear Quadratic Regulation (LQR) with unknown dynamics, provides a regret bound of ARBMLE, which is comparable to that of UCB-based approach. The paper empirically shows that both ARBMLE and RBMLE outperform many other methods, particularly UCB-based ones.
We have both strong support in favour of acceptance of this paper and some less enthusiastic negative reviews. After reading the paper, the reviews, and the discussions, I am inclined to accept the paper. The main reason is that the paper considers a relatively less-known approach to exploration-exploitation problem, provides reasonable analysis (even though the tools might be standard), and shows promising empirical results. However, my recommendation should not be considered as dismissing the concerns of reviewers. I believe many of them are valid. I merely put less weight on them in my evaluation compared to the negative reviewers.
Let me emphasize a few points brought by reviewers and my own reading of this work. I hope the authors consider them in the revision of their paper.
- The writing quality varies a lot. The first two sections are written clearly and have some nice insights and intuitions, but then the writing quality deteriorates. For example, Section 3.2 becomes confusing (we have E_t, E_1, E_2 with different meanings), and Section 5 becomes a series of lemmas without much insight. Sections 6 and 7 are of better quality again.
- The sequence of lemmas in Section 5 is not very insightful. The authors have added a paragraph at the beginning of that section, but I believe that is not enough. My suggestion is that the authors either provide better intuition behind each of these lemmas, or move them to an appendix.
- Be clear about the dependence of the regret bound on the dimension of the system.
- Assumption 3 requires more discussion.
- The issue of tractability of solving the required optimization problem should be discussed explicitly.
- Given that [31] (Mete et al., "Reward biased maximum likelihood estimation for reinforcement learning", 2021) solves an arguably more general problem (RL instead of LQR), a detailed comparison is needed. What are the differences in insight, proof techniques, etc.? | train | [
"TYNG4I-pSx",
"ukjMvutOgoi",
"-uSnTaQ5FHq",
"cdDbJZ6O-8X",
"LMsjEna0MHD",
"8OwAy1n6jnw",
"NYgTmSMkkxe",
"XkB5V2H-QlQ",
"dA5s3xeg4Ds",
"hzhzv7UDmK_",
"9uOkpZ3XyPF",
"j1EdcLmbQO1",
"A6UtTMqTvVp",
"aInTSgxQsJJS",
"RXMhY3DkJjw",
"gnF9ejJXAUL",
"oUFuYmQ41k",
"gcmJBNyPs6F",
"eskY3xGOKX... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThanks again for your detailed review! The authors have responded to the reviews. Have they answered your questions satisfactorily? If not, what are the remaining issues?\nIf you have any further questions from them, please ask them now. The deadline for discussion between the reviewers and the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3,
4,
5
] | [
"oUFuYmQ41k",
"-uSnTaQ5FHq",
"8OwAy1n6jnw",
"XkB5V2H-QlQ",
"hzhzv7UDmK_",
"9uOkpZ3XyPF",
"j1EdcLmbQO1",
"A6UtTMqTvVp",
"nips_2022_7pNV4PCjbQy",
"oUFuYmQ41k",
"gnF9ejJXAUL",
"RXMhY3DkJjw",
"gcmJBNyPs6F",
"eskY3xGOKXZ",
"nips_2022_7pNV4PCjbQy",
"nips_2022_7pNV4PCjbQy",
"nips_2022_7pNV4... |
nips_2022_wmsw0bihpZF | Optimizing Data Collection for Machine Learning | Modern deep learning systems require huge data sets to achieve impressive performance, but there is little guidance on how much or what kind of data to collect. Over-collecting data incurs unnecessary present costs, while under-collecting may incur future costs and delay workflows. We propose a new paradigm for modeling the data collection workflow as a formal optimal data collection problem that allows designers to specify performance targets, collection costs, a time horizon, and penalties for failing to meet the targets. Additionally, this formulation generalizes to tasks requiring multiple data sources, such as labeled and unlabeled data used in semi-supervised learning. To solve our problem, we develop Learn-Optimize-Collect (LOC), which minimizes expected future collection costs. Finally, we numerically compare our framework to the conventional baseline of estimating data requirements by extrapolating from neural scaling laws. We significantly reduce the risks of failing to meet desired performance targets on several classification, segmentation, and detection tasks, while maintaining low total collection costs. | Accept | The paper addresses a critical problem in the era of massive-data-set ML, which is how to estimate the size of data that needs to be collected to train a model of a given performance level. This is not the first paper to propose the problem, but it does give improvements and extensions over previous work.
The reviewers reached clear consensus that the problem is important, that the paper is extremely clear, and that the methods appear to work well. However, there was concern that the current paper was perhaps not a significant step forward over a previous recent paper "How Much More Data Do I Need? Estimating Requirements for Downstream Tasks". In order to help resolve this question, I read both the current submission (and revision), but also went back to read the previous paper -- in addition, of course, to fully going through all reviews and author responses.
In the end, I think that there is enough new material here to justify publication. The use of the bootstrap seems to give useful improvement and is computationally tractable, and the extension to the multivariate / multi-source case is interesting and important. But equally importantly, in my opinion, is the fact that the empirical work is so thoroughly done and adds additional empirical grounding to a nascent-but-critical area of the field. For these reasons, I am discounting to some degree the arguments from the "borderline reject" review and am recommending acceptance.
Lastly, note that an earlier version of the paper included text from the related work that was disturbingly similar to that of a previously published paper. The authors have since revised the paper and removed the similar / near overlapping text, which is appropriate, but even so should consider this a strong warning to avoid submitting overlapping text in the future. | train | [
"LlBh8-9hZ6c",
"LV2Ijed1TeO",
"SecPakU0MaY",
"6JHys-NaT_f",
"mr87esdCUN",
"UEfc2acX6V",
"PbUYBCx-NpO",
"2fDMNRIBmEv"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. It would be helpful to include some of these details in the final version of the paper to help make it more readable.",
" We thank the reviewers for their detailed comments and positive feedback on the paper. All three reviewers stated that the problem is interesting, important, and cle... | [
-1,
-1,
-1,
-1,
-1,
4,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"6JHys-NaT_f",
"nips_2022_wmsw0bihpZF",
"UEfc2acX6V",
"PbUYBCx-NpO",
"2fDMNRIBmEv",
"nips_2022_wmsw0bihpZF",
"nips_2022_wmsw0bihpZF",
"nips_2022_wmsw0bihpZF"
] |
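The record above compares against extrapolation from neural scaling laws. The sketch below shows one common way such a baseline is set up: fit a saturating power law to (dataset size, validation score) pairs and invert it for the size that reaches a target. The functional form, initial guesses, and function name are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def estimate_data_requirement(ns, scores, target):
    """Scaling-law baseline: fit score(n) = a - b * n**(-c) to observed
    (dataset size, score) pairs and solve for the size n* at which the
    fitted curve reaches `target`."""
    f = lambda n, a, b, c: a - b * np.power(n, -c)
    (a, b, c), _ = curve_fit(f, ns, scores, p0=[max(scores), 1.0, 0.5], maxfev=10000)
    if target >= a:          # target lies above the fitted asymptote
        return np.inf
    return (b / (a - target)) ** (1.0 / c)
```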
nips_2022_7EP90NMAoK | On Sample Optimality in Personalized Collaborative and Federated Learning | In personalized federated learning, each member of a potentially large set of agents aims to train a model minimizing its loss function averaged over its local data distribution. We study this problem under the lens of stochastic optimization, focusing on a scenario with a large number of agents, that each possess very few data samples from their local data distribution. Specifically, we prove novel matching lower and upper bounds on the number of samples required from all agents to approximately minimize the generalization error of a fixed agent. We provide strategies matching these lower bounds, based on a gradient filtering approach: given prior knowledge on some notion of distance between local data distributions, agents filter and aggregate stochastic gradients received from other agents, in order to achieve an optimal bias-variance trade-off. Finally, we quantify the impact of using rough estimations of the distances between local distributions of agents, based on a very small number of local samples. | Accept | This paper addresses an important issue related to sample complexities for a personalized federated learning (PFL) problem. Its focus is on the case where a large number of agents collaborate to train the PFL problem, and each agent can have local data from a slightly different data distribution.
The authors provide both lower and upper bounds on the number of samples needed to achieve their goals. They also discuss techniques for achieving an optimal bias-variance trade-off.
Note: please try to incorporate the suggestions and discussions into the camera-ready version. | train | [
"K_IM2b3-2Dp",
"vDVn9DvBe43v",
"LO3T9HLZrY",
"KIvQSWD_ROF",
"Ba4u0OJrMcL",
"AaHYGLdB5h",
"2RX56siE3Q",
"HjRuJ5t8jD-",
"6nzYTLRKtKE",
"D2v9V8XBo3j"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your clarifications.",
" Thank you for commenting on the points raised by me during the review. I am happy with the responses, and look forward to the revised manuscript with the updates that the authors have promised to make. I have no further comments, and would like to recommend acceptance. ",
"... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
2
] | [
"LO3T9HLZrY",
"KIvQSWD_ROF",
"D2v9V8XBo3j",
"6nzYTLRKtKE",
"HjRuJ5t8jD-",
"2RX56siE3Q",
"nips_2022_7EP90NMAoK",
"nips_2022_7EP90NMAoK",
"nips_2022_7EP90NMAoK",
"nips_2022_7EP90NMAoK"
] |
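The abstract above describes a gradient-filtering strategy in which each agent aggregates other agents' stochastic gradients according to an estimated distance between local distributions. The toy sketch below uses the simplest such rule, a hard threshold on that distance; the threshold form is our illustrative assumption, and the paper's actual weighting is more refined.

```python
import numpy as np

def filtered_gradient(own_grad, other_grads, dists, radius):
    """Keep gradients only from agents whose estimated distribution distance is
    at most `radius`, then average them together with the agent's own gradient.
    A hard-threshold toy version of the bias/variance trade-off described above."""
    kept = [own_grad] + [g for g, d in zip(other_grads, dists) if d <= radius]
    return np.mean(kept, axis=0)
```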
nips_2022_d6mf9AFoR-O | Dual-discriminative Graph Neural Network for Imbalanced Graph-level Anomaly Detection | Graph-level anomaly detection aims to distinguish anomalous graphs in a graph dataset from normal graphs. Anomalous graphs represent a very few but essential patterns in the real world. The anomalous property of a graph may be referable to its anomalous attributes of particular nodes and anomalous substructures that refer to a subset of nodes and edges in the graph. In addition, due to the imbalance nature of the anomaly problem, anomalous information will be diluted by normal graphs with overwhelming quantities. Various anomaly notions in the attributes and/or substructures and the imbalance nature together make detecting anomalous graphs a non-trivial task. In this paper, we propose a dual-discriminative graph neural network for graph-level anomaly detection, namely iGAD. Specifically, an anomalous graph attribute-aware graph convolution and an anomalous graph substructure-aware deep Random Walk Kernel (deep RWK) are welded into a graph neural network to achieve the dual-discriminative ability on anomalous attributes and substructures. The deep RWK in iGAD makes up for the deficiency of graph convolution in distinguishing structural information caused by the simple neighborhood aggregation mechanism. Further, we propose a Point Mutual Information (PMI)-based loss function to target the problems caused by imbalance distributions. The loss function enables iGAD to capture the essential correlation between input graphs and their anomalous/normal properties. We evaluate iGAD on four real-world graph datasets. Extensive experiments demonstrate the superiority of iGAD on the graph-level anomaly detection task. | Accept | This paper proposes a dual-discriminative graph neural network for detecting anomalous graphs given a set of graphs. For imbalanced data, a point wise mutual information (PMI) loss is used. The extensive experimental results using real-world datasets demonstrate the effectiveness of the proposed method. The graph-level anomaly detection is an important problem, and the motivation and its challenges are well presented. This paper is well-written. The proposed method of considering anomalous node attributes and anomalous graph structures is interesting. | train | [
"jNwTjMt-lwD",
"o2t3wets_sU",
"qWPQS7D3LoL",
"a3nt7a22jMM",
"QQWq-qDLibx",
"f03SVAGVPN",
"MHHXLXoN1nr",
"Lp33xtfQXMW",
"pMUjQogKCi7",
"ufbdKBM3Iv",
"RisVm4CQVaj",
"T46T3FUmSPb",
"BsA24PGUlfK",
"2E8h6uDUly0",
"fAuOUA2IB0o",
"lnv70fjc_8V",
"1Nr2n8jmO5t",
"8yz5cdpaQjv",
"nSeKi0ZcwV"... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer 653N,\n\nThanks for your response. We have run the node-level anomaly detection algorithm DOMINANT on datasets MCF-7, PC-3, MOLT-4, and SW-620. DOMINANT is a representative and well-used node-level anomaly detection algorithm, which learns the anomaly score of a node based on node features and adjac... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"o2t3wets_sU",
"qWPQS7D3LoL",
"1Nr2n8jmO5t",
"QQWq-qDLibx",
"MHHXLXoN1nr",
"BsA24PGUlfK",
"pMUjQogKCi7",
"ufbdKBM3Iv",
"2E8h6uDUly0",
"fAuOUA2IB0o",
"1Nr2n8jmO5t",
"1Nr2n8jmO5t",
"nSeKi0ZcwV",
"lnv70fjc_8V",
"8yz5cdpaQjv",
"nips_2022_d6mf9AFoR-O",
"nips_2022_d6mf9AFoR-O",
"nips_202... |
nips_2022_HH3GHN_Q1Ba | Revisiting Populations in Multi-Agent Communication | Despite evidence from sociolinguistics that larger groups of speakers tend to develop more structured languages, the use of populations has failed to yield significant benefits in emergent multi-agent communication. In this paper we reassess the validity of the standard training protocol and illustrate its limitations. Specifically, we analyze population-level communication at the equilibrium in sender-receiver Lewis games. We find that receivers co-adapt to senders they are interacting with, which limits the effect of the population. Informed by this analysis, we propose an alternative training protocol based on ``partitioning'' agents. Partitioning isolates sender-receiver pairs, limits co-adaptation, and results in a new global optimization objective where agents maximize (1) their respective "internal" communication accuracy and (2) their alignment with other agents. In experiments, we find that agents trained in partitioned populations are able to communicate successfully with new agents which they have never interacted with and tend to develop a shared language. Moreover, we observe that larger populations develop languages that are more compositional. Our findings suggest that scaling up to populations in multi-agent communication can be beneficial, but that it matters how we scale up. | Reject | The paper investigates the effectiveness of population-level training of multi-agent communication strategies. Based on the finding that agents that interact with one another co-adapt, the paper proposes an alternative training process that "partitions" the population by constructing specific sender-receiver pairs in a manner that reduces co-adaptation. The paper shows that this partition-based strategy gives rise to a new optimization objective that encourages alignment across the population. Experiments demonstrate the emergence of mutual understanding between agents that have never communicated, and that partition-based training results in language that is more compositional compared to alternative strategies.
The paper was reviewed by three researchers who discussed the merits of the paper with the AC. There is general agreement that the paper provides an interesting discussion of and important insights into the effect of population-based optimization for emergent communication. Based on these insights, the authors propose a novel training procedure that experiments show is effective. Several reviewers commented that the paper is very well written and was enjoyable to read. The reviewers raised several concerns/questions that the authors made a concerted effort to address. However, a notable limitation of the current version of the paper is the lack of qualitative and quantitative comparisons to previous work. While the proposal to update the related work discussion is helpful, the paper should also provide experimental evaluations that compare to existing work, without which the significance of this particular training procedure is unclear. | train | [
"h_Gq6G4jj-Y",
"tqwOc_tK4lm",
"VvckbylJMn",
"wlyOE01b_R",
"jObuR9Ld5t",
"oSy5GeMn8y5",
"phR5c4L8Pok",
"4OWuH7eKFrR",
"Qq8rP39yhap",
"SqOg9_WLQA3",
"rW30O2Y5mkC",
"if4eMb2ePU"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for replying to my comments. I believe most of my concerns were addressed after the discussion. I think the paper is more clear now after the revisions made by the authors. \n\n---\nStated in the authors response:\n> Note that receiver is not updated. Moreover sender does not intervene in this procedure,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"phR5c4L8Pok",
"jObuR9Ld5t",
"nips_2022_HH3GHN_Q1Ba",
"nips_2022_HH3GHN_Q1Ba",
"oSy5GeMn8y5",
"SqOg9_WLQA3",
"4OWuH7eKFrR",
"rW30O2Y5mkC",
"if4eMb2ePU",
"nips_2022_HH3GHN_Q1Ba",
"nips_2022_HH3GHN_Q1Ba",
"nips_2022_HH3GHN_Q1Ba"
] |
nips_2022_4_oCZgBIVI | Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning | We study the asynchronous stochastic gradient descent algorithm, for distributed training over $n$ workers that might be heterogeneous. In this algorithm, workers compute stochastic gradients in parallel at their own pace and return them to the server without any synchronization.
Existing convergence rates of this algorithm for non-convex smooth objectives depend on the maximum delay $\tau_{\max}$ and reach an $\epsilon$-stationary point after $O\!\left(\sigma^2\epsilon^{-2}+ \tau_{\max}\epsilon^{-1}\right)$ iterations, where $\sigma$ is the variance of stochastic gradients. In this work (i) we obtain a tighter convergence rate of $O\!\left(\sigma^2\epsilon^{-2}+ \sqrt{\tau_{\max}\tau_{avg}}\epsilon^{-1}\right)$ *without any change in the algorithm* where $\tau_{avg}$ is the average delay, which can be significantly smaller than $\tau_{\max}$. We also provide (ii) a simple delay-adaptive learning rate scheme, under which asynchronous SGD achieves a convergence rate of $O\!\left(\sigma^2\epsilon^{-2}+ \tau_{avg}\epsilon^{-1}\right)$, and does not require any extra hyperparameter tuning nor extra communications. Our result allows us to show *for the first time* that asynchronous SGD is *always faster* than mini-batch SGD. In addition, (iii) we consider the case of heterogeneous functions motivated by federated learning applications and improve the convergence rate by proving a weaker dependence on the maximum delay compared to prior works. | Accept | The paper studies the convergence of asynchronous SGD in the setting of distributed/federated nonconvex smooth optimization. The first contribution of the paper is to tighten the existing convergence guarantees, improving the dependence on the delay parameter from maximum ($\tau_{\max}$) to the geometric mean of maximum and average delay ($\sqrt{\tau_{\max}\tau_{\rm avg}}$). The paper then proceeds to introduce an asynchronous SGD variant with delay-adaptive learning rate that further reduces the dependence to simply $\tau_{\rm avg}$. On a conceptual level, the paper shows that asynchronous SGD is always faster than minibatch SGD. The main techniques based on active set/concurrency seem novel and are presented in a clear way. The paper is well written, has solid contributions, and is a good fit for NeurIPS. | train | [
"vVS1RBEyOsA",
"QEidpue0CK",
"R1kqypbJCpg",
"PYDLzyv1ED5",
"8kvpOsdkI8YC",
"9atMKYUVYQW",
"YhB8v9ZQjr",
"fr6_BjFtyP9"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. My concerns are addressed.",
" We would like to thank all of the reviewers for their very positive evaluation of our paper and their valuable comments that help to improve the paper. We provided responses to each reviewer separately below. ",
" Thank you very much for your positive r... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"PYDLzyv1ED5",
"nips_2022_4_oCZgBIVI",
"fr6_BjFtyP9",
"YhB8v9ZQjr",
"9atMKYUVYQW",
"nips_2022_4_oCZgBIVI",
"nips_2022_4_oCZgBIVI",
"nips_2022_4_oCZgBIVI"
] |
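The record above introduces a delay-adaptive learning rate for asynchronous SGD. The sketch below shows the kind of server-side rule this suggests, where stale gradients get a stepsize scaled down with their delay; the exact scaling and threshold here are illustrative choices, not the constants from the paper's analysis.

```python
def delay_adaptive_update(x, grad, delay, base_lr, tau_c):
    """Server-side update with a delay-adaptive stepsize: gradients whose
    staleness exceeds a threshold `tau_c` get a stepsize shrunk in proportion
    to their delay.  Illustrative rule only."""
    lr = base_lr if delay <= tau_c else base_lr * tau_c / delay
    return x - lr * grad
```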
nips_2022_eMW9AkXaREI | Vision Transformers provably learn spatial structure | Vision Transformers (ViTs) have recently achieved comparable or superior performance to Convolutional neural networks (CNNs) in computer vision. This empirical breakthrough is even more remarkable since ViTs discards spatial information by mixing patch embeddings and positional encodings and do not embed any visual inductive bias (e.g.\ spatial locality). Yet, recent work showed that while minimizing their training loss, ViTs specifically learn spatially delocalized patterns. This raises a central question: how do ViTs learn this pattern by solely minimizing their training loss using gradient-based methods from \emph{random initialization}? We propose a structured classification dataset and a simplified ViT model to provide preliminary theoretical justification of this phenomenon. Our model relies on a simplified attention mechanism --the positional attention mechanism-- where the attention matrix solely depends on the positional encodings. While the problem admits multiple solutions that generalize, we show that our model implicitly learns the spatial structure of the dataset while generalizing.
We finally prove that learning the structure helps to sample-efficiently transfer to downstream datasets that share the same structure as the pre-training one but with different features. We empirically verify that ViTs using only the positional attention mechanism perform similarly to the original one on CIFAR-10/100, SVHN and ImageNet. | Accept | This paper provides a theoretical analysis of the empirical finding that Vision Transformers learn position embeddings that recapitulate the spatial structure of the training data, even though this spatial structure is no longer explicitly represented after the image is split into patches. The reviewers are generally satisfied by the soundness of the theory, but there is some disagreement regarding the significance of the contribution. The AC believes this paper asks an interesting theoretical question, even if (as is often true) it can only be answered in a simplified setting, and the answer is nontrivial. The AC thus recommends acceptance. | train | [
"ieyB3-QwqaN",
"rj1V37s0tCj",
"RrdjLZg1rt-",
"qUI-dZmC_oe",
"O9CMiG-u_Zl",
"64rKFUpb1b1",
"B_lPl9-hmhY",
"M4zT-6TMqi",
"sYm6uDkgi0q",
"zQDhA-qWVoS"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for addressing my concerns. I appreciate the changes you have proposed. \n\nAfter consideration, as an editorial matter, I think couching your results in terms convolution is really doing you a lot more harm than good. However, the ability to learn the ground-truth associativity in the positional embedd... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
3
] | [
"RrdjLZg1rt-",
"zQDhA-qWVoS",
"sYm6uDkgi0q",
"M4zT-6TMqi",
"B_lPl9-hmhY",
"nips_2022_eMW9AkXaREI",
"nips_2022_eMW9AkXaREI",
"nips_2022_eMW9AkXaREI",
"nips_2022_eMW9AkXaREI",
"nips_2022_eMW9AkXaREI"
] |
nips_2022_5j6fWcPccO | Using Mixup as a Regularizer Can Surprisingly Improve Accuracy & Out-of-Distribution Robustness | We show that the effectiveness of the well celebrated Mixup can be further improved if instead of using it as the sole learning objective, it is utilized as an additional regularizer to the standard cross-entropy loss. This simple change not only improves accuracy but also significantly improves the quality of the predictive uncertainty estimation of Mixup in most cases under various forms of covariate shifts and out-of-distribution detection experiments. In fact, we observe that Mixup otherwise yields much degraded performance on detecting out-of-distribution samples possibly, as we show empirically, due to its tendency to learn models exhibiting high-entropy throughout; making it difficult to differentiate in-distribution samples from out-of-distribution ones.
To show the efficacy of our approach (RegMixup), we provide thorough analyses and experiments on vision datasets (ImageNet & CIFAR-10/100) and compare it with a suite of recent approaches for reliable uncertainty estimation. | Accept | The paper presents a surprisingly simple modification to mixup regularization, which results in a consistent improvement over standard mixup both in the original dataset and in the out-of-distribution setting. All of the reviewers agree that the paper adds to the literature despite its simplicity, and accordingly, I recommend acceptance. | train | [
"l-vn-HINckT",
"kAGe5PsX4nx",
"4U3NZrjKoCL",
"cvpadPVWsJ6",
"jJzb_mmOCPm",
"kwfZCphQc-Jd",
"WeMtHtp_e_s",
"IJD4q8m0p01",
"HVLSvwtgsR-v",
"VRnHVgbOjJP",
"50vQu6tI6sjB",
"4TDZ6VdyOEd",
"_UNzT7Oosa",
"yfiatK9kBdF",
"iKK74ymu35e",
"cZDhlYFfxHb",
"T2epISSdoMm",
"uRqVr-3DiOm",
"Q79ErFc... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" Dear Authors,\n\nThanks for your kind words. It's glad to see my suggestions are somehow valuable for improving your manuscript. I also appreciate your efforts in this discussion period. Your rebuttal/response is very well prepared. \n\nBest luck with your submission!",
" Dear Reviewer,\n\nWe would like to than... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"kAGe5PsX4nx",
"oUm-S5Pw06y",
"cvpadPVWsJ6",
"jJzb_mmOCPm",
"50vQu6tI6sjB",
"WeMtHtp_e_s",
"cZDhlYFfxHb",
"Q79ErFcXrFw",
"VRnHVgbOjJP",
"iKK74ymu35e",
"4TDZ6VdyOEd",
"_UNzT7Oosa",
"yfiatK9kBdF",
"oUm-S5Pw06y",
"m7f_Fjti0QY",
"Q79ErFcXrFw",
"nips_2022_5j6fWcPccO",
"nips_2022_5j6fWcP... |
nips_2022_0fKlU1OlANc | Unsupervised Adaptation from Repeated Traversals for Autonomous Driving | For a self-driving car to operate reliably, its perceptual system must generalize to the end-user's environment --- ideally without additional annotation efforts. One potential solution is to leverage unlabeled data (e.g., unlabeled LiDAR point clouds) collected from the end-users' environments (i.e. target domain) to adapt the system to the difference between training and testing environments. While extensive research has been done on such an unsupervised domain adaptation problem, one fundamental problem lingers: there is no reliable signal in the target domain to supervise the adaptation process. To overcome this issue we observe that it is easy to collect unsupervised data from multiple traversals of repeated routes. While different from conventional unsupervised domain adaptation, this assumption is extremely realistic since many drivers share the same roads. We show that this simple additional assumption is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain. Concretely, we generate pseudo-labels with the out-of-domain detector but reduce false positives by removing detections of supposedly mobile objects that are persistent across traversals. Further, we reduce false negatives by encouraging predictions in regions that are not persistent. We experiment with our approach on two large-scale driving datasets and show remarkable improvement in 3D object detection of cars, pedestrians, and cyclists, bringing us a step closer to generalizable autonomous driving. | Accept | This paper proposes to leverage unlabelled LIDAR scans from vehicles repeatedly traversing the same environment (e.g., Lyft data in the US), for the domain adaptation of car, bicycle and pedestrian classifiers trained in different domains (e.g., KITTI data in Germany). It uses pretrained Point R-CNN classifiers and bounding box detectors followed by Persistent Point PP-scores and statistical methods for false positive and false negative removal. Various ablations demonstrate the superiority of this self-training method.
Reviewers had very split scores, going from 8 (UWLf) down to 3 (mdTS). After rebuttals, that reviewer's estimated score is 5 (not updated in the review, but the reviewer said they were "favourable"), meaning an average score of 5.75.
Reviewers praised the idea (UWLf, YZhW, mdTS), the clarity of the paper (UWLf, FkR5, mdTS), the thorough evaluation (UWLf, mdTS), the promising generalisation results on the Lyft and Ithaca-365 datasets (UWLf, YZhW, FkR5) and the potential uses of unsupervised data labeling (UWLf).
Reviewer (UWLf, FkR5) noted that other LIDAR point classifiers other than Point R-CNN or single-stage detectors could have been evaluated: these points were addressed during rebuttals with new experiments). Reviewer (YZhW) noted poor performance in one specific domain and the presence of many hyperparameters, but did not respond to the authors’ rebuttal. Reviewer (FkR5) was concerned about applicability to static objects, limited novelty of the filtering methods, and the fact that comparison was with only two other methods; the authors provided a rebuttal to most of these points. Reviewer mdTS had a large number of specific questions regarding the evaluation, starting with the hyperparameter choice; again, the reviewer did not answer to the authors’ rebuttal.
As AC, and based on scores 8, 5, 5, and 3->5?, I would recommend this paper for acceptance.
Sincerely,
Area Chair | train | [
"BaJPsOdJsb",
"r8zPcgwsAUP",
"t0MY78fTmGD",
"k2ufPjK3v6v",
"y-CddW98pNN",
"CabXq5iwAGZ",
"zAwI0iOzjU9",
"JKWCOVkxtT-Q",
"dEFZ9VhETnf",
"VJkxwaZVbHOO",
"bFVNLUNKnbI",
"TbD5DgpI2aJ",
"ibFNo9-ulaU",
"357ne1C5Mnj",
"rxhW-d2zjyy",
"PA9hSlWKpOW"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your support, as well as the suggestions and helpful comments. We will definitely include these in our final version, with particular care for points 2, 4, and 5. We were a bit confused by what you mean for \"Table 1, comparing against baseline SN in the full range\"? We currently do have that compa... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"y-CddW98pNN",
"rxhW-d2zjyy",
"CabXq5iwAGZ",
"357ne1C5Mnj",
"VJkxwaZVbHOO",
"dEFZ9VhETnf",
"JKWCOVkxtT-Q",
"357ne1C5Mnj",
"PA9hSlWKpOW",
"bFVNLUNKnbI",
"ibFNo9-ulaU",
"rxhW-d2zjyy",
"nips_2022_0fKlU1OlANc",
"nips_2022_0fKlU1OlANc",
"nips_2022_0fKlU1OlANc",
"nips_2022_0fKlU1OlANc"
] |
nips_2022_5haAJAcofjc | General Cutting Planes for Bound-Propagation-Based Neural Network Verification | Bound propagation methods, when combined with branch and bound, are among the most effective methods to formally verify properties of deep neural networks such as correctness, robustness, and safety. However, existing works cannot handle the general form of cutting plane constraints widely accepted in traditional solvers, which are crucial for strengthening verifiers with tightened convex relaxations. In this paper, we generalize the bound propagation procedure to allow the addition of arbitrary cutting plane constraints, including those involving relaxed integer variables that do not appear in existing bound propagation formulations. Our generalized bound propagation method, GCP-CROWN, opens up the opportunity to apply general cutting plane methods for neural network verification while benefiting from the efficiency and GPU acceleration of bound propagation methods. As a case study, we investigate the use of cutting planes generated by off-the-shelf mixed integer programming (MIP) solver. We find that MIP solvers can generate high-quality cutting planes for strengthening bound-propagation-based verifiers using our new formulation. Since the branching-focused bound propagation procedure and the cutting-plane-focused MIP solver can run in parallel utilizing different types of hardware (GPUs and CPUs), their combination can quickly explore a large number of branches with strong cutting planes, leading to strong verification performance. Experiments demonstrate that our method is the first verifier that can completely solve the oval20 benchmark and verify twice as many instances on the oval21 benchmark compared to the best tool in VNN-COMP 2021, and also noticeably outperforms state-of-the-art verifiers on a wide range of benchmarks. GCP-CROWN is part of the $\alpha,\beta$-CROWN verifier, the VNN-COMP 2022 winner. Code is available at http://PaperCode.cc/GCP-CROWN. | Accept | All the reviewers found the work to have promise, but there was concern about the novelty of the work. That said, the experimental results showcased the power of the approach; the authors are advised to put their work in the context of prior work.
Overall, there was a consensus that the paper deserves to be published and hence the recommendation. | train | [
"zFakHw6VroA",
"wedSesCBK0I",
"Cv6aPWCFn6",
"MDXesc6LsGV",
"pvEHHy-mnc",
"PpkH8SwVBW",
"OtnMqr5KJ4",
"TLqRDJUvQnE",
"S7yoIy0BnMJR",
"06P_sAczZsE",
"CZ-UHI5x6Yq",
"rhJ6qlRZe_A"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer QMej,\n\nWe greatly appreciate your encouraging review and thank you again for recognizing the importance of our work! We have answered your questions in detail in our response, and feel free to let us know if you have any additional questions before the discussion period ends. \n\nSince the discuss... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"CZ-UHI5x6Yq",
"rhJ6qlRZe_A",
"06P_sAczZsE",
"rhJ6qlRZe_A",
"06P_sAczZsE",
"OtnMqr5KJ4",
"rhJ6qlRZe_A",
"CZ-UHI5x6Yq",
"06P_sAczZsE",
"nips_2022_5haAJAcofjc",
"nips_2022_5haAJAcofjc",
"nips_2022_5haAJAcofjc"
] |
nips_2022__D4cE66L9x3 | Byzantine Spectral Ranking | We study the problem of rank aggregation where the goal is to obtain a global ranking by aggregating pair-wise comparisons of voters over a set of items. We consider an adversarial setting where the voters are partitioned into two sets. The first set votes in a stochastic manner according to the popular score-based Bradley-Terry-Luce (BTL) model for pairwise comparisons. The second set comprises malicious Byzantine voters trying to deteriorate the ranking. We consider a strongly-adversarial scenario where the Byzantine voters know the BTL scores, the votes of the good voters, the algorithm, and can collude with each other. We first show that the popular spectral ranking based Rank-Centrality algorithm, though optimal for the BTL model, does not perform well even when a small constant fraction of the voters are Byzantine.
We introduce the Byzantine Spectral Ranking Algorithm (and a faster variant of it), which produces a reliable ranking when the number of good voters exceeds the number of Byzantine voters. We show that no algorithm can produce a satisfactory ranking with probability > 1/2 for all BTL weights when there are more Byzantine voters than good voters, showing that our algorithm works for all possible population fractions. We support our theoretical results with experimental results on synthetic and real datasets to demonstrate the failure of the Rank-Centrality algorithm under several adversarial scenarios and how the proposed Byzantine Spectral Ranking algorithm is robust in obtaining good rankings. | Accept | This is a nice paper that deals with byzantine corruption in the BTL model. It first shows that rank centrality performs quite poorly (btw how does the maximum likelihood estimator, which is known to be minimax optimal in the non-adversarial setting, perform in the Byzantine setting?). The paper then presents algorithms for ranking when the majority of workers are non-adversarial (which is unsurprisingly necessary for any algorithm). We all agree that this paper should be accepted, although different reviewers have different levels of excitement about the paper. A persistent concern seems to be absence of proper comparison with Agarwal et al. and weak-ish bounds. Nevertheless, this paper is above bar and I recommend acceptance.
| train | [
"CVu1KN97kb",
"TFZZjLqKSQq",
"GjX2RO40aVx",
"kwUD3p3U5AM",
"p3bK_Qa-kVH",
"3RQTkz-Uqj9",
"tg4IaRU42LI",
"LI15NYqiJeI",
"GGrGrOl6bES",
"2df0fabcDn",
"qnPSPgi2Dq1",
"5LI4xVJ5LFt",
"l8tcgmMg3r"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Author response was comprehensive, I have raised the rating to 7. ",
" We thank the reviewer for responding to our rebuttal and also raising the rating. ",
" We thank the reviewer for taking time out to respond to our rebuttal. We are glad that the queries of the reviewer have been addressed convincingly. We ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"3RQTkz-Uqj9",
"p3bK_Qa-kVH",
"kwUD3p3U5AM",
"tg4IaRU42LI",
"GGrGrOl6bES",
"l8tcgmMg3r",
"5LI4xVJ5LFt",
"qnPSPgi2Dq1",
"2df0fabcDn",
"nips_2022__D4cE66L9x3",
"nips_2022__D4cE66L9x3",
"nips_2022__D4cE66L9x3",
"nips_2022__D4cE66L9x3"
] |
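The record above analyses the failure of plain Rank-Centrality under Byzantine voters. For reference, the sketch below is that non-robust baseline: build a Markov chain from the pairwise comparison probabilities and take its stationary distribution as the scores. The scaling convention and eigenvector computation are one standard presentation, chosen for this sketch.

```python
import numpy as np

def rank_centrality(A, d=None):
    """Plain Rank-Centrality (the non-robust baseline analysed above).
    A[i, j] is the empirical probability that item j beats item i (0 where the
    pair is unobserved); the scores are the stationary distribution of the
    induced Markov chain."""
    n = A.shape[0]
    P = np.array(A, dtype=float) / (d or n)
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    vals, vecs = np.linalg.eig(P.T)               # left eigenvector for eigenvalue 1
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return pi / pi.sum()
```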
nips_2022_zYc5FSxL6ar | Low-Rank Modular Reinforcement Learning via Muscle Synergy | Modular Reinforcement Learning (RL) decentralizes the control of multi-joint robots by learning policies for each actuator. Previous work on modular RL has proven its ability to control morphologically different agents with a shared actuator policy. However, with the increase in the Degree of Freedom (DoF) of robots, training a morphology-generalizable modular controller becomes exponentially difficult. Motivated by the way the human central nervous system controls numerous muscles, we propose a Synergy-Oriented LeARning (SOLAR) framework that exploits the redundant nature of DoF in robot control. Actuators are grouped into synergies by an unsupervised learning method, and a synergy action is learned to control multiple actuators in synchrony. In this way, we achieve a low-rank control at the synergy level. We extensively evaluate our method on a variety of robot morphologies, and the results show its superior efficiency and generalizability, especially on robots with a large DoF like Humanoids++ and UNIMALs. | Accept | This paper propose an approach for learning low-rank synergies for morphology-generalizable robot control.
All the reviewers agree that the paper is interesting and a valuable contribution.
Hence, I recommend acceptance.
Additional comments:
- The related work section covers virtually no past work from robotics that deals with synergies and dimensionality reduction for large action spaces. It would be good to include some of this literature to better place your work.
- The dimensionality of the action space in the environments used is not very high. However, the fact that the values of K (i.e., the number of actuators) are specified in neither the manuscript nor the appendix might create ambiguity. It would be good to make these values more visible. | train | [
"jK8O6KS0_rK",
"kou8EFaW3Us",
"HiRQMWUshJL",
"mTMrE0t0Inb",
"VKG6Pcddi6yh",
"4L2PchJXDT",
"deqURfzJpoM",
"TrdWdFRp8u_",
"aYV2oolsu3R"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors response has addressed my questions and concerns about the work. The additional box pushing task is quite interesting to see. Thus I'd recommend acceptance of the paper.",
" Thanks for the reply and the revision.\nI think the paper is insightful not only for the AI community but also for other scien... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"mTMrE0t0Inb",
"4L2PchJXDT",
"nips_2022_zYc5FSxL6ar",
"aYV2oolsu3R",
"TrdWdFRp8u_",
"deqURfzJpoM",
"nips_2022_zYc5FSxL6ar",
"nips_2022_zYc5FSxL6ar",
"nips_2022_zYc5FSxL6ar"
] |
nips_2022_q-LMlivZrV | CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP | CLIP yielded impressive results on zero-shot transfer learning tasks and is considered as a foundation model like BERT or GPT3. CLIP vision models that have a rich representation are pre-trained using the InfoNCE objective and natural language supervision before they are fine-tuned on particular tasks. Though CLIP excels at zero-shot transfer learning, it suffers from an explaining away problem, that is, it focuses on one or few features, while neglecting other relevant features. This problem is caused by insufficiently extracting the covariance structure in the original multi-modal data. We suggest to use modern Hopfield networks to tackle the problem of explaining away. Their retrieved embeddings have an enriched covariance structure derived from co-occurrences of features in the stored embeddings. However, modern Hopfield networks increase the saturation effect of the InfoNCE objective which hampers learning. We propose to use the InfoLOOB objective to mitigate this saturation effect. We introduce the novel "Contrastive Leave One Out Boost" (CLOOB), which uses modern Hopfield networks for covariance enrichment together with the InfoLOOB objective. In experiments we compare CLOOB to CLIP after pre-training on the Conceptual Captions and the YFCC dataset with respect to their zero-shot transfer learning performance on other datasets. CLOOB consistently outperforms CLIP at zero-shot transfer learning across all considered architectures and datasets. | Accept | The submission proposes a new objective for contrastive training with images+text. The comparison is made with CLIP. The method, CLOOB, improves upon a reimplementation of CLIP. The main concern here is that InfoLOOB and Hopfield nets are known already and whether this submission is thus incremental. Reviewer 3XnZ points out that InfoLOOB as a contrastive objective was previously discussed, for example. Like the reviewer, I am concerned that the comparisons against CLIP are not quite done properly: fewer tasks (this is improved in the rebuttal) and much smaller batches. The authors *claim* that larger batches sizes would benefit CLOOB too, but it's not something really shown in the work.
I believe the rebuttal addresses many of the reviewers' concerns. I can understand the concerns regarding novelty, but I believe the authors when they say that they use Hopfield nets in a way that wasn't used before. Given the promising results, careful ablations/studies, and relatively small (5%) time penalty, I think this could be of general interest, so I vote for this to be accepted.
| train | [
"a5ANCsLz6-e",
"wl9Q5-f7N6m",
"VIl2jLfI6he",
"1WphRcjw4Jc",
"0LnW_tKr6uE",
"2qJutDuA1Uh",
"90Fkt8w0DVm",
"8c29rceknqs"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comments which helped to improve the paper. We addressed all your comments and followed all your suggestions.\n\n**Novelty**\n\nWe disagree with the reviewer concerning the novelty. So far, Hopfield networks have been used as associative memories to store patterns. In contrast, we use modern Ho... | [
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"8c29rceknqs",
"90Fkt8w0DVm",
"2qJutDuA1Uh",
"2qJutDuA1Uh",
"nips_2022_q-LMlivZrV",
"nips_2022_q-LMlivZrV",
"nips_2022_q-LMlivZrV",
"nips_2022_q-LMlivZrV"
] |
nips_2022_5XtsqM57-Zb | Learning on the Edge: Online Learning with Stochastic Feedback Graphs | The framework of feedback graphs is a generalization of sequential decision-making with bandit or full information feedback. In this work, we study an extension where the directed feedback graph is stochastic, following a distribution similar to the classical Erdős-Rényi model. Specifically, in each round every edge in the graph is either realized or not with a distinct probability for each edge. We prove nearly optimal regret bounds of order $\min\bigl\{\min_{\varepsilon} \sqrt{(\alpha_\varepsilon/\varepsilon) T},\, \min_{\varepsilon} (\delta_\varepsilon/\varepsilon)^{1/3} T^{2/3}\bigr\}$ (ignoring logarithmic factors), where $\alpha_{\varepsilon}$ and $\delta_{\varepsilon}$ are graph-theoretic quantities measured on the support of the stochastic feedback graph $\mathcal{G}$ with edge probabilities thresholded at $\varepsilon$. Our result, which holds without any preliminary knowledge about $\mathcal{G}$, requires the learner to observe only the realized out-neighborhood of the chosen action. When the learner is allowed to observe the realization of the entire graph (but only the losses in the out-neighborhood of the chosen action), we derive a more efficient algorithm featuring a dependence on weighted versions of the independence and weak domination numbers that exhibits improved bounds for some special cases. | Accept | The reviewers came to consensus that this paper makes a good progress on online learning with stochastic feedback graphs. I agree with these opinions and please polish the manuscript by addressing the raised minor concerns such as the presentation issues in the final version. | val | [
"2CM0FTFOEjO",
"JSoUV5m7xOG",
"QMIWxIVLXkY",
"HXEolnsxjYG",
"dY5SS4ODIT3",
"ZNZRrjbu2N8",
"2EKQo9FL9hw",
"P2aZbmgEQHu",
"B5fkzkvRBBj",
"oLnVmI9vp2r",
"0tcX_NMq_Ps",
"RATlcT47NbF"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the careful response. \nAfter reading authors' response and comments of other reviewers, I tend to keep my original score.",
" Thank you for your response, it is much appreciated. We are pleased that you are satisfied with most of the responses; if you have other specific concerns or questions we wil... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
4,
3
] | [
"P2aZbmgEQHu",
"QMIWxIVLXkY",
"HXEolnsxjYG",
"dY5SS4ODIT3",
"0tcX_NMq_Ps",
"oLnVmI9vp2r",
"RATlcT47NbF",
"B5fkzkvRBBj",
"nips_2022_5XtsqM57-Zb",
"nips_2022_5XtsqM57-Zb",
"nips_2022_5XtsqM57-Zb",
"nips_2022_5XtsqM57-Zb"
] |
nips_2022_l2CVt1ySC2Q | On Measuring Excess Capacity in Neural Networks | We study the excess capacity of deep networks in the context of supervised classification. That is, given a capacity measure of the underlying hypothesis class - in our case, empirical Rademacher complexity - to what extent can we (a priori) constrain this class while retaining an empirical error on a par with the unconstrained regime? To assess excess capacity in modern architectures (such as residual networks), we extend and unify prior Rademacher complexity bounds to accommodate function composition and addition, as well as the structure of convolutions. The capacity-driving terms in our bounds are the Lipschitz constants of the layers and a (2,1) group norm distance to the initializations of the convolution weights. Experiments on benchmark datasets of varying task difficulty indicate that (1) there is a substantial amount of excess capacity per task, and (2) capacity can be kept at a surprisingly similar level across tasks. Overall, this suggests a notion of compressibility with respect to weight norms, complementary to classic compression via weight pruning. Source code is available at https://github.com/rkwitt/excess_capacity. | Accept | This paper extends prior Rademacher complexity bounds to neural network architectures that include convolutions and skip connections. The paper is well-organized with a clear summary of the field and related work. The reviewers found that, although the techniques were not novel, the new results enhance the practical relevance of prior bounds and will be of interest to the theory community. A few technical points were raised and addressed during the discussion phase, but the consensus remains that this is a solid paper and should be published, and so I recommend acceptance. | train | [
"zzvaGmM0fdK",
"Zy6iudYpYg-",
"3MtrlT9q3Pt",
"bq2DujuA_Al",
"A1WlhhcEsai",
"HkhnCkzRqTy",
"nxL81xI5H2g",
"IIiJj6SzVD8",
"__2DtET6vn5",
"m-YMigpVC9_",
"FDdpeQpdJ1W",
"mv8SQl7exaJ",
"HqFDb45kDGc",
"eH1erXkjdjA",
"chec8P62Klz",
"DcfmEp59XkZ",
"gvGxiLp2fO2"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for very thoughtful response. We are glad that the issues could be resolved.\nFor a final version, we will update our manuscript to incorporate your comments and extend our preliminary experiments regarding point (8).",
" (7) Yes, this is correct, for the set of filters it is a matrix multiplicatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
1
] | [
"bq2DujuA_Al",
"nxL81xI5H2g",
"HkhnCkzRqTy",
"A1WlhhcEsai",
"gvGxiLp2fO2",
"gvGxiLp2fO2",
"gvGxiLp2fO2",
"DcfmEp59XkZ",
"DcfmEp59XkZ",
"DcfmEp59XkZ",
"DcfmEp59XkZ",
"chec8P62Klz",
"chec8P62Klz",
"chec8P62Klz",
"nips_2022_l2CVt1ySC2Q",
"nips_2022_l2CVt1ySC2Q",
"nips_2022_l2CVt1ySC2Q"
... |
nips_2022_q-snd9xOG3b | Log-Concave and Multivariate Canonical Noise Distributions for Differential Privacy | A canonical noise distribution (CND) is an additive mechanism designed to satisfy $f$-differential privacy ($f$-DP), without any wasted privacy budget. $f$-DP is a hypothesis testing-based formulation of privacy phrased in terms of tradeoff functions, which captures the difficulty of a hypothesis test. In this paper, we consider the existence and construction of both log-concave CNDs and multivariate CNDs. Log-concave distributions are important to ensure that higher outputs of the mechanism correspond to higher input values, whereas multivariate noise distributions are important to ensure that a joint release of multiple outputs has a tight privacy characterization. We show that the existence and construction of CNDs for both types of problems is related to whether the tradeoff function can be decomposed by functional composition (related to group privacy) or mechanism composition. In particular, we show that pure $\epsilon$-DP cannot be decomposed in either way and that there is neither a log-concave CND nor any multivariate CND for $\epsilon$-DP. On the other hand, we show that Gaussian-DP, $(0,\delta)$-DP, and Laplace-DP each have both log-concave and multivariate CNDs. | Accept | This paper provides conditions for the existence of log-concave multivariate distributions to satisfy f-differential privacy constraints. The results of the paper have the potential to be broadly applicable. | val | [
"S5kGrD6s4ta",
"p8XWordH5_u",
"d1LMoXl9JB8",
"PBU3XNInRt9U",
"-rmVjS6w3nN",
"okNGegaaVia",
"ISJiVzVA5Z6",
"yKOthk4FUVU",
"CYQRzA5_pOK"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks very much for your response. I'm keeping my score.",
" Thank you for addressing all my questions, I am keeping my score (voting for acceptance).",
" Reviewer RBrs asks whether there exists a multivariate CND for $(\\epsilon,\\delta)$-DP with respect to $\\ell_2$ sensitivity. We agree that this is an in... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"d1LMoXl9JB8",
"PBU3XNInRt9U",
"CYQRzA5_pOK",
"yKOthk4FUVU",
"ISJiVzVA5Z6",
"nips_2022_q-snd9xOG3b",
"nips_2022_q-snd9xOG3b",
"nips_2022_q-snd9xOG3b",
"nips_2022_q-snd9xOG3b"
] |
nips_2022_yhZLEvmyHYQ | Bayesian Active Learning with Fully Bayesian Gaussian Processes | The bias-variance trade-off is a well-known problem in machine learning that only gets more pronounced the less available data there is. In active learning, where labeled data is scarce or difficult to obtain, neglecting this trade-off can cause inefficient and non-optimal querying, leading to unnecessary data labeling. In this paper, we focus on active learning with Gaussian Processes (GPs). For the GP, the bias-variance trade-off is made by optimization of the two hyperparameters: the length scale and noise-term. Considering that the optimal mode of the joint posterior of the hyperparameters is equivalent to the optimal bias-variance trade-off, we approximate this joint posterior and utilize it to design two new acquisition functions. The first one is a Bayesian variant of Query-by-Committee (B-QBC), and the second is an extension that explicitly minimizes the predictive variance through a Query by Mixture of Gaussian Processes (QB-MGP) formulation. Across six simulators, we empirically show that B-QBC, on average, achieves the best marginal likelihood, whereas QB-MGP achieves the best predictive performance. We show that incorporating the bias-variance trade-off in the acquisition functions mitigates unnecessary and expensive data labeling. | Accept | This paper took a “fully Bayesian” perspective on Gaussian process (FBGP) regression tasks. The key technical contribution builds on the argument that the optimal mode of the hyperparameter posterior (i.e. over the length-scale and the noise parameters) corresponds to the optimal bias-variance trade-off. One would expect that a fully Bayesian approach (with a reasonable prior) would indeed be beneficial; it is interesting and useful to understand how such methods perform in practice in terms of robustness and the (extra) computational burden. All reviewers agree that this work provided sufficient empirical support that demonstrates the potential advantages of using FBGP for active learning in regression tasks. During the rebuttal/discussion phase, the authors provided extra empirical evidence which makes the results more convincing (e.g., discussion and empirical analysis on the computational complexity and the factors that affect such). There were no other critical concerns in the reviews.
There are valuable suggestions in the reviews, including improving the clarity when analyzing the results against baselines, providing details of the experimental setting and results, and an in-depth discussion of the computational cost (which appears to be a key message to convey). The authors are strongly encouraged to address the concerns raised in the reviews when preparing a revision of this paper.
| train | [
"V4y8Vkh_ORo",
"YPmqyOPEDfL",
"NXQmfFWaC0s",
"9JZeKy2yNQy",
"d5BiJQCsKI",
"FEHv3eayh",
"-GsYk7QWZc6",
"w-DhAIPTe5n",
"jIBkjt8JyTe"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response to the review comments.\n\n1. The rebuttal to the first question is reasonable, and I understand the authors' point. It would be great if the authors could include the relevant discussion in the manuscript, either in the discussion/conclusion sections or as a potential limitation.\n\n2.... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"d5BiJQCsKI",
"NXQmfFWaC0s",
"jIBkjt8JyTe",
"jIBkjt8JyTe",
"w-DhAIPTe5n",
"-GsYk7QWZc6",
"nips_2022_yhZLEvmyHYQ",
"nips_2022_yhZLEvmyHYQ",
"nips_2022_yhZLEvmyHYQ"
] |
nips_2022_ckQvYXizgd1 | The computational and learning benefits of Daleian neural networks | Dale's principle implies that biological neural networks are composed of neurons that are either excitatory or inhibitory. While the number of possible architectures of such Daleian networks is exponentially smaller than the number of non-Daleian ones, the computational and functional implications of using Daleian networks by the brain are mostly unknown. Here, we use models of recurrent spiking neural networks and rate-based ones to show, surprisingly, that despite the structural limitations on Daleian networks, they can approximate the computation performed by non-Daleian networks to a very high degree of accuracy. Moreover, we find that Daleian networks are more functionally robust to synaptic noise. We then show that unlike non-Daleian networks, Daleian ones can learn efficiently by tuning of single neuron features, nearly as well as learning by tuning individual synaptic weights. Importantly, this suggests a simpler and more biologically plausible learning mechanism. We therefore suggest that in addition to architectural simplicity, Dale's principle confers computational and learning benefits for biological networks, and offer new directions for constructing and training biologically-inspired artificial neural networks. | Accept | This paper examines the impact of Dale's principle from neuroscience on neural network computation. Dale's principle says that neurons only release a single neurotransmitter type from their axons, which, in principle, means neurons are either excitatory or inhibitory, but not both. Using both spiking and rate-based recurrent networks, the authors show that networks that respect Dale's principle can recapitulate the same computations as those that do not while exhibiting greater robustness to noise. This provides some account for why Dale's principle may provide actual benefits to neural computation.
The reviewers agreed that this paper is well-written and addresses an important question. The decision to accept was unanimous. | test | [
"0lKUBybAmeL",
"W52keo0px7",
"m5w0GqYygL",
"CeD1BM9KTmV",
"HY8W7u3Knu",
"AQEXqSVHEzK",
"mc1e9KEwlHk",
"TSWVTA_PRTS",
"ZSu6No9-1sg",
"08oEv-O5sV"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > The one point that I think was missed is that, even if the Hessians are evaluated exactly, any weights that are at the constraint boundary (weights = 0) should not be perturbed even infinitesimally in the direction of negative weight. Have the authors thought of this possibility?\n\nIndeed, for the case of weig... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"m5w0GqYygL",
"CeD1BM9KTmV",
"mc1e9KEwlHk",
"HY8W7u3Knu",
"08oEv-O5sV",
"ZSu6No9-1sg",
"TSWVTA_PRTS",
"nips_2022_ckQvYXizgd1",
"nips_2022_ckQvYXizgd1",
"nips_2022_ckQvYXizgd1"
] |
nips_2022_dAZdQM32IoK | Robust Streaming PCA | We consider streaming principal component analysis when the stochastic data-generating model is subject to perturbations. While existing models assume a fixed covariance, we adopt a robust perspective where the covariance matrix belongs to a temporal uncertainty set. Under this setting, we provide fundamental limits on any algorithm recovering principal components. We analyze the convergence of the noisy power method and Oja’s algorithm, both studied for the stationary data generating model, and argue that the noisy power method is rate-optimal in our setting. Finally, we demonstrate the validity of our analysis through numerical experiments. | Accept | The authors consider the streaming principal component analysis problem where the data generation process may be subject to perturbations. The paper provides fundamental limits and analyze the convergence of the noisy power method and Oja's method in this setting. This shows that the former is rate-optimal. The reviewers found the paper to be a solid contribution. Further, the paper has undergone some substantial improvements during the discussion process, which I commend the authors on. I recommend acceptance while strongly encouraging the authors to take any remaining reviewer comments into account while crafting the next version of the manuscript. | train | [
"HlNiyRaQ9Hc",
"Z4guacDJN-c",
"ew7-FgfsVyA",
"KBvfP6udL0D",
"Md7PHiSzIwk",
"t2SJGu3L9_",
"RrDhSb-AE9Q",
"K37RaE-1MrE",
"wDA5jkDGNSw",
"mj-SPwnU8PC",
"NDM7w3Qi1h2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you to the authors for responding in a detailed manner. I am convinced by the answers and recommend publication of the paper. ",
" I thank the authors for the reply. I also thank them for modifying the paper and for providing new experiments in the appendix of the paper.",
" We thank the reviewer for th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
3
] | [
"t2SJGu3L9_",
"Md7PHiSzIwk",
"NDM7w3Qi1h2",
"mj-SPwnU8PC",
"wDA5jkDGNSw",
"K37RaE-1MrE",
"nips_2022_dAZdQM32IoK",
"nips_2022_dAZdQM32IoK",
"nips_2022_dAZdQM32IoK",
"nips_2022_dAZdQM32IoK",
"nips_2022_dAZdQM32IoK"
] |
nips_2022_xT5rDp5VqKO | Coincidence Detection Is All You Need | This paper demonstrates that the performance of coincidence detection - a classic neuromorphic signal processing method found in Rosenblatt's perceptrons with distributed transmission times, can be competitive to a state-of-the-art deep learning method for pattern recognition. Hence, we cannot remain comfortably numb to the prevailing dogma that efficient matrix-vector operations is all we need; but should enquire with greater vigour if more advanced continual learning methods (running on spiking-neural network hardware with neuromodulatory mechanisms at multiple timescales) can beat the accuracy of task-specific deep learning methods. | Reject | The authors use a simple coincidence detection algorithm on pathogenic bacteria data set and increase performance by 0.5% with respect to a ResNet-26 that was published previously.
The manuscript is quite thought-provoking, showing that a simple algorithm can potentially outperform a complex deep neural network. However, all reviewers agreed that the study should be extended in order to be publishable at NeurIPS. In particular, they raised the following points.
- Relevance with respect to existing methods is not discussed. Related to that, the model seems similar to a perceptron with some normalization as preprocessing. This relation should be discussed.
- The method is tested only on a single data set. More evaluation would be needed in order to assess the generality of the method. Also, a clean statistical analysis is missing.
- An analysis of why the method performs well on this (and potentially also other) data set is missing. | train | [
"y3U60EMSkA",
"PcZx9HCktbg",
"m4kL4PS9lVo",
"8gbHis633t",
"dwr6IfFSqY",
"sOD9DZfhVDL",
"JehvBlQhXrs",
"s3SI2thrQua",
"OcQnGVWhgxR",
"bSKv2UBbfnG",
"nTJUW520xZG",
"qvVGGme721n",
"Ww16UitdxKm",
"v_VbuPkvHb",
"FEi2c9H8kHX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Yes, we need more datasets for future work. But it is not so wise to demand that for \"first\" empirical demonstrations, if you want to accelerate impact coming from outside the resource-rich industry.",
" Adding stats would ideally only strengthen the presented results and verify against strong variation in ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1,
2
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"PcZx9HCktbg",
"bSKv2UBbfnG",
"8gbHis633t",
"JehvBlQhXrs",
"sOD9DZfhVDL",
"qvVGGme721n",
"s3SI2thrQua",
"nTJUW520xZG",
"nips_2022_xT5rDp5VqKO",
"FEi2c9H8kHX",
"v_VbuPkvHb",
"Ww16UitdxKm",
"nips_2022_xT5rDp5VqKO",
"nips_2022_xT5rDp5VqKO",
"nips_2022_xT5rDp5VqKO"
] |
nips_2022_AVh_HTC76u | A Reparametrization-Invariant Sharpness Measure Based on Information Geometry | It has been observed that the generalization performance of neural networks correlates with the sharpness of their loss landscape. Dinh et al. (2017) have observed that existing formulations of sharpness measures fail to be invariant with respect to scaling and reparametrization. While some scale-invariant measures have recently been proposed, reparametrization-invariant measures are still lacking. Moreover, they often do not provide any theoretical insights into generalization performance nor lead to practical use to improve the performance. Based on an information geometric analysis of the neural network parameter space, in this paper we propose a reparametrization-invariant sharpness measure that captures the change in loss with respect to changes in the probability distribution modeled by neural networks, rather than with respect to changes in the parameter values. We reveal some theoretical connections of our measure to generalization performance. In particular, experiments confirm that using our measure as a regularizer in neural network training significantly improves performance. | Accept | The expert reviewers appreciated the contributions and ideas in this paper and also liked their rebuttals.
It is also good to see new ideas being introduced to the important topic of flatness measures for deep learning.
The presentation also enhances the paper's appeal.
On the other hand, it is worth noting that some of the claims are somewhat excessive. The reviewers have pointed out specific modifications; hence, I recommend accepting this paper on the condition that these points are addressed. | train | [
"lyJQGqSd7oR",
"wSQkj0msNLq",
"ZeVDYNuzQ6j",
"s5kuqIDXo9g",
"cJTNOHa58u4",
"qonFa49eEWf",
"wn6KAy-omil",
"flaHYPSGmvG",
"J93dMqSCRD",
"lTCAmhzcWr",
"BEjtbKhkNBD",
"5OnWgCF2-NX",
"s-SwxX8oXjC",
"nD4mXH2mCzC",
"AxEsyuRevju",
"uFsuizr8qQp",
"-1wnbegBSCT"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the reviewer for asking interesting questions which have helped to improve our manuscript.\n\nWe also appreciate the reviewer for the kind replies and for raising the score.",
" 1 – The insights obtained from the eigenspectra of the FIM are indeed similar to those from the Hessian in the sense tha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"ZeVDYNuzQ6j",
"qonFa49eEWf",
"flaHYPSGmvG",
"cJTNOHa58u4",
"lTCAmhzcWr",
"J93dMqSCRD",
"-1wnbegBSCT",
"-1wnbegBSCT",
"uFsuizr8qQp",
"uFsuizr8qQp",
"AxEsyuRevju",
"AxEsyuRevju",
"nD4mXH2mCzC",
"nips_2022_AVh_HTC76u",
"nips_2022_AVh_HTC76u",
"nips_2022_AVh_HTC76u",
"nips_2022_AVh_HTC7... |
nips_2022_ocViyp73pFO | Disentangling Causal Effects from Sets of Interventions in the Presence of Unobserved Confounders | The ability to answer causal questions is crucial in many domains, as causal inference allows one to understand the impact of interventions. In many applications, only a single intervention is possible at a given time. However, in some important areas, multiple interventions are concurrently applied. Disentangling the effects of single interventions from jointly applied interventions is a challenging task---especially as simultaneously applied interventions can interact. This problem is made harder still by unobserved confounders, which influence both treatments and outcome. We address this challenge by aiming to learn the effect of a single-intervention from both observational data and sets of interventions. We prove that this is not generally possible, but provide identification proofs demonstrating that it can be achieved under non-linear continuous structural causal models with additive, multivariate Gaussian noise---even when unobserved confounders are present. Importantly, we show how to incorporate observed covariates and learn heterogeneous treatment effects. Based on the identifiability proofs, we provide an algorithm that learns the causal model parameters by pooling data from different regimes and jointly maximising the combined likelihood. The effectiveness of our method is empirically demonstrated on both synthetic and real-world data. | Accept | The authors describe a method to estimate the effect of an intervention on a single variable X_j in a setting where data from interventions on multiple variables is available. This data is combined with observational data to arrive at an identification formula. The procedure relies heavily on (a) assuming that the noise / confounding is additive (b) that there is no causal relationship between the covariates and to lesser extent on (c) a Gaussianity assumption. The identification strategy is based on a neat idea, and it is refreshing to read about novel identification results that make use of both observational and interventional data. However, it is unclear how robust the procedure is under small violations of the assumptions. Furthermore, in many applications, some of the covariates might be categorical. This is problematic under the additive confounding assumption. To summarize, this paper presents an interesting idea that allows combining evidence from observational and interventional data. In its current form, the setting is likely too artificial to be directly relevant for practice. | train | [
"xZ-ObUUjSi",
"lHkE-ux1VgY",
"wZJ4qmkmtb7",
"EYVvKE0_-tx",
"jEOLBfRn22J",
"hWj6OaCivHC",
"QEdFCDrtjze",
"1YiS1preSC",
"jpBaWWXh7q",
"tvmP9YVjdf0",
"bK8m4xKjs-",
"-Hibm3PqRP",
"uLBEJGTncNI",
"DTEs27u4ay",
"GXFXzLeHmNh"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Not exactly.\n\nThe estimation of $\\Sigma$ comprises the E-step: we use the training sample to estimate the distribution for the latent variables, which we can then use to compute the expected likelihood.\nBecause of the additive noise assumption, we can recover estimates for the noise terms themselves (i.e. $\\... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
4
] | [
"lHkE-ux1VgY",
"jEOLBfRn22J",
"bK8m4xKjs-",
"nips_2022_ocViyp73pFO",
"hWj6OaCivHC",
"tvmP9YVjdf0",
"1YiS1preSC",
"GXFXzLeHmNh",
"DTEs27u4ay",
"uLBEJGTncNI",
"-Hibm3PqRP",
"nips_2022_ocViyp73pFO",
"nips_2022_ocViyp73pFO",
"nips_2022_ocViyp73pFO",
"nips_2022_ocViyp73pFO"
] |
nips_2022_zt4xNo0lF8W | Mask Matching Transformer for Few-Shot Segmentation | In this paper, we aim to tackle the challenging few-shot segmentation task from a new perspective. Typical methods follow the paradigm to firstly learn prototypical features from support images and then match query features in pixel-level to obtain segmentation results. However, to obtain satisfactory segments, such a paradigm needs to couple the learning of the matching operations with heavy segmentation modules, limiting the flexibility of design and increasing the learning complexity. To alleviate this issue, we propose Mask Matching Transformer (MM-Former), a new paradigm for the few-shot segmentation task. Specifically, MM-Former first uses a class-agnostic segmenter to decompose the query image into multiple segment proposals. Then, a simple matching mechanism is applied to merge the related segment proposals into the final mask guided by the support images. The advantages of our MM-Former are two-fold. First, the MM-Former follows the paradigm of \textit{decompose first and then blend}, allowing our method to benefit from the advanced potential objects segmenter to produce high-quality mask proposals for query images. Second, the mission of prototypical features is relaxed to learn coefficients to fuse correct ones within a proposal pool, making the MM-Former be well generalized to complex scenarios or cases. We conduct extensive experiments on the popular COCO-$20^i$ and Pascal-$5^i$ benchmarks. Competitive results well demonstrate the effectiveness and the generalization ability of our MM-Former. Code is available at https://github.com/Picsart-AI-Research/Mask-Matching-Transformer. | Accept | This paper introduced a new matching mechanism in mask-level instead of pixel-level matching in previous few-shot segmentation methods. Additionally, they propose a feature-alignment block based on the attention mechanism to align both support and query features individually and cross-align between them. Reviewers in general agree on the novelty of the proposed approach. Some reviewers were concerned about some details of module design, and the authors have answered those satisfactorily. Finally, AC believes that this approach of producing mask proposals and matching at the mask level is a novel direction for few-shot segmentation, and recommends acceptance of the paper. | train | [
"t2XFFBBUrbZ",
"3VpctsEFP2",
"RDtophvLppy",
"whTzYO6tkuZ",
"cOZhp1JqrLY",
"Nva3NvLMa7w",
"zIagAP2kKdL",
"Gqe1kRnCmQ1",
"8qs_0BO1W6p",
"kQawFL_31zU",
"charCZMzBER",
"N-G1GCteqcM",
"Ot7c6apDE8",
"Ae6QIcIO1K7",
"_0fbLaCXFbD"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers, \n\nWe thank all of you for taking your valuable time to provide insightful comments, which significantly strengthen our paper. We have carefully responded to your questions accordingly with the necessary additional experiments and analyses. We hope our responses have addressed all your concerns.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"nips_2022_zt4xNo0lF8W",
"RDtophvLppy",
"Ae6QIcIO1K7",
"cOZhp1JqrLY",
"Gqe1kRnCmQ1",
"Ae6QIcIO1K7",
"Ae6QIcIO1K7",
"_0fbLaCXFbD",
"Ot7c6apDE8",
"N-G1GCteqcM",
"nips_2022_zt4xNo0lF8W",
"nips_2022_zt4xNo0lF8W",
"nips_2022_zt4xNo0lF8W",
"nips_2022_zt4xNo0lF8W",
"nips_2022_zt4xNo0lF8W"
] |
nips_2022_G8BExMno316 | Adjoint-aided inference of Gaussian process driven differential equations | Linear systems occur throughout engineering and the sciences, most notably as differential equations. In many cases the forcing function for the system is unknown, and interest lies in using noisy observations of the system to infer the forcing, as well as other unknown parameters. In differential equations, the forcing function is an unknown function of the independent variables (typically time and space), and can be modelled as a Gaussian process (GP). In this paper we show how the adjoint of a linear system can be used to efficiently infer forcing functions modelled as GPs, after using a truncated basis expansion of the GP kernel. We show how exact conjugate Bayesian inference for the truncated GP can be achieved, in many cases with substantially lower computation than would be required using MCMC methods. We demonstrate the approach on systems of both ordinary and partial differential equations, and show that the basis expansion approach approximates well the true forcing with a modest number of basis vectors. Finally, we show how to infer point estimates for the non-linear model parameters, such as the kernel length-scales, using Bayesian optimisation. | Accept | The paper looks at a method for inference in Latent Force Models that uses the adjoint method to help out with the inference. Three of the reviewers describe the work as easy to read, sound, relevant and useful.
Some of the reviewers lament the lack of more experiments - I would have loved to see the results on the spatial experiment described in the discussion. The authors did add one experiment as part of the rebuttal - I urge them to add this to the manuscript.
One reviewer complained about the 'mathiness' of the presentation. They gave an overall score that I don't feel reflects the overall quality of the paper, and I'm inclined to discard it. Yet, I urge the authors to consider whether all of the terms used are maximising the accessibility and therefore impact of the paper. If Banach spaces are essential to the work, please explain why. If they are a small technical necessity, but unimportant to understand the main ideas, I suggest you relegate them to a formal proof in the appendix. | val | [
"1n1AZ8inKWAg",
"eerxdILc2aZ",
"uyqZ5WCGkF",
"jeMqVCxNCa",
"xiP1XfYDj0",
"oMu997uyPMi",
"f34yAcSy8Vf",
"wNjGPqdti-mx",
"JGkQmwdtqdV",
"89DaOHC-ZN0",
"QwA_7hvLvsG",
"v0MV43lR-l5",
"mkhATMoMsEN",
"JAq7g865oH4",
"oDO0j6-Igk",
"QgQgF6qAehb",
"axp4izgBIf",
"L8Kz4w7AqV",
"usTgsRpmbb"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for clarifying, and thank you for noting that the ideas in the paper 'are good and certainly worthy of NeurIPS'. We think NeurIPS is the ideal venue for interesting new ideas and we hope the area chairs will take this into consideration.\n\nWe don't think your comment about there being a 'strong focus on B... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"uyqZ5WCGkF",
"oMu997uyPMi",
"jeMqVCxNCa",
"xiP1XfYDj0",
"wNjGPqdti-mx",
"JAq7g865oH4",
"v0MV43lR-l5",
"QwA_7hvLvsG",
"89DaOHC-ZN0",
"QwA_7hvLvsG",
"usTgsRpmbb",
"mkhATMoMsEN",
"L8Kz4w7AqV",
"axp4izgBIf",
"QgQgF6qAehb",
"nips_2022_G8BExMno316",
"nips_2022_G8BExMno316",
"nips_2022_G... |
nips_2022_q4IG88RJiMv | Estimating the Arc Length of the Optimal ROC Curve and Lower Bounding the Maximal AUC | In this paper, we show the arc length of the optimal ROC curve is an $f$-divergence. By leveraging this result, we express the arc length using a variational objective and estimate it accurately using positive and negative samples. We show this estimator has a non-parametric convergence rate $O_p(n^{-\beta/4})$ ($\beta \in (0,1]$ depends on the smoothness). Using the same technique, we show the surface area sandwiched between the optimal ROC curve and the diagonal can be expressed via a similar variational objective. These new insights lead to a novel two-step classification procedure that maximizes an approximate lower bound of the maximal AUC. Experiments on CIFAR-10 datasets show the proposed two-step procedure achieves good AUC performance in imbalanced binary classification tasks. | Accept | The paper shows that the arc length of the optimal ROC curve is an $f$-divergence, proposes an estimator for it, and builds on the insights obtained to design a new algorithm for approximately maximizing the area under the ROC curve.
The reviewers generally appreciate the theoretical results / insights. The only concern seems to be about the empirical evaluation of the AUC maximization procedure and about the lack of sufficient comparison to the state-of-the-art AUC maximization methods.
Given that the main contribution is the connection drawn between the arc length of ROC curve and $f$-divergence, a majority of the reviewers are in favor of accepting the paper even if the empirical evaluation is not entirely satisfactory.
**Authors inaccurate about offline AUC maximization taking $O(n^2)$ time**
One of the selling points in the paper is that the new AUC maximization approach achieves a better run-time complexity than more traditional methods for AUC-maximization. Based on the authors' back-and-forth with Reviewer WsBr, it appears that the method of Ying et al. (2016) already achieves an O(n) run-time even for the offline setting.
I would also like to point out to the authors that, dating back to as early as 2005, there have been methods for offline AUC maximization with O(n log(n)) computational cost. For example, with the pairwise SVM loss, Joachims (2005, Lemma 2) shows that the loss/gradient computation only requires $O(n \log(n))$ computation. This computational cost applies to both linear and non-linear models, as I elaborate below.
Suppose there are $n^+$ positive examples and $n^-$ negative examples, and we would like to minimize the pairwise hinge loss for scoring function $f$:
$L(f) = \sum_{i=1}^{n^+} \sum_{j=1}^{n^-} \phi(f(x^+_i) - f(x^-_j) )$
where $\phi(z) = \max(0, 1 - z)$. Computing this loss does not need us to explicitly enumerate $O(n^2)$ pairs. Instead it suffices to sort the positives according to $f(x^+_i)$ and the negatives according to $1 + f(x^-_j)$, and then compute the following cumulative stats by taking a single pass (O(n)) over the sorted examples:
$N_{i}^{+}= \sum^{n^-}_{j=1} \mathbb{I}( 1 + f(x^-_j) \geq f(x^+_i) )$
$L_{i}^{+}= \sum_{j:~ 1 + f(x^-_j) \geq f(x^+_i) } f(x^-_j)$
The pairwise loss can then be computed in O(n) time:
$L(f) = \sum_{i=1}^{n^+} L_{i}^{+} + N_{i}^{+} \cdot (1 - f(x^+_i) )$
A similar procedure can be used to compute gradients for the pairwise loss, and would again require only $O(n\log(n))$ computation (for the sorting step).
*Ref*: Joachims, A Support Vector Method for Multivariate Performance Measures, ICML 2005.
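As an illustration, the sorting trick above is easy to verify in code. The sketch below is a hypothetical NumPy helper (not taken from the paper under review or from Joachims' implementation); it computes the quantities $N_i^+$ and $L_i^+$ described above in $O(n \log n)$ time and checks the result against the naive $O(n^2)$ double sum.

```python
import numpy as np

def pairwise_hinge_loss(scores_pos, scores_neg):
    """Sum of max(0, 1 - (f(x_i^+) - f(x_j^-))) over all positive/negative pairs,
    computed in O(n log n) via sorting and prefix sums instead of an O(n^2) loop."""
    s_pos = np.asarray(scores_pos, dtype=float)
    s_neg_sorted = np.sort(np.asarray(scores_neg, dtype=float))
    shifted = s_neg_sorted + 1.0                        # 1 + f(x_j^-), ascending
    prefix = np.concatenate(([0.0], np.cumsum(s_neg_sorted)))
    # index of the first negative with 1 + f(x_j^-) >= f(x_i^+), for every positive
    idx = np.searchsorted(shifted, s_pos, side="left")
    n_active = len(s_neg_sorted) - idx                  # N_i^+
    tail_sums = prefix[-1] - prefix[idx]                # L_i^+
    return float(np.sum(tail_sums + n_active * (1.0 - s_pos)))

# sanity check against the naive O(n^2) double sum
rng = np.random.default_rng(0)
p, q = rng.normal(size=50), rng.normal(size=70)
naive = sum(max(0.0, 1.0 - (a - b)) for a in p for b in q)
assert np.isclose(pairwise_hinge_loss(p, q), naive)
```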
**Recommended changes/inclusions to camera-ready version**
We are accepting this paper under the expectation that the authors will include a more accurate description of offline AUC maximization methods, and accurately describe the exact computational advantages their method has (if any) over prior AUC maximization methods. If there are none, please don't highlight them in the paper. A review of stochastic AUC maximization methods is also highly desirable. | train | [
"rstHhWT3z4h",
"CCOw9SGkAsE",
"t2rknuc2X4j",
"n0NQmAyhVxN",
"ntt9MDR0pl",
"dcxB_4nf2K",
"GvS6somMTuz",
"HD0fHeBPOjY",
"9FTInvpAJxQ",
"wcC4vfVd3_P",
"M3bb5lDPMBO",
"1BktPaoIWgO",
"LvhJB_bYggt",
"_dVWybjPaNd",
"c2mWu7eURmz",
"6YJSYdbNikH",
"c2BBG4T8FII"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" During the discussion, Reviewer WSBr raised an interesting point that Theorem 1 in Ying et al., 2016 offers a way of rewriting the pairwise AUC loss into a linear form, which has a computational complexity of O(n), instead of O(n^2). \n\nWe would like to comment that this technique only works for linear models ($... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"nips_2022_q4IG88RJiMv",
"t2rknuc2X4j",
"dcxB_4nf2K",
"ntt9MDR0pl",
"1BktPaoIWgO",
"GvS6somMTuz",
"HD0fHeBPOjY",
"LvhJB_bYggt",
"nips_2022_q4IG88RJiMv",
"6YJSYdbNikH",
"c2mWu7eURmz",
"_dVWybjPaNd",
"c2BBG4T8FII",
"nips_2022_q4IG88RJiMv",
"nips_2022_q4IG88RJiMv",
"nips_2022_q4IG88RJiMv"... |
nips_2022_47lpv23LDPr | Unsupervised Learning of Group Invariant and Equivariant Representations | Equivariant neural networks, whose hidden features transform according to representations of a group $G$ acting on the data, exhibit training efficiency and an improved generalisation performance. In this work, we extend group invariant and equivariant representation learning to the field of unsupervised deep learning. We propose a general learning strategy based on an encoder-decoder framework in which the latent representation is separated in an invariant term and an equivariant group action component. The key idea is that the network learns to encode and decode data to and from a group-invariant representation by additionally learning to predict the appropriate group action to align input and output pose to solve the reconstruction task. We derive the necessary conditions on the equivariant encoder, and we present a construction valid for any $G$, both discrete and continuous. We describe explicitly our construction for rotations, translations and permutations. We test the validity and the robustness of our approach in a variety of experiments with diverse data types employing different network architectures. | Accept | The paper proposes an auto-encoder that maps a point to a canonical point and to a group element such that the composition of the group and the canonical point reconstructs the point (the invariance / equivariance is in this sense). The method is promising since it learns the group actions in an unsupervised way. Experiments were on simple tasks and group actions. Reviewers were overall positive and the paper improved a lot during the rebuttal/revision thanks to their recommendations.
I have one recommendation for the authors: use the learned representation in a classification task to see how it alleviates the need for large training samples.
Accept
| test | [
"rHMXfEdHxt6",
"lrMpuRzrZP",
"frspvHIBmL2",
"RMc1R3Uk-A",
"4tTd4DaNvqm",
"Uz-IB04RXoI",
"7XU-A0-bDJ",
"AOGuo00rI5H",
"nmijBZ-0hZ",
"ipu2AjqQHQP",
"My031lmnnQC",
"-YUe6QSI1RG",
"4qamzVm0D2",
"UF9qo5oz5g",
"en-mspcCH-",
"dJBwneNXCf1y",
"ZvcWs6WyhyJ",
"UmF2iXsd5c_",
"KuhdvDPvdIY",
... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We take again the opportunity to thank all the reviewers for the insightful reviews, comments and discussion. \n\nWe have uploaded a new updated version of the paper with an expended related work section. This can be currently found in Appendix G to respect the current page limit, but will be moved in the main se... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"nips_2022_47lpv23LDPr",
"vc_MwH93TNK",
"7XU-A0-bDJ",
"ipu2AjqQHQP",
"nmijBZ-0hZ",
"pZOP7eIYdfJ",
"4qamzVm0D2",
"-YUe6QSI1RG",
"UF9qo5oz5g",
"UF9qo5oz5g",
"vc_MwH93TNK",
"j836-tmDV9F",
"kjG1Rp4uCN",
"kblwrCGgEuh",
"nips_2022_47lpv23LDPr",
"vc_MwH93TNK",
"ZdQeyp-BVm4",
"gZFXNuV-DlI"... |
nips_2022_LffWuGtC9BE | Bounding and Approximating Intersectional Fairness through Marginal Fairness | Discrimination in machine learning often arises along multiple dimensions (a.k.a. protected attributes); it is then desirable to ensure \emph{intersectional fairness}---i.e., that no subgroup is discriminated against. It is known that ensuring \emph{marginal fairness} for every dimension independently is not sufficient in general. Due to the exponential number of subgroups, however, directly measuring intersectional fairness from data is impossible. In this paper, our primary goal is to understand in detail the relationship between marginal and intersectional fairness through statistical analysis. We first identify a set of sufficient conditions under which an exact relationship can be obtained. Then, we prove bounds (easily computable through marginal fairness and other meaningful statistical quantities) in high-probability on intersectional fairness in the general case. Beyond their descriptive value, we show that these theoretical bounds can be leveraged to derive a heuristic improving the approximation and bounds of intersectional fairness by choosing, in a relevant manner, protected attributes for which we describe intersectional subgroups. Finally, we test the performance of our approximations and bounds on real and synthetic data-sets. | Accept | This paper focuses on intersectional fairness in supervised learning, specifically on understanding the relationship between ensuring fairness on one particular sensitive attribute and on the intersection of those attributes. A largely analytical paper, bounds are given relating the latter to exact guarantees on the former. Reviewers took minor issue with the experimental validation and with the applicability of these bounds; I agree with the former, but do not agree with the latter. This work adds a nice building block to the foundations of fair ML research. | train | [
"TLzWZEWGypj",
"yEJx339rwR9",
"ZQL8rxgrkoN",
"w2f_QIJKbe",
"wdOhFIbCeW4",
"RjVDjC4hRix",
"VfezjyGm18U",
"0i5V83fGYvf",
"vXYLraq7mSt"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their responses which address most of my concerns.",
" I wish to thank the authors for their response. Following the response to my questions and clarifications, I tend to keep my original score.",
" Thanks, all (authors and reviewers!) for participating in this process. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"w2f_QIJKbe",
"RjVDjC4hRix",
"nips_2022_LffWuGtC9BE",
"vXYLraq7mSt",
"0i5V83fGYvf",
"VfezjyGm18U",
"nips_2022_LffWuGtC9BE",
"nips_2022_LffWuGtC9BE",
"nips_2022_LffWuGtC9BE"
] |
nips_2022_qHGCH75usg | BYOL-Explore: Exploration by Bootstrapped Prediction | We present BYOL-Explore, a conceptually simple yet general approach for curiosity-driven exploration in visually complex environments. BYOL-Explore learns the world representation, the world dynamics and the exploration policy all-together by optimizing a single prediction loss in the latent space with no additional auxiliary objective. We show that BYOL-Explore is effective in DM-HARD-8, a challenging partially-observable continuous-action hard-exploration benchmark with visually rich 3-D environment. On this benchmark, we solve the majority of the tasks purely through augmenting the extrinsic reward with BYOL-Explore intrinsic reward, whereas prior work could only get off the ground with human demonstrations. As further evidence of the generality of BYOL-Explore, we show that it achieves superhuman performance on the ten hardest exploration games in Atari while having a much simpler design than other competitive agents. | Accept | The reviewers agreed overall that this work is a solid contribution, combining multiple earlier methods/ideas. The experimental validation is good. Together, these merit acceptance. | train | [
"M1gCNhziWCz",
"tVvxxhC6JDf",
"ZdNHtxwUD8X",
"IZuCvqZvr98",
"GPXvW_2gso6",
"bbECwhm2MEw",
"sd3hEu4Wxwq",
"Eu7h2fVKxcG",
"VG-4bjjPG8O",
"9IxCBA2kXGC"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their clarifications, and my apologies to the extent that I missed some of the things that I asked about in the first place. I have switched my rating to an \"Accept\".\n\nConcerning the frame count, I would note that although the RND paper does not put the frame count on the x axis, it do... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"ZdNHtxwUD8X",
"nips_2022_qHGCH75usg",
"9IxCBA2kXGC",
"VG-4bjjPG8O",
"Eu7h2fVKxcG",
"sd3hEu4Wxwq",
"nips_2022_qHGCH75usg",
"nips_2022_qHGCH75usg",
"nips_2022_qHGCH75usg",
"nips_2022_qHGCH75usg"
] |
nips_2022_-kS21GWVJU | Meta-sketch: A Neural Data Structure for Estimating Item Frequencies of Data Streams | To estimate item frequencies of data streams with limited space, sketches are widely used in real applications, including real-time web analytics, network monitoring, and self-driving. Sketches can be viewed as a model which maps the identifier of a stream item to the corresponding frequency domain. Starting from the premise, we envision a neural data structure, which we term the meta-sketch, to go beyond the basic structure of conventional sketches. The meta-sketch learns basic sketching abilities from meta-tasks constituted with synthetic datasets following Zipf distributions in the pre-training phase and can be fast adapted to real (skewed) distributions in the adaption phase. Extensive experiments demonstrate the performance gains of the meta-sketch and offer insights into our proposals.
 | Reject | The paper had mixed reviews in terms of scores, but if we put the strengths and the weaknesses together, the weaknesses appear stronger.
Strengths: Reviewers liked the neural-only method and the good experimental results on two datasets.
Weaknesses: The evaluations may not make sense in data-stream settings; only two datasets are used; there is no theoretical motivation; and the paper does not cite or compare with recent literature on learned/adaptive learned sketches.
The AC went through the paper and found that there are certain major arguments that need to be made before the paper can be accepted.
1. The paper is about learned sketches, which typically operate in a bursty environment with frequent distribution changes. It is no wonder that all the learning-augmented methods still use sketches to provide worst-case guarantees and control over the estimation errors; this is more or less required if any learning-based method claims to replace sketches. As a result, the purely neural approach, which treats this as a learning problem without any theoretical understanding, requires more justification and case studies of real scenarios.
2. The paper does not cite or compare with several recent learned/adaptive learned sketches in the literature (including recent papers in NeurIPS/ICML).
3. Looking at the supplementary material and the meta-task generation, it seems that there is a very strong assumption that the frequency distribution does not change over intervals. For instance, one of the typical use cases of frequency estimation is to recover frequencies over any interval of time (see papers on sketches over time or adaptive sketches, and the dyadic-interval tricks used to extend sketches to do that). A purely learning-based approach is unlikely to achieve much there.
| train | [
"A5mymgyRKTV",
"iUuwm367zBn",
"3cEcdYzYoDR",
"_yjRZRL9hUb",
"L0uN4JbNy6_",
"CEu8a7wAgWL",
"Nq8PlfAtlPP"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all the reviewers for their time and constructive comments. We would like to know whether our responses have addressed your concerns. Please feel free to comment if there are any further confusions.",
" Thanks a lot for your comments! We respond to your comments below.\n>### Q1: Discussion of the archi... | [
-1,
-1,
-1,
-1,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"nips_2022_-kS21GWVJU",
"Nq8PlfAtlPP",
"CEu8a7wAgWL",
"L0uN4JbNy6_",
"nips_2022_-kS21GWVJU",
"nips_2022_-kS21GWVJU",
"nips_2022_-kS21GWVJU"
] |
nips_2022_cmKZD3wdJBT | Lipschitz Bandits with Batched Feedback | In this paper, we study Lipschitz bandit problems with batched feedback, where the expected reward is Lipschitz and the reward observations are communicated to the player in batches. We introduce a novel landscape-aware algorithm, called Batched Lipschitz Narrowing (BLiN), that optimally solves this problem. Specifically, we show that for a $T$-step problem with Lipschitz reward of zooming dimension $d_z$, our algorithm achieves theoretically optimal (up to logarithmic factors) regret rate $\widetilde{\mathcal{O}}\left(T^{\frac{d_z+1}{d_z+2}}\right)$ using only $ \mathcal{O} \left( \log\log T\right) $ batches. We also provide complexity analysis for this problem. Our theoretical lower bound implies that $\Omega(\log\log T)$ batches are necessary for any algorithm to achieve the optimal regret. Thus, BLiN achieves optimal regret rate using minimal communication. | Accept | This is an interesting work that initiates the study of Lipschitz bandits in the case of batched feedback. That it is possible to obtain nearly optimal regret with such little adaptation (only $\log \log T$ batches), as the authors do, is an interesting result; moreover, the regret lower bound based on the number of batches shows that the optimal regret can be achieved only with $\Omega(\log \log T)$ batches, implying the authors’ method A-BLiN has communication complexity no higher than methods obtaining the optimal regret.
All reviewers are positive on this work and the work merits acceptance; that said, as some reviewers suggested, the authors would do well to highlight the novelty of their approach and relate their proof techniques to proof techniques used for similar problems. Also, I noticed that the claims need to be adjusted somewhat. At times the authors claim that their regret rate is optimal or (prior to Definition 1 and in reference to A-BLiN) "without incurring increasing regret"; this is not correct, due to the extra $\log \log T$ factor. The authors should instead say that the regret rate is near-optimal and that the regret does not increase by much (but the rate certainly does worsen, however so slightly). It is important to be clear on optimality vs near-optimality. In any case, congratulations. | train | [
"IobK1L74YLY",
"WgQprVrYHp3",
"HqRfVCRg4wu",
"IZRhizO8EbJ",
"3tI3B3Ni_HE",
"bP690WTPnMV",
"OxoJeo03krk",
"yhy2jTfQXk7",
"elGyofNnM-H",
"FvqN0mGOH3",
"S8QIJWg6Rw8",
"RDQzq37e_Au"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your recommendation. We will add it in our final version.",
" Thank you for your response. We agree with your comment, and will consider it in our further work.",
" Thanks for your response and I have no further questions. I would recommend mentioning the challenge of obtaining the zooming dimen... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"HqRfVCRg4wu",
"IZRhizO8EbJ",
"3tI3B3Ni_HE",
"OxoJeo03krk",
"RDQzq37e_Au",
"S8QIJWg6Rw8",
"FvqN0mGOH3",
"elGyofNnM-H",
"nips_2022_cmKZD3wdJBT",
"nips_2022_cmKZD3wdJBT",
"nips_2022_cmKZD3wdJBT",
"nips_2022_cmKZD3wdJBT"
] |
nips_2022_z9CkpUorPI | NeuroSchedule: A Novel Effective GNN-based Scheduling Method for High-level Synthesis | High-level synthesis (HLS) is widely used for transferring behavior-level specifications into circuit-level implementations. As a critical step in HLS, scheduling arranges the execution order of operations for enhanced performance. However, existing scheduling methods suffer from either exponential runtime or poor quality of solutions.
This paper proposes an efficient and effective GNN-based scheduling method called NeuroSchedule, with both fast runtime and enhanced solution quality. Major features are as follows: (1) The learning problem for HLS scheduling is formulated for the first time, and a new machine learning framework is proposed. (2) Pre-training models are adopted to further enhance the scalability for various scheduling problems with different settings. Experimental results show that NeuroSchedule obtains near-optimal solutions while achieving more than 50,000x improvement in runtime compared with the ILP-based scheduling method. At the same time, NeuroSchedule improves the scheduling results by 6.10% on average compared with state-of-the-art entropy-directed method. To the best of our knowledge, this is the first GNN-based scheduling method for HLS. | Accept | This paper presents NeuroSchedule, a GNN-based scheduling method for high-level synthesis. To enhance scalability for various scheduling problems with different settings, the paper additionally adopts a pre-training/fine-tuning methodology. For pre-training, the paper uses data labeled by the ILP-based scheduler in a supervised fashion. Experimental results indicate that NeuroSchedule can obtain near-optimal solutions with significant speedup compared with traditional methods.
All reviews have similar opinions on the strengths and weaknesses of the paper. On the strength side, the paper is well written, and the details are clearly explained. Both the GNN-based scheduling method and the pre-training/fine-tuning approach are novel and interesting. The experiments are reasonably sound.
The concerns are: 1) the testing benchmarks are small; and 2) the optimization runtime is longer than that of the known Entropy-Directed scheduling method. During the rebuttal period, the authors provided further explanation, partially addressing these concerns.
Overall, I think it is a solid paper, particularly in terms of novelty. Therefore, the paper is recommended for acceptance.
| train | [
"TgHmvb-hq_",
"UucsR0GMakM",
"QxbW_JaMuR",
"gBoAZSsWGJH",
"Z3dIy9Sfi8b",
"OnjNh-5WX30",
"anbF_Grqbj",
"AbQa2-2Y2vw",
"D8ZO1TP_Z1E",
"agfYioCnOXD"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the valuable comments. We noticed that the reviewers are concerned about the scheduling results and the runtime of our method compared with EDS. First, for HLS, 6.1% improvement in the scheduling result is significant enhancement, which notably reduces the runtime of the circuits and thu... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
4,
3
] | [
"nips_2022_z9CkpUorPI",
"agfYioCnOXD",
"agfYioCnOXD",
"D8ZO1TP_Z1E",
"AbQa2-2Y2vw",
"anbF_Grqbj",
"nips_2022_z9CkpUorPI",
"nips_2022_z9CkpUorPI",
"nips_2022_z9CkpUorPI",
"nips_2022_z9CkpUorPI"
] |
nips_2022_Cd-b50MZ0Gc | Quantized Training of Gradient Boosting Decision Trees | Recent years have witnessed significant success in Gradient Boosting Decision Trees (GBDT) for a wide range of machine learning applications. Generally, a consensus about GBDT's training algorithms is gradients and statistics are computed based on high-precision floating points. In this paper, we investigate an essentially important question which has been largely ignored by the previous literature - how many bits are needed for representing gradients in training GBDT? To solve this mystery, we propose to quantize all the high-precision gradients in a very simple yet effective way in the GBDT's training algorithm. Surprisingly, both our theoretical analysis and empirical studies show that the necessary precisions of gradients without hurting any performance can be quite low, e.g., 2 or 3 bits. With low-precision gradients, most arithmetic operations in GBDT training can be replaced by integer operations of 8, 16, or 32 bits. Promisingly, these findings may pave the way for much more efficient training of GBDT from several aspects: (1) speeding up the computation of gradient statistics in histograms; (2) compressing the communication cost of high-precision statistical information during distributed training; (3) the inspiration of utilization and development of hardware architectures which well support low-precision computation for GBDT training. Benchmarked on CPUs, GPUs, and distributed clusters, we observe up to 2$\times$ speedup of our simple quantization strategy compared with SOTA GBDT systems on extensive datasets, demonstrating the effectiveness and potential of the low-precision training of GBDT. The code will be released to the official repository of LightGBM. | Accept | The reviewers conclude on an interesting paper with broad messaging that does make sense -- I do subscribe to it as well. I recommend it for acceptance, noting that interactions with reviewers were the occasion to provide additional remarks that have to be used to craft the camera ready, in particular for the remarks made to PPQi (mostly the technical part of this discussion). | train | [
"tQmJm0B7rm",
"c9I8M2eg_sn",
"JKMt8e1VKp8",
"gT2fucWlyVp",
"CwZEPeTXC0W",
"SDd2F9lelx",
"glFKKSj_OXs",
"dT_tUrmIlcl",
"Sqr3lokpck2"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We sincerely thank the reviewer for reviewing our paper, supporting our work, and providing valuable suggestions. We address your concerns as follows:\n\n**Regrading the novelty.** Training models with quantized gradients for neural networks are well-studied in previous literature but little is known on tree mode... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"Sqr3lokpck2",
"Sqr3lokpck2",
"dT_tUrmIlcl",
"dT_tUrmIlcl",
"glFKKSj_OXs",
"glFKKSj_OXs",
"nips_2022_Cd-b50MZ0Gc",
"nips_2022_Cd-b50MZ0Gc",
"nips_2022_Cd-b50MZ0Gc"
] |
nips_2022_-_AMpmyV0Ll | On the difficulty of learning chaotic dynamics with RNNs | Recurrent neural networks (RNNs) are wide-spread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated in time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or - more recently - imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, convey severe limitations on the expressivity of the RNN. Essential intrinsic dynamics such as multistability or chaos are disabled. This is inherently at disaccord with the chaotic nature of many, if not most, time series encountered in nature and society. It is particularly problematic in scientific applications where one aims to reconstruct the underlying dynamical system.
Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge.
Based on these analyses and insights we suggest ways of how to optimize the training process on chaotic data according to the system's Lyapunov spectrum, regardless of the employed RNN architecture. | Accept | This paper studies the ability of an RNN to learn the dynamics of a chaotic system. The authors relate the learning dynamics for these systems to the Lyapunov exponents of the underlying dynamical system. They then propose ameliorating the role of chaotic dynamics by "sparsely forced BPTT", which forces the RNN hidden state on a timescale induced by the Lyapunov exponents.
All of the reviewers supported accepting this paper and it was the highest rated paper on my stack. Reviewers cited the clear writing and motivation as well as the simplicity and effectiveness of the proposed solution. Reviewers also noted the importance of time-series analysis as a research problem. Improving modeling of chaotic time series seems like a valuable and important contribution. | train | [
"O01uYqyrASR",
"dPMNmptmHIP",
"B_U-0c0xx9",
"QudViovG0M5",
"6xoPWIacZSG",
"zRTnIcvGPJ8",
"e5MWB-Dym5",
"ZA3Lo3wYBlc",
"luRFUmJQkbP",
"PMPYyDrLJbR",
"nSHfZ3X4A6i",
"jFrW-KDbCuO",
"qkrpWC52I_v"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response!",
" We thank the referee for the kind reply and time taken to evaluate our manuscript and rebuttal. We are glad we could address your concerns, and agree that the suggested comparisons and discussion are very helpful!",
" Thank you for expanding the discussion in relation to the simi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
9,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
4
] | [
"e5MWB-Dym5",
"B_U-0c0xx9",
"QudViovG0M5",
"6xoPWIacZSG",
"qkrpWC52I_v",
"jFrW-KDbCuO",
"nSHfZ3X4A6i",
"PMPYyDrLJbR",
"nips_2022_-_AMpmyV0Ll",
"nips_2022_-_AMpmyV0Ll",
"nips_2022_-_AMpmyV0Ll",
"nips_2022_-_AMpmyV0Ll",
"nips_2022_-_AMpmyV0Ll"
] |
nips_2022_NzFtM5Pzvm | Embracing Consistency: A One-Stage Approach for Spatio-Temporal Video Grounding | Spatio-Temporal video grounding (STVG) focuses on retrieving the spatio-temporal tube of a specific object depicted by a free-form textual expression. Existing approaches mainly treat this complicated task as a parallel frame-grounding problem and thus suffer from two types of inconsistency drawbacks: feature alignment inconsistency and prediction inconsistency. In this paper, we present an end-to-end one-stage framework, termed Spatio-Temporal Consistency-Aware Transformer (STCAT), to alleviate these issues. Specially, we introduce a novel multi-modal template as the global objective to address this task, which explicitly constricts the grounding region and associates the predictions among all video frames. Moreover, to generate the above template under sufficient video-textual perception, an encoder-decoder architecture is proposed for effective global context modeling. Thanks to these critical designs, STCAT enjoys more consistent cross-modal feature alignment and tube prediction without reliance on any pre-trained object detectors. Extensive experiments show that our method outperforms previous state-of-the-arts with clear margins on two challenging video benchmarks (VidSTG and HC-STVG), illustrating the superiority of the proposed framework to better understanding the association between vision and natural language. Code is publicly available at https://github.com/jy0205/STCAT. | Accept | This paper presents a novel end-to-end framework for Spatio-Temporal video grounding where the feature alignment and prediction inconsistency can be jointly addressed. All of our reviewers believe this paper is well-presented with a novel idea and SOTA results. Overall, I would like to recommend this paper to be accepted with a poster presentation. | train | [
"Fn_XlNOs7Be",
"LCgD6BTV0qa",
"fFd9ZCoU8xN",
"i2ZjsDNn72x",
"jp-Ey7SAqJ",
"YxHAm7kccOd"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for appreciating our work and providing valuable feedback. Below are our responses to the raised concerns.\n\n*Q-1: I find that there is no explicit loss to enforce the feature consistency, I wonder how your designed module can make the feature “consistency”. I think more visualization and a... | [
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
4,
3,
4
] | [
"YxHAm7kccOd",
"jp-Ey7SAqJ",
"i2ZjsDNn72x",
"nips_2022_NzFtM5Pzvm",
"nips_2022_NzFtM5Pzvm",
"nips_2022_NzFtM5Pzvm"
] |
nips_2022_Cr4_3ptitj | Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks | Few-shot learning for neural networks (NNs) is an important problem that aims to train NNs with a few data. The main challenge is how to avoid overfitting since over-parameterized NNs can easily overfit to such small dataset. Previous work (e.g. MAML by Finn et al. 2017) tackles this challenge by meta-learning, which learns how to learn from a few data by using various tasks. On the other hand, one conventional approach to avoid overfitting is restricting hypothesis spaces by endowing sparse NN structures like convolution layers in computer vision. However, although such manually-designed sparse structures are sample-efficient for sufficiently large datasets, they are still insufficient for few-shot learning. Then the following questions naturally arise: (1) Can we find sparse structures effective for few-shot learning by meta-learning? (2) What benefits will it bring in terms of meta-generalization? In this work, we propose a novel meta-learning approach, called Meta-ticket, to find optimal sparse subnetworks for few-shot learning within randomly initialized NNs. We empirically validated that Meta-ticket successfully discover sparse subnetworks that can learn specialized features for each given task. Due to this task-wise adaptation ability, Meta-ticket achieves superior meta-generalization compared to MAML-based methods especially with large NNs. | Accept | This paper proposes a meta-learning algorithm which aims to find a subnetwork of a randomly initialized network, that is optimized for each task. Specifically, the authors propose to meta-learn the latent continuous parameters that correspond to a task-specific binary mask for each task, and provide empirical evidence and theoretical analysis which show that the method does rapid learning for each task rather than reusing the meta-learned features. The proposed Meta-ticket algorithm is validated on the few-shot classification tasks from various domains, and is shown to outperform existing meta-learning algorithms such as MAML, ANIL, and BOIL.
The reviewers were in general positive about the idea of finding a lottery ticket, i.e., an optimal subnetwork for each task in a meta-learning scenario, and about the discussion on rapid learning vs. feature reuse. Also, they found the experimental validation sufficiently extensive and the paper well written.
However, there was a major concern regarding the work's limited novelty over existing works that aim to learn a binary masking in meta-learning or multi-task scenarios, such as [Alizadeh et al. 2022] and [Mallya et al. 2018]. Also, while not mentioned by the reviewers, [Lee et al. 20], cited in the paper, also aims to learn a task-adaptive masking of shared initialization parameters. While the authors provided an experimental comparison to ProsPr [Alizadeh et al. 2022], other methods were not discussed or compared against despite their relevance. There were also other minor concerns, such as the missing ablation study on the hyperparameter and the degraded performance with small neural networks, but these were mostly addressed in the author responses.
In sum, this is a well-written paper that provides a fresh view of task adaptation in meta-learning from the perspective of the lottery ticket hypothesis and the feature reuse vs. rapid adaptation argument. However, due to the limited novelty, discussion, and experimental comparison and analysis relative to existing works, I recommend a borderline accept for this paper in its current state. The authors are strongly advised to include a discussion of [Mallya et al. 18] and an experimental comparison against [Lee et al. 20] in the final version of the paper.
[Mallya et al. 18] Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights, CVPR 2018
[Lee et al. 20] Learning to Balance: Bayesian Meta-Learning for Imbalanced and Out-of-distribution Tasks, ICLR 2020
[Alizadeh et al. 22] Prospect Pruning: Finding Trainable Weights at Initialization using Meta-Gradients, ICLR 2022 | train | [
"FT2-uit-ayC",
"B46qXGmOPm",
"PYNI1PddIAe",
"xspZ1IQTEgt",
"xG9OHNe1ik",
"Io3-GX7pZK2",
"UQ-t68fiZAf",
"psZZe4xjFPK",
"phCxuJzQrIP",
"KMXmbp2ne2RN",
"LI8aK3QPBJr",
"HsgknrvCVnd",
"hsENL_PuFdS",
"ul2idaS2CI-",
"OaaOxQmy6E0",
"yH5Wqfv2OSa",
"D_GSZvuUrV"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \n> In the results on specific general/specific adaptations, one possible analysis will be more helpful to better understand the meta-ticket, such as why was the meta-ticket not able to acquire a gain in CUB->CUB setting?\n\nThanks for posing such an interesting question.\nOne possible reason is: MAML-based metho... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4,
3
] | [
"B46qXGmOPm",
"hsENL_PuFdS",
"HsgknrvCVnd",
"Io3-GX7pZK2",
"phCxuJzQrIP",
"psZZe4xjFPK",
"psZZe4xjFPK",
"D_GSZvuUrV",
"yH5Wqfv2OSa",
"LI8aK3QPBJr",
"OaaOxQmy6E0",
"hsENL_PuFdS",
"ul2idaS2CI-",
"nips_2022_Cr4_3ptitj",
"nips_2022_Cr4_3ptitj",
"nips_2022_Cr4_3ptitj",
"nips_2022_Cr4_3pti... |
nips_2022_ZPyKSBaKkiO | FR: Folded Rationalization with a Unified Encoder | Rationalization aims to strengthen the interpretability of NLP models by extracting a subset of human-intelligible pieces of their input texts. Conventional works generally employ a two-phase model in which a generator selects the most important pieces, followed by a predictor that makes predictions based on the selected pieces. However, such a two-phase model may incur the degeneration problem where the predictor overfits to the noise generated by a not yet well-trained generator and in turn, leads the generator to converge to a suboptimal model that tends to select senseless pieces. To tackle this challenge, we propose Folded Rationalization (FR) that folds the two phases of the rationale model into one from the perspective of text semantic extraction. The key idea of FR is to employ a unified encoder between the generator and predictor, based on which FR can facilitate a better predictor by access to valuable information blocked by the generator in the traditional two-phase model and thus bring a better generator. Empirically, we show that FR improves the F1 score by up to 10.3% as compared to state-of-the-art methods. | Accept | There is consensus between the reviewers that this is a worthwhile paper suitable for this venue. I also appreciate the extensive back-and-forth between the authors and the reviewers, which seems to have improved the paper during the reviewing period. | train | [
"3kx2Gux1aj9",
"EjO22lhG7Hu",
"Of8AzDI7SH",
"qQ7bIUWn1ip",
"1qDAwyJRVn",
"Y-cNRD13yII",
"kFSrKbG020U",
"b_HX0XimYzf",
"VjRB32lkXR6",
"qfsts8jsD4",
"qdO6WixF58",
"BXc3WwHqY41",
"r0EVu7iTezf",
"T0WXWF5VMhg",
"J50qz6a_Gtg",
"YWBmv1PUlhS",
"Bq7YRgSuL9e",
"kqE2dH0k4zJ",
"LVtd0tRHpy3",... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" Thank you for the response and added experiments. My questions have been addressed. I will keep my original score.",
" Thank you very much for taking the time to review our paper! With best wishes to you and yours!",
" Thank you very much for taking the time to review our paper! With best wishes to you and yo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"VjRB32lkXR6",
"tiSqpPHgj2OA",
"H_R2CtoIip",
"1qDAwyJRVn",
"kFSrKbG020U",
"r0EVu7iTezf",
"b_HX0XimYzf",
"qfsts8jsD4",
"J50qz6a_Gtg",
"T0WXWF5VMhg",
"BXc3WwHqY41",
"oQ_Zmz2Y--",
"kqE2dH0k4zJ",
"Bq7YRgSuL9e",
"80fezxG7r3E",
"nips_2022_ZPyKSBaKkiO",
"B2U1H-g3xT",
"B2U1H-g3xT",
"B2U1... |
nips_2022_aQySSrCbBul | Generalization Properties of NAS under Activation and Skip Connection Search | Neural Architecture Search (NAS) has fostered the automatic discovery of state-of-the-art neural architectures. Despite the progress achieved with NAS, so far there is little attention to theoretical guarantees on NAS. In this work, we study the generalization properties of NAS under a unifying framework enabling (deep) layer skip connection search and activation function search. To this end, we derive the lower (and upper) bounds of the minimum eigenvalue of the Neural Tangent Kernel (NTK) under the (in)finite-width regime using a certain search space including mixed activation functions, fully connected, and residual neural networks. We use the minimum eigenvalue to establish generalization error bounds of NAS in the stochastic gradient descent training. Importantly, we theoretically and experimentally show how the derived results can guide NAS to select the top-performing architectures, even in the case without training, leading to a train-free algorithm based on our theory. Accordingly, our numerical validation shed light on the design of computationally efficient methods for NAS. Our analysis is non-trivial due to the coupling of various architectures and activation functions under the unifying framework and has its own interest in providing the lower bound of the minimum eigenvalue of NTK in deep learning theory. | Accept | This work relates NAS to the conditioning of a DNN through the NTK framework. This work is well supported theoretically and empirically, and making this connexion is surprising. Given the potential interest for the NAS community, I recommend accepting this paper. | train | [
"PzdA2wtITUD",
"5qn5ufHmX6",
"n_QjFomhPjA",
"UIgQV4b06UG",
"3edogJBzuqD",
"1iK5UMhqA7Y",
"lJm1wiXZOe9",
"5OBSuVNemsc",
"_8BKJLQjnj",
"LhWoq1xzo8t",
"Ph8zspFwXL9",
"7X0XiCH43J_",
"sf1tY8L8yjp",
"AJWqgMr9CeP",
"bcysAdgUQBx",
"GLQQcbDa93L",
"hIMSbSTDiGj",
"fL3-XsgtIuOE",
"FmnjimDhwv... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official... | [
" Thank you for your comment, my answer indeed happened concurrently with your change.\n\nThough I find the new title / abstract much better, I decided to keep my rating to 6 after some hesitations, due to limited significance.",
" Dear reviewer h7Bq, \n\nWe are thankful for your feedback and thoughtful questions... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
4,
3
] | [
"3edogJBzuqD",
"Ph8zspFwXL9",
"UIgQV4b06UG",
"1iK5UMhqA7Y",
"lJm1wiXZOe9",
"LhWoq1xzo8t",
"hIMSbSTDiGj",
"nips_2022_aQySSrCbBul",
"sf1tY8L8yjp",
"7X0XiCH43J_",
"FmnjimDhwvy",
"fL3-XsgtIuOE",
"bcysAdgUQBx",
"G7iJ-GGJ-ai",
"G7iJ-GGJ-ai",
"vDNAK89XmqJ",
"A1yQaxuBJXS",
"vDNAK89XmqJ",
... |
nips_2022_Ku1afTnmozi | Improving Out-of-Distribution Generalization by Adversarial Training with Structured Priors | Deep models often fail to generalize well in test domains when the data distribution differs from that in the training domain. Among numerous approaches to address this Out-of-Distribution (OOD) generalization problem, there has been a growing surge of interest in exploiting Adversarial Training (AT) to improve OOD performance. Recent works have revealed that the robust model obtained by conducting sample-wise AT also retains transferability to biased test domains. In this paper, we empirically show that sample-wise AT has limited improvement on OOD performance. Specifically, we find that AT can only maintain performance at smaller scales of perturbation while Universal AT (UAT) is more robust to larger-scale perturbations. This provides us with clues that adversarial perturbations with universal (low dimensional) structures can enhance the robustness against large data distribution shifts that are common in OOD scenarios. Inspired by this, we propose two AT variants with low-rank structures to train OOD-robust models. Extensive experiments on DomainBed benchmark show that our proposed approaches outperform Empirical Risk Minimization (ERM) and sample-wise AT. Our code is available at https://github.com/NOVAglow646/NIPS22-MAT-and-LDAT-for-OOD. | Accept | The authors have provided a strong rebuttal addressing many of the concerns brought up by reviewers. Two of the reviewers have in response increased their scores significantly. At the end of the process most of the reviewers have converged in favour of acceptance. | train | [
"hAYCvMmCfV_",
"GVrgpxqOWHn",
"xEo8ayg2MS2",
"aEr3J_70Tc",
"zMkqWXQ1q3y",
"uy2tV__pT-",
"MTjCobPXJNJ",
"8pwDZ1B98f4",
"2PivK1p9EeM",
"8nOFDAjmNz0l",
"bNyRQSHT1_-",
"s_l5g0dwkL4",
"OssbHXDSuPo",
"TAuxDOMxb-",
"HYeq8oS8DE0",
"BVVpq2KhK8B",
"9FA5XLREwd",
"JWOMYmnGbWy",
"4_Wbmso4d5",... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
... | [
" It's interesting to see that the proposed method can outperform NCDG, which performs conventional data augmentation for domain generalization. I am more positive about this submission. \n\nJust curious (nothing to do with the decision of NeurIPS), how does the proposed method perform under the setting of data cor... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3,
3,
2
] | [
"OssbHXDSuPo",
"2PivK1p9EeM",
"uy2tV__pT-",
"MTjCobPXJNJ",
"rCQjG9PT9uc",
"PYTjBFiMJi",
"2PivK1p9EeM",
"4_Wbmso4d5",
"9FA5XLREwd",
"rCQjG9PT9uc",
"PYTjBFiMJi",
"DTMxa0xaER8",
"JWOMYmnGbWy",
"zLVvDOsZFp3",
"rCQjG9PT9uc",
"PYTjBFiMJi",
"DTMxa0xaER8",
"zLYzGhJrZfQ",
"zLVvDOsZFp3",
... |
nips_2022_LODRFJr96v | Batch Bayesian Optimization on Permutations using the Acquisition Weighted Kernel | In this work we propose a batch Bayesian optimization method for combinatorial problems on permutations, which is well suited for expensive-to-evaluate objectives. We first introduce LAW, an efficient batch acquisition method based on determinantal point processes using the acquisition weighted kernel. Relying on multiple parallel evaluations, LAW enables accelerated search on combinatorial spaces. We then apply the framework to permutation problems, which have so far received little attention in the Bayesian Optimization literature, despite their practical importance. We call this method LAW2ORDER. On the theoretical front, we prove that LAW2ORDER has vanishing simple regret by showing that the batch cumulative regret is sublinear. Empirically, we assess the method on several standard combinatorial problems involving permutations such as quadratic assignment, flowshop scheduling and the traveling salesman, as well as on a structure learning task. | Accept | This work introduces a new method, based on Bayesian optimization, for solving combinatorial problems. It uses determinantal point processes in their LAW method, which is applied to permutation problems.
The authors provide a theoretical analysis of their method, as well as empirical evidence on several of these problems.
The authors appropriately addressed the reviewers' comments, and they all feel comfortable recommending acceptance, and so do I. | train | [
"QMcfb8-z51W",
"bcxcKfzuJVa",
"o96adBof0Hw",
"OL2gDoOvOEw",
"ZiBF_qZM3Mb",
"7CHGdpQ5Xbv",
"K5LRWThRoK0f",
"ijjyBS9S3m",
"N0v989oCQD6",
"_L0yb7c_lw",
"64jdLXlBuZ5",
"Gn6VV6Nt5Jp",
"UBHNrT3-ob4",
"T604v0RpKE",
"buE72QrjpdS",
"x4XrnSdc3ft",
"--96zbxj8lU",
"KALUFVrX-AA"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"o... | [
" Thank you for your detailed answer to my questions.\nI was convinced that it is difficult to use existing information-theoretic acquisition functions on combinatorial spaces. I think this is a material to support the strength of the proposed method.\nIt is also understandable that UCB would actually do the explor... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
2
] | [
"UBHNrT3-ob4",
"OL2gDoOvOEw",
"K5LRWThRoK0f",
"7CHGdpQ5Xbv",
"KALUFVrX-AA",
"ijjyBS9S3m",
"N0v989oCQD6",
"Gn6VV6Nt5Jp",
"nips_2022_LODRFJr96v",
"KALUFVrX-AA",
"x4XrnSdc3ft",
"x4XrnSdc3ft",
"buE72QrjpdS",
"--96zbxj8lU",
"nips_2022_LODRFJr96v",
"nips_2022_LODRFJr96v",
"nips_2022_LODRFJ... |
nips_2022_6OLBVpoxrbW | Optimal Weak to Strong Learning | The classic algorithm AdaBoost allows to convert a weak learner, that is an algorithm that produces a hypothesis which is slightly better than chance, into a strong learner, achieving arbitrarily high accuracy when given enough training data. We present a new algorithm that constructs a strong learner from a weak learner, but uses less training data than AdaBoost and all other weak to strong learners to achieve the same generalization bounds. A sample complexity lower bound shows that our new algorithm uses the minimum possible amount of training data and is thus optimal. Hence, this work settles the sample complexity of the classic problem of constructing a strong learner from a weak learner. | Accept | The paper makes nice progress on our understanding of boosting. | train | [
"A0tBaIU0DH9",
"8z_JvxmO7IV",
"aq_agXqo1Jg",
"-fedeWSL1Fp",
"FAAItdFrcKh",
"VGS5wud1WyK",
"rAQ665oaUMc"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Appreciate your responses. Would definitely like to see extensions to the conclusion to discuss in more detail the connections between the approach presented in the paper and the different open problems mentioned. To what other problems might the proposed approach be applied? Would likely want to see the paper ac... | [
-1,
-1,
-1,
-1,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"-fedeWSL1Fp",
"rAQ665oaUMc",
"VGS5wud1WyK",
"FAAItdFrcKh",
"nips_2022_6OLBVpoxrbW",
"nips_2022_6OLBVpoxrbW",
"nips_2022_6OLBVpoxrbW"
] |
nips_2022_pZtdVOQuA3 | Differentiable Rendering with Reparameterized Volume Sampling | We propose an alternative rendering algorithm for neural radiance fields based on importance sampling. In view synthesis, a neural radiance field approximates underlying density and radiance fields based on a sparse set of views of a scene. To generate a pixel of a novel view, it marches a ray through the pixel and computes a weighted sum of radiance emitted from a dense set of ray points. This rendering algorithm is fully differentiable and facilitates gradient-based optimization of the fields. However, in practice, only a tiny opaque portion of the ray contributes most of the radiance to the sum. Therefore, we can avoid computing radiance in the rest part. In this work, we use importance sampling to pick non-transparent points on the ray. Specifically, we generate samples according to the probability distribution induced by the density field. Our main contribution is the reparameterization of the sampling algorithm. It allows end-to-end learning with gradient descent as in the original rendering algorithm. With our approach, we can optimize a neural radiance field with just a few radiance field evaluations per ray. As a result, we alleviate the costs associated with the color component of the neural radiance field at the additional cost of the density sampling algorithm. | Reject | Reviewers noted that the paper contains some useful ideas for acceleration of neural rendering pipelines. However, the paper as initially presented was rather preliminary with very limited evaluation. The authors present considerably more material in the rebuttal, but as noted by the reviewers post-rebuttal, this extra material fails to demonstrate the computational advantage which is a primary contribution of this method. It is hoped that the authors can continue this line of work, following the reviewers' suggestions, and demonstrate the method's value. It may be that these ideas have other advantages, or that the advantage is ultimately less significant, in which case a more specific venue such as 3DV may be appropriate.
Although 2v6p did not provide a post-rebuttal comment, their initial review was closely consistent with the others (albeit marginally more positive), so the post-rebuttal analysis of the reviewers and AC remains valid.
| train | [
"kRJ6BcsIoh",
"fxVyiMhjOMC",
"QidS_o-JX-2p",
"_tFl4ir2Cwp",
"vXRy-6KAie",
"gXndcpF5PC",
"DFRBlP3I_-k"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response with extensive additional experiments. However, the concerns addressed by reviewers are not clearly addressed in the rebuttal, especially about the difference in the integral for color evaluation and computational advantage. The broken references and typos are also not corrected, which ... | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"fxVyiMhjOMC",
"QidS_o-JX-2p",
"_tFl4ir2Cwp",
"nips_2022_pZtdVOQuA3",
"nips_2022_pZtdVOQuA3",
"nips_2022_pZtdVOQuA3",
"nips_2022_pZtdVOQuA3"
] |