paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 classes) | meta_review (string, 29-10k chars) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_BqnMaAvTNVq | Spectral Bias in Practice: The Role of Function Frequency in Generalization | Despite their ability to represent highly expressive functions, deep learning models seem to find simple solutions that generalize surprisingly well. Spectral bias -- the tendency of neural networks to prioritize learning low frequency functions -- is one possible explanation for this phenomenon, but so far spectral bias has primarily been observed in theoretical models and simplified experiments. In this work, we propose methodologies for measuring spectral bias in modern image classification networks on CIFAR-10 and ImageNet. We find that these networks indeed exhibit spectral bias, and that interventions that improve test accuracy on CIFAR-10 tend to produce learned functions that have higher frequencies overall but lower frequencies in the vicinity of examples from each class. This trend holds across variation in training time, model architecture, number of training examples, data augmentation, and self-distillation. We also explore the connections between function frequency and image frequency and find that spectral bias is sensitive to the low frequencies prevalent in natural images. On ImageNet, we find that learned function frequency also varies with internal class diversity, with higher frequencies on more diverse classes. Our work enables measuring and ultimately influencing the spectral behavior of neural networks used for image classification, and is a step towards understanding why deep models generalize well. | Accept | The authors have largely convinced the reviewers (and certainly me) of the merits of the paper after extensive and detailed rebuttal and discussion. I am happy to recommend acceptance. | val | [
"2mLojquD4js",
"EEJbpuT3GC-",
"EXhvHPb_57",
"niNGNPaEIc",
"evVvTPYA-Yi",
"G3sLSREGlG6",
"1VDraGuzI8",
"jPbI3EIzLu",
"bb53P3pxJ1A",
"TU7ue6Bbm3",
"fchGSRxHo5E",
"GhN1f92XuRk",
"a602seFrEaP",
"07WjMw-249Q",
"ZNz5lb3_di2",
"gbBa_PXmQi8",
"nrMKJSGobCc",
"SZeEsNOrurj",
"vOvqANCQ5UV"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi Reviewer 7SRK, \n\nWe posted the following reply this morning but realized it is somewhat difficult to find because each reply is nested inside the previous. So we are re-posting here to make the requested figure easier to find. Thanks for your engagement with and consideration of our work.\n\nThanks for the f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"gbBa_PXmQi8",
"niNGNPaEIc",
"G3sLSREGlG6",
"jPbI3EIzLu",
"fchGSRxHo5E",
"GhN1f92XuRk",
"nips_2022_BqnMaAvTNVq",
"bb53P3pxJ1A",
"vOvqANCQ5UV",
"SZeEsNOrurj",
"nrMKJSGobCc",
"a602seFrEaP",
"07WjMw-249Q",
"gbBa_PXmQi8",
"nips_2022_BqnMaAvTNVq",
"nips_2022_BqnMaAvTNVq",
"nips_2022_BqnMa... |
nips_2022_5g-h_DILemH | Robust Binary Models by Pruning Randomly-initialized Networks | Robustness to adversarial attacks was shown to require a larger model capacity, and thus a larger memory footprint. In this paper, we introduce an approach to obtain robust yet compact models by pruning randomly-initialized binary networks. Unlike adversarial training, which learns the model parameters, we initialize the model parameters as either +1 or −1, keep them fixed, and find a subnetwork structure that is robust to attacks. Our method confirms the Strong Lottery Ticket Hypothesis in the presence of adversarial attacks, and extends this to binary networks. Furthermore, it yields more compact networks than existing works, with competitive performance, by 1) adaptively pruning different network layers; 2) exploiting an effective binary initialization scheme; 3) incorporating a last batch normalization layer to improve training stability. Our experiments demonstrate that our approach not only always outperforms the state-of-the-art robust binary networks, but can also achieve better accuracy than full-precision ones on some datasets. Finally, we show the structured patterns of our pruned binary networks.
| Accept | The reviewers agree that the paper studies an interesting problem, training robust binary neural networks, and that it does a good job of evaluating the proposed approach on multiple datasets and compares well with baselines. However, the paper also has some drawbacks: the proposed method has limited novelty and is a combination of existing techniques, comparisons to standard training-based approaches are missing, and evaluations are limited to ResNets. Overall, the paper is borderline. I suggest acceptance and encourage the authors to include all the changes that came up during discussion in the final version and discuss the limitations. | train | [
"XJ7zLlqveIr",
"y0lk1CFzMB0",
"Rl9iFFoy6V",
"yDkaqJBA3JJ",
"J6S7ZUEike",
"eZW1Xh83mN8M",
"aZVIqsNbls",
"AbMlYQKXfIE",
"iIVzlfrXuW",
"47AbQ041RGE",
"cQXiidBN4OB",
"j2yqPEnotda",
"SXMtB_cB4j",
"9dp5l--iMT3",
"XmrJtys65n",
"S1YCT_T2Gyr",
"oZ9Gq7uY0N",
"VgE5h7sGolQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer WyYF,\n\nThe author-reviewer discussion period will end soon, we are looking forward to your feedback to our response.\nDo we well address your concerns about our paper?\n\nWe really appreciate your time and effort to review our paper.\n\nPaper6197 Authors",
" Dear Reviewer WydW,\n\nThe author-rev... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"9dp5l--iMT3",
"AbMlYQKXfIE",
"yDkaqJBA3JJ",
"cQXiidBN4OB",
"eZW1Xh83mN8M",
"47AbQ041RGE",
"AbMlYQKXfIE",
"SXMtB_cB4j",
"nips_2022_5g-h_DILemH",
"VgE5h7sGolQ",
"j2yqPEnotda",
"oZ9Gq7uY0N",
"S1YCT_T2Gyr",
"XmrJtys65n",
"nips_2022_5g-h_DILemH",
"nips_2022_5g-h_DILemH",
"nips_2022_5g-h_... |
nips_2022_Ep98SUx9gka | Re-Analyze Gauss: Bounds for Private Matrix Approximation via Dyson Brownian Motion | Given a symmetric matrix $M$ and a vector $\lambda$, we present new bounds on the Frobenius-distance utility of the Gaussian mechanism for approximating $M$ by a matrix whose spectrum is $\lambda$, under $(\varepsilon,\delta)$-differential privacy. Our bounds depend on both $\lambda$ and the gaps in the eigenvalues of $M$, and hold whenever the top $k+1$ eigenvalues of $M$ have sufficiently large gaps. When applied to the problems of private rank-$k$ covariance matrix approximation and subspace recovery, our bounds yield improvements over previous bounds. Our bounds are obtained by viewing the addition of Gaussian noise as a continuous-time matrix Brownian motion. This viewpoint allows us to track the evolution of eigenvalues and eigenvectors of the matrix, which are governed by stochastic differential equations discovered by Dyson. These equations allow us to bound the utility as the square-root of a sum-of-squares of perturbations to the eigenvectors, as opposed to a sum of perturbation bounds obtained via Davis-Kahan-type theorems.
| Accept | This paper brings new mathematical tools (Dyson Brownian motion) to analyze the utility of a new mechanism, which improves over existing techniques when certain conditions on the original matrix spectrum are met. Although the conditions can be restrictive, the novelty of the idea can open new doors to differentially private mechanism design. | train | [
"xwl_-gjcGhW",
"YpJzlGVWKNe",
"0ZAtYGBTfUA",
"4NmDuXWiBbX",
"iZPxVJj303R",
"5GAlzcXZIxa",
"D4-dKbFohbl",
"-SjF9V-Ei7",
"mxMZX53_kn7",
"jeB79LIeRjA",
"FfuPio0V_cl",
"I8SSxNiixCM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for these responses. My concern on Assumption 2.1 and expectation bound in Theorem 2.2 has been addressed.",
" Thank you for your response. As indicated in my review, I was hoping for theoretical justification with pointers to other works in the literature with similar assumptions. However, arguing ab... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"iZPxVJj303R",
"5GAlzcXZIxa",
"-SjF9V-Ei7",
"FfuPio0V_cl",
"I8SSxNiixCM",
"mxMZX53_kn7",
"jeB79LIeRjA",
"jeB79LIeRjA",
"nips_2022_Ep98SUx9gka",
"nips_2022_Ep98SUx9gka",
"nips_2022_Ep98SUx9gka",
"nips_2022_Ep98SUx9gka"
] |
nips_2022_oOte_397Q4P | Sparse Structure Search for Delta Tuning | Adapting large pre-trained models (PTMs) through fine-tuning imposes prohibitive computational and storage burdens. Recent studies of delta tuning (DT), i.e., parameter-efficient tuning, find that only optimizing a small portion of parameters conditioned on PTMs could yield on-par performance compared to conventional fine-tuning. Generally, DT methods exquisitely design delta modules (DT modules) which could be applied to arbitrary fine-grained positions inside PTMs. However, the effectiveness of these fine-grained positions largely relies on sophisticated manual designation, thereby usually producing sub-optimal results. In contrast to the manual designation, we explore constructing DT modules in an automatic manner. We automatically \textbf{S}earch for the \textbf{S}parse \textbf{S}tructure of \textbf{Delta} Tuning (S$^3$Delta). Based on a unified framework of various DT methods, S$^3$Delta conducts the differentiable DT structure search through bi-level optimization and proposes a shifted global sigmoid method to explicitly control the number of trainable parameters. Extensive experiments show that S$^3$Delta surpasses manual and random structures with fewer trainable parameters. The searched structures preserve more than 99\% fine-tuning performance with 0.01\% trainable parameters. Moreover, the advantage of S$^3$Delta is amplified with extremely low trainable-parameter budgets (0.0009\%$\sim$0.01\%). The searched structures are transferable and explainable, providing suggestions and guidance for the future design of DT methods. Our codes are publicly available at \url{https://github.com/thunlp/S3Delta}. | Accept | This paper presents work on parameter-efficient tuning of large pre-trained models. The main contribution is an automated search for the parameter-efficient tuning modules, in a neural architecture search style.
The reviewers raised questions regarding the overall efficiency of the training scheme given the cost of the neural architecture search. However, they believed that the empirical results and overall novelty of the approach together were a solid contribution to research in this area. The additional clarifications brought into the main text help to make these efficiency concerns clearer and better position the work. Based on the overall novelty of the approach and results, this paper is ready for publication in NeurIPS. | train | [
"d9OHm8U0V1S",
"4bflUpIGTU0",
"nwHKucDULCa",
"71x_avMkWeb",
"xWDAhBQNOmt",
"zFIZSTJeWNm",
"3D_Ii5N_y5y",
"F60naYfFKIT",
"dvTWcc5jY0_",
"VnQq8vw1jf0"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors’ response. \n\nAs shown in my initial review, I still think that the performance improvement is not significant. It is obvious that NAS can improve performance, so it lacks insight. However, the authors have solved my part concerns (e.g., oracle, cost of NAS). I would update my rating from ... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
3
] | [
"xWDAhBQNOmt",
"VnQq8vw1jf0",
"dvTWcc5jY0_",
"F60naYfFKIT",
"3D_Ii5N_y5y",
"nips_2022_oOte_397Q4P",
"nips_2022_oOte_397Q4P",
"nips_2022_oOte_397Q4P",
"nips_2022_oOte_397Q4P",
"nips_2022_oOte_397Q4P"
] |
nips_2022_W1MUJv5zaXP | Modeling Human Exploration Through Resource-Rational Reinforcement Learning | Equipping artificial agents with useful exploration mechanisms remains a challenge to this day. Humans, on the other hand, seem to manage the trade-off between exploration and exploitation effortlessly. In the present article, we put forward the hypothesis that they accomplish this by making optimal use of limited computational resources. We study this hypothesis by meta-learning reinforcement learning algorithms that sacrifice performance for a shorter description length (defined as the number of bits required to implement the given algorithm). The emerging class of models captures human exploration behavior better than previously considered approaches, such as Boltzmann exploration, upper confidence bound algorithms, and Thompson sampling. We additionally demonstrate that changing the description length in our class of models produces the intended effects: reducing description length captures the behavior of brain-lesioned patients while increasing it mirrors cognitive development during adolescence. | Accept | This paper presents a resource constrained variant of the RL^2 algorithm, and studies how well it models human exploration behavior in bandit tasks. The resource constraint is the policy description length (in bits for the learned parameters). Based on the resource constraint, the algorithm produces a mix of Boltzmann-like and Thompson-sampling exploration. The space of exploration strategies from the algorithm is compared to human behavior data in three human populations. The fit to human data is substantially better than many alternatives that have been considered in previous papers.
The reviewers found many strong contributions in this paper. A particular strength was the ability of the proposed model to fit human behavior data from three substantially different populations in the past literature (xjvi, ). The reviewers also appreciated the clarity of the presentation that mixed ideas from neuroscience and machine learning (xjvi, orHY), and the motivation. Reviewers raised multiple detailed questions about the approach and results. The author response addressed each of the concerns in detail, and many reviewer suggestions were incorporated to improve the paper presentation. No reviewers raised additional concerns after the author discussion, and remaining questions were limited to potential directions for future work. Reviewers expressed interest in this work and in potential directions for future work that builds on the presented ideas.
Four reviewers indicate acceptance of this paper for its novel contribution of a better model of human exploration behavior in bandit tasks. The paper is therefore accepted. | train | [
"psxkYFhAQVA",
"oYCNKKkCzMZ",
"TnKNf98FBHag",
"t6Tbh_XlHA7",
"tWT1k66BD3w",
"VYoJ_W3Wntx",
"gL43Ri8AV5H",
"m8xv_d63jpr",
"2BQr9V3XW-h",
"ITnOomQb6Oj",
"ehjJV7iz9mJ",
"RR4R0jC6Rq2",
"tvSxlX1Dqpc",
"yPkclkgM5g",
"OjiIlhk-hsD",
"d7rpgNghsSr"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you -- I have raised my score accordingly.",
" We have uploaded a new PDF and changed the title to \"Modeling Human Exploration through\nResource-Rational Reinforcement Learning\". Thank you very much for all of your feedback!",
" Thank you for the detailed reply - I am very glad that the authors found ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"oYCNKKkCzMZ",
"TnKNf98FBHag",
"VYoJ_W3Wntx",
"tWT1k66BD3w",
"RR4R0jC6Rq2",
"gL43Ri8AV5H",
"m8xv_d63jpr",
"OjiIlhk-hsD",
"ITnOomQb6Oj",
"yPkclkgM5g",
"tvSxlX1Dqpc",
"d7rpgNghsSr",
"nips_2022_W1MUJv5zaXP",
"nips_2022_W1MUJv5zaXP",
"nips_2022_W1MUJv5zaXP",
"nips_2022_W1MUJv5zaXP"
] |
nips_2022_GTde0BIHMGB | Order-Invariant Cardinality Estimators Are Differentially Private | We consider privacy in the context of streaming algorithms for cardinality estimation.
We show that a large class of algorithms all satisfy $\epsilon$-differential privacy,
so long as (a) the algorithm is combined with a simple
down-sampling procedure, and (b) the input stream cardinality
is $\Omega(k/\epsilon)$. Here, $k$ is a certain parameter of the sketch
that is always at most the sketch size in bits, but is typically much smaller.
We also show that, even with no modification, algorithms in our
class satisfy $(\epsilon, \delta)$-differential privacy,
where $\delta$ falls exponentially with the stream cardinality.
Our analysis applies to essentially all popular cardinality estimation
algorithms, and substantially generalizes and tightens privacy bounds from earlier works.
Our approach is faster and exhibits a better utility-space
tradeoff than prior art. | Accept | Differential privacy is often an important design constraint in algorithms, and previous work has studied the question of designing streaming algorithms that satisfy DP in addition to other typical streaming algorithm desiderata (small space, small update time). For the problem of cardinality estimation, previous work has shown that specific streaming algorithms are, or can be modified to be, differentially private. This work shows a general result saying that any streaming algorithm for cardinality estimation, under mild assumptions, is differentially private. This is a clean unified result that the reviewers found appealing. I am in agreement and recommend acceptance.
| train | [
"8A2JfVmGnh",
"ypTUYlwx41od",
"LwZsfSi-zDE",
"IiOjBCOnrRrS",
"AWmUI7RMpZ-J",
"cAQqMOwwnR",
"pBLRBfNZ1f",
"m5fTfLowzMW",
"N7u3SfBuT5h",
"LdY885-e_gO",
"5KraWAn6JxW"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the comments. ",
" Thank you for the clarifications!",
" - I am a little bit worried about the accuracy of the proposed sketching with down-subsampling. In Section 5, only update time and space ratio are evaluated. However, it is unclear how QLL and PHLL compare in terms of accuracy. If QLL is muc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"IiOjBCOnrRrS",
"cAQqMOwwnR",
"5KraWAn6JxW",
"LdY885-e_gO",
"N7u3SfBuT5h",
"m5fTfLowzMW",
"nips_2022_GTde0BIHMGB",
"nips_2022_GTde0BIHMGB",
"nips_2022_GTde0BIHMGB",
"nips_2022_GTde0BIHMGB",
"nips_2022_GTde0BIHMGB"
] |
nips_2022_n3lr7GdcbyD | Optimal and Adaptive Monteiro-Svaiter Acceleration | We develop a variant of the Monteiro-Svaiter (MS) acceleration framework that removes the need to solve an expensive implicit equation at every iteration. Consequently, for any $p\ge 2$ we improve the complexity of convex optimization with Lipschitz $p$th derivative by a logarithmic factor, matching a lower bound. We also introduce an MS subproblem solver that requires no knowledge of problem parameters, and implement it as either a second- or first-order method by solving linear systems or applying MinRes, respectively. On logistic regression problems our method outperforms previous accelerated second-order methods, but underperforms Newton's method; simply iterating our first-order adaptive subproblem solver is competitive with L-BFGS. | Accept | The paper proposes a variant of MS acceleration that requires no bisection. This in turn can be used to accelerate the cubic regularization method and other proximal-based methods, matching previously established lower bounds. The specialized second-order variant requires no knowledge of the Lipschitz constant of the Hessian by using what can be considered a new type of line search. Furthermore, the level of writing and contributions was enough to motivate the reviewers to examine the paper in depth, including the supplementary material. Finally, the expert reviewers were very impressed with the contributions of the paper, and believe this will have repercussions outside of the immediate targeted applications. | train | [
"4_hq1wDJ-OC",
"paVKlt_b6p",
"QQku55ZUrh5",
"ojRXfxcMf4V",
"avVqoH1FuIP",
"Lm8IR86ZLqw",
"9zjO2jyoLy7",
"vXYleHMb2Tk",
"ZNUUvMdPzVw",
"vBfwlBg8u0"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Since the base of the logarithm only changes constant factors, we omit it when using big-O notation. You are correct that binary logarithms naturally show up in the analysis of Algorithms 2 and 3 - we will clarify this in the revision. \n\nIt is true that in lines 895-896 we just “throw away” the outer logarithmi... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"paVKlt_b6p",
"avVqoH1FuIP",
"vBfwlBg8u0",
"ZNUUvMdPzVw",
"vXYleHMb2Tk",
"9zjO2jyoLy7",
"nips_2022_n3lr7GdcbyD",
"nips_2022_n3lr7GdcbyD",
"nips_2022_n3lr7GdcbyD",
"nips_2022_n3lr7GdcbyD"
] |
nips_2022_mTra5BIUyRV | Fair Ranking with Noisy Protected Attributes | The fair-ranking problem, which asks to rank a given set of items to maximize utility subject to group fairness constraints, has received attention in the fairness, information retrieval, and machine learning literature. Recent works, however, observe that errors in socially-salient (including protected) attributes of items can significantly undermine fairness guarantees of existing fair-ranking algorithms and raise the problem of mitigating the effect of such errors. We study the fair-ranking problem under a model where socially-salient attributes of items are randomly and independently perturbed. We present a fair-ranking framework that incorporates group fairness requirements along with probabilistic information about perturbations in socially-salient attributes. We provide provable guarantees on the fairness and utility attainable by our framework and show that it is information-theoretically impossible to significantly beat these guarantees. Our framework works for multiple non-disjoint attributes and a general class of fairness constraints that includes proportional and equal representation. Empirically, we observe that, compared to baselines, our algorithm outputs rankings with higher fairness, and has a similar or better fairness-utility trade-off compared to baselines. | Accept | This paper looks at the fair ranking problem, a known variant of ranking where group fairness constraints (typically hard, sometimes soft) are imposed on the traditional ranking objective, but where membership of each item to be ranked in a group (aka the sensitive attribute's value associated with that item) is unknown. The paper provides strong theoretical results and, especially post-rebuttal, strong experimental backing of the setting at hand. Some assumptions are relatively strong, as surfaced by reviewers (e.g., 3sFW), but by and large reviewers believed the work to be well motivated and complete, and I agree with that. | train | [
"fIeTBbxuvvX",
"Trrcgqaz3W",
"1_GRu6Q29Vo",
"m6aKyqcBFF5",
"oREWN34emZ-",
"n4ByuRrBXN",
"mVLqFbV21IA",
"ZNJbwPIIuSI",
"8KJtlUziiZtt",
"h0_Z_vV1eyG",
"EVys39otf9e",
"y1bSBA-5wOi",
"camaM5Q4lFb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the thorough responses and corresponding revisions - esp. the new experiments :) ",
" Thank you for your response. I especially appreciate your explanation for why you use upper bounds instead of lower bounds.",
" Thank you for the detailed response, providing further clarifications and additional ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"mVLqFbV21IA",
"m6aKyqcBFF5",
"oREWN34emZ-",
"camaM5Q4lFb",
"n4ByuRrBXN",
"y1bSBA-5wOi",
"ZNJbwPIIuSI",
"EVys39otf9e",
"h0_Z_vV1eyG",
"nips_2022_mTra5BIUyRV",
"nips_2022_mTra5BIUyRV",
"nips_2022_mTra5BIUyRV",
"nips_2022_mTra5BIUyRV"
] |
nips_2022_A0WsxAzR_yn | A consistently adaptive trust-region method | Adaptive trust-region methods attempt to maintain strong convergence guarantees without depending on conservative estimates of problem properties such as Lipschitz constants. However, on close inspection, one can show existing adaptive trust-region methods have theoretical guarantees with severely suboptimal dependence on problem properties such as the Lipschitz constant of the Hessian. For example, TRACE, developed by Curtis et al., obtains an $O(\Delta_f L^{3/2} \epsilon^{-3/2}) + \tilde{O}(1)$ iteration bound, where $L$ is the Lipschitz constant of the Hessian. Compared with the optimal $O(\Delta_f L^{1/2} \epsilon^{-3/2})$ bound, this is suboptimal with respect to $L$. We present the first adaptive trust-region method which circumvents this issue and requires at most $O( \Delta_f L^{1/2} \epsilon^{-3/2}) + \tilde{O}(1)$ iterations to find an $\epsilon$-approximate stationary point, matching the optimal iteration bound up to an additive logarithmic term. Our method is a simple variant of a classic trust-region method and in our experiments performs competitively with both ARC and a classical trust-region method. | Accept | The paper proposes a new adaptive trust-region method. Because most of the reviewers think the paper is interesting, I recommend acceptance. | train | [
"pkFESy4osM",
"Anv6yvkkEE",
"tC2LWb6q_WI",
"4kujnRa9d-L",
"92cf4NoM8BT",
"-SQeruGYjb_",
"R_RODlcIyRV",
"Jgbzf0coQAc",
"6iQuOr9HXK6",
"2hZl3wQTahv"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer thanks the authors for the responses, I think the contribution is clear, I changed my score accordingly.",
" Another piece of feedback they gave is that we did not discuss the trust-region literature in sufficient detail, particularly in the introduction. Therefore, we will add a paragraph along th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"92cf4NoM8BT",
"R_RODlcIyRV",
"6iQuOr9HXK6",
"2hZl3wQTahv",
"6iQuOr9HXK6",
"Jgbzf0coQAc",
"nips_2022_A0WsxAzR_yn",
"nips_2022_A0WsxAzR_yn",
"nips_2022_A0WsxAzR_yn",
"nips_2022_A0WsxAzR_yn"
] |
nips_2022_w4X7GLThiuJ | Alternating Mirror Descent for Constrained Min-Max Games | In this paper we study two-player bilinear zero-sum games with constrained strategy spaces. A natural instance of such constraints arises when mixed strategies are used, which correspond to a probability simplex constraint. We propose and analyze the alternating mirror descent algorithm, in which each player takes turns acting according to the mirror descent algorithm for constrained optimization. We interpret alternating mirror descent as an alternating discretization of a skew-gradient flow in the dual space, and use tools from convex optimization and a modified energy function to establish an $O(K^{-2/3})$ bound on its average regret after $K$ iterations. This quantitatively verifies the algorithm's better behavior than the simultaneous version of the mirror descent algorithm, which is known to diverge and yields an $O(K^{-1/2})$ average regret bound. In the special case of an unconstrained setting, our results recover the behavior of the alternating gradient descent algorithm for zero-sum games, which was studied in (Bailey et al., COLT 2020). | Accept | The paper studies the regret of alternating mirror descent in constrained bilinear 2-player zero-sum games where each player can play within a compact and convex set. It is shown through a suitable reduction to the skew gradient flow dynamics that the average iterates converge to a Nash equilibrium at a speed of K^{-2/3}, where K is the number of iterations. This work builds on previous work [1], in which the authors prove a constant regret for the class of two-player zero-sum games in the unconstrained setting.
The reviewers agreed during the post-discussion that this is a non-trivial paper that extends known results to the constrained setting in some meaningful way (yet, paying the price of some extra assumptions and limitations on the algorithms played by the two players).
To the authors: Please follow the reviewers' suggestions to improve the presentation (e.g., more context and more discussion of the limitations of these results). | train | [
"h0xu4jGSu2",
"Shxdnd0Jnf",
"ex3CFlC_8tI",
"iUfFOG1Psy2",
"i5dW0EfWWGC",
"iiQGFIGaaST",
"GQMQpxqfoN",
"V7BQW5on8Xh"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am increasing my score to 6. While I do still think the Legendre assumption is restrictive, this work studies a very natural update that merits this study.",
" Thank you for your response.\nI maintain my score. I would suggest that the authors clarify which functions can be used in each case. ",
" The auth... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"ex3CFlC_8tI",
"iUfFOG1Psy2",
"V7BQW5on8Xh",
"GQMQpxqfoN",
"iiQGFIGaaST",
"nips_2022_w4X7GLThiuJ",
"nips_2022_w4X7GLThiuJ",
"nips_2022_w4X7GLThiuJ"
] |
nips_2022_BOQr80FBX_ | Semi-Supervised Video Salient Object Detection Based on Uncertainty-Guided Pseudo Labels | Semi-Supervised Video Salient Object Detection (SS-VSOD) is challenging because of the lack of temporal information in video sequences caused by sparse annotations. Most works address this problem by generating pseudo labels for unlabeled data. However, error-prone pseudo labels negatively affect the VSOD model. Therefore, a deeper insight into pseudo labels should be developed. In this work, we aim to explore 1) how to utilize the incorrect predictions in pseudo labels to guide the network to generate more robust pseudo labels and 2) how to further screen out the noise that still exists in the improved pseudo labels. To this end, we propose an Uncertainty-Guided Pseudo Label Generator (UGPLG), which makes full use of inter-frame information to ensure the temporal consistency of the pseudo labels and improves the robustness of the pseudo labels by strengthening the learning of difficult scenarios. Furthermore, we also introduce adversarial learning to address the noise problems in pseudo labels, guaranteeing the positive guidance of pseudo labels during model training. Experimental results demonstrate that our methods outperform the existing semi-supervised method and some fully-supervised methods across five public benchmarks: DAVIS, FBMS, MCL, ViSal, and SegTrack-V2. | Accept | In this paper the authors propose an approach for semi-supervised salient object detection using a combination of pseudo-label prediction and adversarial training, showing improved results on a number of benchmarks. Some concerns about more detailed analysis of aspects of the approach were raised, but seemed to be mostly addressed in the authors’ response. Some reviewers also expressed concerns about whether there were sufficient technical contributions, but seemed satisfied enough with the analysis and strong positive results. | train | [
"3vSRoICKB-6",
"eTyRKF_kpY",
"1ZOQ412WUK",
"BO8TaCgRv7",
"8LCRLPWmanA",
"clylwd1KQvd",
"IjlIht9hTi9",
"eFjGazEoh4I",
"VgEL9j8JVzQ",
"MQO6FW-pTyz"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer, we greatly appreciate the time and effort you have dedicated to providing insightful feedback. As per your advice, we will highlight the UADDM module in more detail and present that as a major contribution in the final version. On the other hand, we agree with you that optical flow along with an oc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"1ZOQ412WUK",
"BO8TaCgRv7",
"IjlIht9hTi9",
"8LCRLPWmanA",
"MQO6FW-pTyz",
"VgEL9j8JVzQ",
"eFjGazEoh4I",
"nips_2022_BOQr80FBX_",
"nips_2022_BOQr80FBX_",
"nips_2022_BOQr80FBX_"
] |
nips_2022_suplyBhTDjC | Beyond Time-Average Convergence: Near-Optimal Uncoupled Online Learning via Clairvoyant Multiplicative Weights Update | In this paper we provide a novel and simple algorithm, Clairvoyant Multiplicative Weights Updates (CMWU), for convergence to \textit{Coarse Correlated Equilibria} (CCE) in general games. CMWU effectively corresponds to the standard MWU algorithm but where all agents, when updating their mixed strategies, use the payoff profiles based on tomorrow's behavior, i.e., the agents are clairvoyant. CMWU achieves constant regret of $\ln(m)/\eta$ in all normal-form games with $m$ actions and fixed step-size $\eta$. Although CMWU encodes in its definition a fixed point computation, which in principle could result in dynamics that are neither computationally efficient nor uncoupled, we show that both of these issues can be largely circumvented. Specifically, as long as the step-size $\eta$ is upper bounded by $\frac{1}{(n-1)V}$, where $n$ is the number of agents and $[0,V]$ is the payoff range, then the CMWU updates can be computed linearly fast via a contraction map. This implementation results in an uncoupled online learning dynamic that admits an $O(\log T)$-sparse sub-sequence where each agent experiences at most $O(nV\log m)$ regret. This implies that the CMWU dynamics converge with rate $O(nV \log m \log T / T)$ to a CCE, improving on the current state-of-the-art convergence rate. | Accept | This paper proposes a new uncoupled method for computing CCE of a general-sum normal-form game with a SOTA rate.
As the reviewers point out, the biggest issue of this work is that the proposed algorithm is not no-regret when considering all iterates. We strongly encourage the authors to make this clearer and avoid making misleading/confusing statements. Reviewer k1YV's concern about the "synchronization" requirement is also a very valid point, and the authors' response did not really address this concern. We encourage the authors to also make this clearer in the revision. | train | [
"Utqdv-6Q99s",
"7wvvrfYVa4l",
"ki-z8wys2Ic",
"ZvlkK93HZA",
"xhbzYAt_319",
"6377sQS-rdp",
"iTcVMe1nTnU",
"pcQXDjFK7hq",
"EDt9cYGClGo",
"W9MhETALXFmK",
"wV2_DnskznX",
"cVWzrL-Pdqf",
"EKBwZkRNXkw",
"Tdd3_G-Jkxd"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers, thank you for your time and effort reviewing our paper.\n\nWe would like to update you that in accordance with the wishes of Reviewer CoXh, we have performed experiments comparing CMWU and the state-of-the-art implementation of OMWU and observed experimentally that as our theory suggests, CMWU all... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"nips_2022_suplyBhTDjC",
"ki-z8wys2Ic",
"pcQXDjFK7hq",
"xhbzYAt_319",
"W9MhETALXFmK",
"nips_2022_suplyBhTDjC",
"Tdd3_G-Jkxd",
"EKBwZkRNXkw",
"cVWzrL-Pdqf",
"wV2_DnskznX",
"nips_2022_suplyBhTDjC",
"nips_2022_suplyBhTDjC",
"nips_2022_suplyBhTDjC",
"nips_2022_suplyBhTDjC"
] |
nips_2022_w6fj2r62r_H | Torsional Diffusion for Molecular Conformer Generation | Molecular conformer generation is a fundamental task in computational chemistry. Several machine learning approaches have been developed, but none have outperformed state-of-the-art cheminformatics methods. We propose torsional diffusion, a novel diffusion framework that operates on the space of torsion angles via a diffusion process on the hypertorus and an extrinsic-to-intrinsic score model. On a standard benchmark of drug-like molecules, torsional diffusion generates superior conformer ensembles compared to machine learning and cheminformatics methods in terms of both RMSD and chemical properties, and is orders of magnitude faster than previous diffusion-based models. Moreover, our model provides exact likelihoods, which we employ to build the first generalizable Boltzmann generator. Code is available at https://github.com/gcorso/torsional-diffusion. | Accept | This paper proposes a diffusion model for molecular conformation prediction in the space of torsion angles. The idea is new, and the experimental results are strong. All reviewers like this paper. | train | [
"mdwbg8-iIq",
"4xZ1We3jtB",
"OHgQIHtqCwu",
"ZXimuynJ_YY",
"eTl4eNd2IDw",
"UA_XWg5FxJ",
"C4w86ngDIBV",
"t7WDU3dgBcT",
"38vhvK5vsOa",
"IxGFEGAsUTj",
"B0duXulAweJ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the reply, it clears most of my concern. I also take other reviewers opinion as input, since all of the reviewers give positive feed back and I am not super confidence in this area, I will keep my current score.",
" Thank you for addressing all my questions! I learned a lot from your comments. Hoping... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"eTl4eNd2IDw",
"UA_XWg5FxJ",
"B0duXulAweJ",
"IxGFEGAsUTj",
"38vhvK5vsOa",
"C4w86ngDIBV",
"t7WDU3dgBcT",
"nips_2022_w6fj2r62r_H",
"nips_2022_w6fj2r62r_H",
"nips_2022_w6fj2r62r_H",
"nips_2022_w6fj2r62r_H"
] |
nips_2022_foMcvT6R3VT | Can Variance-Based Regularization Improve Domain Generalization? | If there is no prior information, domain generalization with only access to multi-domain training data relies on guessing what the test data is. In this work, we consider the mild assumptions that there is a distribution over domains and that the out-of-distribution data is generated by a shift of the domain distribution. We study a domain-level variance-based regularizer. We show that the variance-regularized method can locally approximate group distributionally robust optimization and embed the local information into the objective function as a weighting scheme. By taking the empirical domain distribution as an anchor of the location, we propose a weighting correction scheme and provide theoretical guarantees of in-distribution generalization. Compared to Empirical Risk Minimization, we prove the potential benefits of our proposed method but do not observe consistent improvements in general. | Reject | This paper has been widely discussed between reviewers and authors. Unfortunately, even after the reviewers updated their scores, the paper was still judged to be below the acceptance threshold. I encourage the authors to take into account the reviewers' comments while preparing the next iteration of their work. | train | [
"OrBaAGLKRao",
"Djd1IYXPWky",
"7jcOjHdfaNM",
"HIBCQ5p8D3",
"LLVRKN4cs-X",
"5RRPRqDOuNh",
"fJYry1mygyn",
"_z6vJ1tNOYr",
"mznAkDr88qP",
"GalqXc9ea0p",
"sPDH25iSUHl",
"ZdSWFl3w92",
"51SIFOpwS89",
"Zm-HI19jHB6",
"TWfKY2_ePE8",
"KlUE07gZmIJ",
"h2P09l9YBi2",
"2ReNegmqCGQ",
"l3bQH4UST-h... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for your further comments. We are sorry that our response did not provide you with a full discussion of the theory and experimental results. We hope our further response can address your concerns.\n\n> It is confusing why estimating the empirical domain distribution with the statistical information of t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
2
] | [
"Djd1IYXPWky",
"HIBCQ5p8D3",
"51SIFOpwS89",
"h2P09l9YBi2",
"5RRPRqDOuNh",
"sPDH25iSUHl",
"_z6vJ1tNOYr",
"l3bQH4UST-h",
"sPDH25iSUHl",
"TWfKY2_ePE8",
"ZdSWFl3w92",
"Zm-HI19jHB6",
"A-L9sxdXEu",
"vEP5gNn6Z-E",
"KlUE07gZmIJ",
"l3bQH4UST-h",
"2ReNegmqCGQ",
"nips_2022_foMcvT6R3VT",
"ni... |
nips_2022_Hb37zNk14e5 | Learning to Find Proofs and Theorems by Learning to Refine Search Strategies: The Case of Loop Invariant Synthesis | We propose a new approach to automated theorem proving where an AlphaZero-style agent is self-training to refine a generic high-level expert strategy expressed as a nondeterministic program. An analogous teacher agent is self-training to generate tasks of suitable relevance and difficulty for the learner. This allows leveraging minimal amounts of domain knowledge to tackle problems for which training data is unavailable or hard to synthesize. As a specific illustration, we consider loop invariant synthesis for imperative programs and use neural networks to refine both the teacher and solver strategies.
| Accept | This work presents an approach to learning to prove loop-invariant theorems, organized around jointly training teacher and solver models. Reviewers praised its originality and creativity, as well as the quality of the software artifacts produced. Both the ideas and the code could be valuable for the community.
At the same time, there is a consensus (which I agree with) that the work oversells itself by claiming to be a general framework for learning to prove theorems. It might be true that you could in principle apply the framework to other kinds of theorems, but that would have to be shown empirically. Ditto for the claim that this can be applied to program synthesis, which the paper makes in the very first sentence of the abstract.
Given these overreaches, the camera ready version of this paper _needs_ to soften its claims about its broad applicability for theorem proving and program synthesis. The authors also need to change the title so that it has "Loop Invariant" in it or something similar, which they are receptive to in the rebuttal. The paper can talk about these loftier ambitions in the conclusion, but should clearly demarcate the actual extent of the empirical results. | test | [
"1xPO7zrm4ne",
"G43YOHLcUOu",
"rBapgDbScr",
"xYIF4QV82N0",
"o1nd_8u8Ss",
"VpEjMRaGr6",
"0r0dJ5214U4",
"qfpG2KQ2RaY"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank you for an interesting and helpful review. We are glad to read that you appreciated our \"uniform AlphaZero-style approach\" to generating and solving problems along with our engineering work.\n\nWe provide a detailed answer to your question about the relevance of our approach to general theorem proving ... | [
-1,
-1,
-1,
-1,
-1,
4,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"qfpG2KQ2RaY",
"0r0dJ5214U4",
"VpEjMRaGr6",
"o1nd_8u8Ss",
"nips_2022_Hb37zNk14e5",
"nips_2022_Hb37zNk14e5",
"nips_2022_Hb37zNk14e5",
"nips_2022_Hb37zNk14e5"
] |
nips_2022_FJVB_tkiWpw | ZooD: Exploiting Model Zoo for Out-of-Distribution Generalization | Recent advances in large-scale pre-training have shown the great potential of leveraging a large set of Pre-Trained Models (PTMs) for improving Out-of-Distribution (OoD) generalization, for which the goal is to perform well on possible unseen domains after fine-tuning on multiple training domains. However, maximally exploiting a zoo of PTMs is challenging since fine-tuning all possible combinations of PTMs is computationally prohibitive while accurate selection of PTMs requires tackling the possible data distribution shift for OoD tasks. In this work, we propose ZooD, a paradigm for PTMs ranking and ensemble with feature selection. Our proposed metric ranks PTMs by quantifying inter-class discriminability and inter-domain stability of the features extracted by the PTMs in a leave-one-domain-out cross-validation manner. The top-K ranked models are then aggregated for the target OoD task. To avoid accumulating noise induced by model ensemble, we propose an efficient variational EM algorithm to select informative features. We evaluate our paradigm on a diverse model zoo consisting of 35 models for various OoD tasks and demonstrate: (i) model ranking is better correlated with fine-tuning ranking than previous methods, and is up to 9859x faster than brute-force fine-tuning; (ii) OoD generalization after model ensemble with feature selection outperforms the state-of-the-art methods, and the accuracy on the most challenging task, DomainNet, is improved from 46.5\% to 50.6\%. Furthermore, we provide the fine-tuning results of 35 PTMs on 7 OoD datasets, hoping to support research on model zoos and OoD generalization. Code will be available at \href{https://gitee.com/mindspore/models/tree/master/research/cv/zood}{https://gitee.com/mindspore/models/tree/master/research/cv/zood}. | Accept | The paper proposes a new method for leveraging a zoo of pre-trained models for improving OOD generalization in an efficient way. The reviewers agree that the results are strong. | train | [
"aZVihJQFPdb",
"NEaH4RkDF2t",
"o728PbLF9VH",
"UBcWuYjPu76",
"ukK56WnuC7c",
"F07_X3AdSZW",
"4CI8C0YRdy",
"wqSfFwsh_jw",
"yYTQy2_4Kvb",
"o1l4247uv68",
"NN3A2qlkqWD"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nWe hope that our responses have addressed the concerns raised in your review. If there remain concerns or if you have more questions, we are more than happy to provide additional clarification. Thank you so much for your time!\n\nSincerely,\n\nAuthors of Paper 6146",
" We thank reviewers for ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2022_FJVB_tkiWpw",
"nips_2022_FJVB_tkiWpw",
"UBcWuYjPu76",
"NN3A2qlkqWD",
"F07_X3AdSZW",
"o1l4247uv68",
"wqSfFwsh_jw",
"yYTQy2_4Kvb",
"nips_2022_FJVB_tkiWpw",
"nips_2022_FJVB_tkiWpw",
"nips_2022_FJVB_tkiWpw"
] |
nips_2022_ex60CCi5GS | Debiasing Graph Neural Networks via Learning Disentangled Causal Substructure | Most Graph Neural Networks (GNNs) predict the labels of unseen graphs by learning the correlation between the input graphs and labels. However, by presenting a graph classification investigation on training graphs with severe bias, we surprisingly discover that GNNs always tend to exploit spurious correlations to make decisions, even when the causal correlation always exists. This implies that existing GNNs trained on such biased datasets will suffer from poor generalization capability. By analyzing this problem from a causal view, we find that disentangling and decorrelating the causal and bias latent variables from the biased graphs are both crucial for debiasing. Inspired by this, we propose a general disentangled GNN framework to learn the causal substructure and bias substructure, respectively. Particularly, we design a parameterized edge mask generator to explicitly split the input graph into causal and bias subgraphs. Then two GNN modules, supervised by causal/bias-aware loss functions respectively, are trained to encode the causal and bias subgraphs into their corresponding representations. With the disentangled representations, we synthesize counterfactual unbiased training samples to further decorrelate the causal and bias variables. Moreover, to better benchmark the severe bias problem, we construct three new graph datasets, which have controllable bias degrees and are easier to visualize and explain. Experimental results demonstrate that our approach achieves superior generalization performance over existing baselines. Furthermore, owing to the learned edge mask, the proposed model has appealing interpretability and transferability. | Accept | The paper proposes a GNN framework where the causal substructure and the bias substructure are disentangled by a mask generator and separate GNNs. SOTA results on artificially generated, severely biased data are reported.
Reviewers raised concerns mainly about the novelty (e.g., relative to [2, 20]), insufficient experiments, missing baselines, and experiments only on artificially generated data based on images (not really graph data). The authors addressed those concerns mostly well.
Although two reviewers kept their rejecting scores, they did not raise further criticisms after the rebuttal, and I find no good reason for rejection in their reviews.
My concern that remains after the rebuttal is about the data. The authors argue that existing benchmark graph datasets don't have severe bias, which is why they generated artificially biased data by using image data, which should not necessarily be treated as graphs. This makes me wonder if the proposed method is really useful in practice. Namely, are there some application scenarios where severe bias is expected on graph data? If so, readers can expect that the authors would prepare real-world graph data that show severe bias (without manipulation), on which the proposed method outperforms the SOTA methods. Can readers expect this in the authors' near-future follow-up work? I strongly recommend that the authors discuss this point in the final version.
| train | [
"_ls3K9eQKxF",
"dLlD7_OKeM8",
"4HuUSWj2-5",
"SdnWpXiGhZn",
"yjnkOl9NvjM",
"dYX_aoKZ4hT",
"WAmRNfRRD3p",
"dPakNf2nWOO",
"Pg_aPruHW_",
"8UkOHsjCiX3",
"xtEzDvBrWaV",
"PUCUmzRQOGh",
"SfbTsNfNk80",
"J3rMyvwYgTs",
"YQ00WxSKcDo",
"JnB1yJiGhyU",
"QMqQaiZNgFs",
"eGid1H2Z18v",
"Q5UXFjom1h"... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer TFtC:\n\nWe thank you for taking the time to provide critical comments. We have provided detailed responses that we believe have covered your concerns. As this is the last day for discussion, we kindly remind you that could you check out our reply. We hope to further discuss with you whether or not ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5,
4
] | [
"JnB1yJiGhyU",
"QMqQaiZNgFs",
"SdnWpXiGhZn",
"dYX_aoKZ4hT",
"Q5UXFjom1h",
"eGid1H2Z18v",
"QMqQaiZNgFs",
"QMqQaiZNgFs",
"QMqQaiZNgFs",
"QMqQaiZNgFs",
"JnB1yJiGhyU",
"JnB1yJiGhyU",
"JnB1yJiGhyU",
"JnB1yJiGhyU",
"nips_2022_ex60CCi5GS",
"nips_2022_ex60CCi5GS",
"nips_2022_ex60CCi5GS",
"... |
nips_2022_nDemfqKHTpK | GraB: Finding Provably Better Data Permutations than Random Reshuffling | Random reshuffling, which randomly permutes the dataset each epoch, is widely adopted in model training because it yields faster convergence than with-replacement sampling. Recent studies indicate greedily chosen data orderings can further speed up convergence empirically, at the cost of using more computation and memory. However, greedy ordering lacks theoretical justification and has limited utility due to its non-trivial memory and computation overhead. In this paper, we first formulate an example-ordering framework named \emph{herding} and answer affirmatively that SGD with herding converges at the rate $O(T^{-2/3})$ on smooth, non-convex objectives, faster than the $O(n^{1/3}T^{-2/3})$ obtained by random reshuffling, where $n$ denotes the number of data points and $T$ denotes the total number of iterations. To reduce the memory overhead, we leverage discrepancy minimization theory to propose an online Gradient Balancing algorithm (GraB) that enjoys the same rate as herding, while reducing the memory usage from $O(nd)$ to just $O(d)$ and computation from $O(n^2)$ to $O(n)$, where $d$ denotes the model dimension. We show empirically on applications including MNIST, CIFAR10, WikiText and GLUE that GraB can outperform random reshuffling in terms of both training and validation performance, and even outperform state-of-the-art greedy ordering while reducing memory usage over $100\times$. | Accept | This paper improves the Random Reshuffling method via a herding procedure aimed at finding a better permutation of the training dataset. The authors start by providing intuitive explanations based on a practically ineffective algorithm, and subsequently propose a gradient balancing technique; this enjoys the favorable properties of the herding procedure but is better in practice, as it requires $O(n)$ times less memory and computation, where $n$ is the number of data points. The authors provide convergence guarantees for Random Reshuffling empowered with their herding procedure for smooth non-convex problems, under the standard L-smoothness and PL assumptions. Importantly, their theory points to an improvement from $O(n^{1/3}T^{-2/3})$ to $O(T^{-2/3})$. This result means that the new method is $O(n^{1/3})$ times faster, which is significant.
The reviewers were supportive of the paper, and described its various contributions in the following positive ways:
- The authors conducted a wide range of experiments with logistic regression and popular deep learning architectures. Plots show the practical advantage of the new methods compared with other permutation-based algorithms.
- The paper makes a solid contribution to an important problem in machine learning.
- The development of an algorithm which can find provably better data permutations without storing all the gradients from the previous epoch enables the scaling of such techniques to large-scale problems.
- The paper is well-written and the flow of ideas is generally easy to follow.
- This paper has a good structure and it is well-written.
- This paper studies 6 algorithms, which is quite informative.
- All assumptions are clearly stated and described in remarks.
- The idea of finding a better permutation is not new, and the authors mention it in the literature review. However, the proposed general framework and efficient implementation are interesting.
- The theory seems to be sound. I checked the appendix briefly and did not find problems.
- This paper has a lot of experiments with different large neural networks showing how this method works in practice.
- This paper provides a practical method for choosing orderings that theoretically converges faster than random reshuffling.
- Given that the algorithm overhead seems quite low (an additional vector sum at each iteration), this method should be applicable in many cases. Therefore, I believe this to be an important contribution both theoretically and practically and interesting to the community.
- The experiments are done on different tasks encompassing both image classification and NLP.
- Overall, the writing is clear and the paper is well organized so it is easy to follow.
This is a clear acceptance case in my view; the authors were responsive, and the rebuttal and the subsequent discussion clarified and addressed most issues. I would request the authors to make sure all criticism is properly addressed in the camera-ready version of the paper.
Congratulations on a nice paper!
AC
| train | [
"pP9t25nFN_a",
"k62vPUyGHZ",
"9Ixt269zriU",
"P_2ylBGbuKk",
"igEHOIwdMgb",
"MinNqEZgBm",
"x7JQbm5HZTU",
"gi1h1rRyyci",
"_dIcusIff4J",
"053lgti1fkn",
"UV5W8kYtnSb",
"yImBJjAo59p",
"R8GYZ-BODx",
"veclbtr_Yi2"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors,\n\nThank you very much for your replies.\n\n> Thank you for this careful observation! Note that in line 489, the infinite norm can be naturally bounded by the L2 norm on the RHS of the inequality. And so that the norm can be moved to the LHS, by setting the learning rate appropriately, it gives us l... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"gi1h1rRyyci",
"P_2ylBGbuKk",
"MinNqEZgBm",
"UV5W8kYtnSb",
"_dIcusIff4J",
"053lgti1fkn",
"nips_2022_nDemfqKHTpK",
"veclbtr_Yi2",
"053lgti1fkn",
"R8GYZ-BODx",
"yImBJjAo59p",
"nips_2022_nDemfqKHTpK",
"nips_2022_nDemfqKHTpK",
"nips_2022_nDemfqKHTpK"
] |
nips_2022_GH4q4WmGAsl | Enhancing Safe Exploration Using Safety State Augmentation | Safe exploration is a challenging and important problem in model-free reinforcement learning (RL). Often the safety cost is sparse and unknown, which unavoidably leads to constraint violations - a phenomenon ideally to be avoided in safety-critical applications. We tackle this problem by augmenting the state-space with a safety state, which is nonnegative if and only if the constraint is satisfied. The value of this state also serves as a distance toward constraint violation, while its initial value indicates the available safety budget. This idea allows us to derive policies for scheduling the safety budget during training. We call our approach Simmer (Safe policy IMproveMEnt for RL) to reflect the careful nature of these schedules. We apply this idea to two safe RL problems: RL with constraints imposed on an average cost, and RL with constraints imposed on a cost with probability one. Our experiments suggest that "simmering" a safe algorithm can improve safety during training for both settings. We further show that Simmer can stabilize training and improve the performance of safe RL with average constraints. | Accept | The authors develop a novel method for safe RL extending the work of Sootla et al. (https://proceedings.mlr.press/v162/sootla22a/sootla22a.pdf) to deal with both expected and probability-one constraints, demonstrating the utility of safety state augmentation in both scenarios. The authors validate their approach empirically and conduct careful experiments on several benchmark domains, showing gains from their approach relative to prior work.
Reviewers pointed out several issues in presentation and novelty relative to prior work that the authors addressed adequately in the rebuttal phase. Hence, I recommend acceptance. | train | [
"QfGf7ZveNDB",
"whFWo0ql0As",
"KD3SwCyQmZc",
"lVuyO8gBX0A",
"rgNtDMLUQaU",
"ZGk5Jjf-EkI",
"A7N8oxfBPII",
"S5RLVnn0gle",
"fixLfCMGWRr",
"NUflfOfygX8",
"Zw_Uezw761yu",
"Rd0obr6WFdQ",
"b5aiLPczRNC",
"KRLpsB44jIu",
"cFcvt862Z6s",
"AbDWXxxjZyM",
"r5I2QoXXb-",
"Ry4j1xrg_FR",
"mGdhBc0_y... | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" One of the reviewers was unhappy that we had 10 pages in our submission and we had to temporarily remove a figure with environments and 4 out of 6 figures with learning curves for the safety gym benchmark.\n\nApologies for the last minute, but hopefully a temporary change. ",
" We thank the reviewer for their r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3
] | [
"rgNtDMLUQaU",
"KD3SwCyQmZc",
"b5aiLPczRNC",
"rgNtDMLUQaU",
"S5RLVnn0gle",
"A7N8oxfBPII",
"KRLpsB44jIu",
"Zw_Uezw761yu",
"NUflfOfygX8",
"AbDWXxxjZyM",
"ZWJr_Gw0TUY",
"dDIPq0vSkq7",
"nips_2022_GH4q4WmGAsl",
"3v_6xperbhZ",
"3v_6xperbhZ",
"sgNyVKvpeOE",
"sgNyVKvpeOE",
"dDIPq0vSkq7",
... |
nips_2022_jdsmBlsHGF2 | Generalization Error Bounds on Deep Learning with Markov Datasets | In this paper, we derive upper bounds on generalization errors for deep neural networks with Markov datasets. These bounds are developed based on Koltchinskii and Panchenko's approach for bounding the generalization error of combined classifiers with i.i.d. datasets. The development of new symmetrization inequalities in high-dimensional probability for Markov chains is a key element in our extension, where the spectral gap of the infinitesimal generator of the Markov chain serves as a key parameter in these inequalities. We also propose a simple method to convert these bounds and other similar ones in traditional deep learning and machine learning to Bayesian counterparts for both i.i.d. and Markov datasets. Extensions to $m$-order homogeneous Markov chains such as AR and ARMA models and mixtures of several Markov data sources are given. | Accept | The reviewers support acceptance of the manuscript based on the quality of the results, but have expressed concerns about the organisation of the results, which appear to be a result of the page length limits imposed on NeurIPS submissions. The reviewers would welcome publication if these organisational issues were resolved, but are unsure if the authors will do so. I would encourage acceptance as the authors have assured the reviewers that Section 3 will be compressed and Section 4 expanded upon so as to emphasise the extension to high-order Markov chains; clearly Section 4 shouldn't be one paragraph and I appreciate the authors would likely have preferred to put more of the associated supplementary material here. Potentially abridging the rather lengthy Section 2 would be beneficial, with Section 2 edited to take up far less space as most of (7) to (17) need not be stand-alone numbered equations.
"omyBjY7M8Kh",
"HxbRNxD7f4s",
"e_Xc07LXaRAf",
"gQyrgGvFoXD",
"_tkSpIYazJV",
"ES9Xoyi3tJu"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for reviewing our paper. However, your concern related to the stationary assumption looks not totally correct.\n\nIn our opinion, the Markov model is used in a vast application in machine learning. For example, it models time-series data in machine learning, or hidden Markov models (a special... | [
-1,
-1,
-1,
8,
7,
3
] | [
-1,
-1,
-1,
4,
2,
4
] | [
"ES9Xoyi3tJu",
"_tkSpIYazJV",
"gQyrgGvFoXD",
"nips_2022_jdsmBlsHGF2",
"nips_2022_jdsmBlsHGF2",
"nips_2022_jdsmBlsHGF2"
] |
nips_2022_EFnI8Qc--jE | Posterior Matching for Arbitrary Conditioning | Arbitrary conditioning is an important problem in unsupervised learning, where we seek to model the conditional densities $p(\mathbf{x}_u \mid \mathbf{x}_o)$ that underlie some data, for all possible non-intersecting subsets $o, u \subset \{1, \dots , d\}$. However, the vast majority of density estimation only focuses on modeling the joint distribution $p(\mathbf{x})$, in which important conditional dependencies between features are opaque. We propose a simple and general framework, coined Posterior Matching, that enables Variational Autoencoders (VAEs) to perform arbitrary conditioning, without modification to the VAE itself. Posterior Matching applies to the numerous existing VAE-based approaches to joint density estimation, thereby circumventing the specialized models required by previous approaches to arbitrary conditioning. We find that Posterior Matching is comparable or superior to current state-of-the-art methods for a variety of tasks with an assortment of VAEs (e.g.~discrete, hierarchical, VaDE). | Accept | The paper presents a method for conditional generation of part of the data variable given the rest, where the joint data distribution is defined by a trained VAE. The partition can be made arbitrary. A model standing alone from the VAE is trained. An extension for faster active feature acquisition is also presented. All reviewers acknowledged the significance of the task and the simplicity/flexibility and empirical effectiveness of the method. Notably, multi-modality generation results are seen. Reviewers also expressed concerns about the notation, clarity (e.g., insufficient emphasis on requiring fully observed data), and conceptual and empirical comparison with similar methods. The authors have addressed most of them. In all, this paper makes an interesting contribution to the community.
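For readers who want the mechanism at a glance, a minimal sketch of the matching objective follows; `vae_encoder` and `partial_encoder` are assumed to return torch `Normal` distributions, and their interfaces (as well as the mask convention) are illustrative assumptions rather than the paper's exact design.

```python
import torch

def posterior_matching_loss(vae_encoder, partial_encoder, x, mask):
    """One plausible form of the objective: make the partially-observed
    encoder q(z | x_o) assign high likelihood to samples from the frozen
    VAE posterior q(z | x)."""
    with torch.no_grad():
        z = vae_encoder(x).sample()                 # z ~ q(z | x)
    q_partial = partial_encoder(x * mask, mask)     # q(z | x_o)
    return -q_partial.log_prob(z).sum(-1).mean()
```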
The authors are encouraged to include the discussions with reviewers in the paper (particularly, the relation and comparison with the similar methods mentioned). It would make Theorem 3.1 more insightful if the authors could explain how the second term in Eq. (4) makes the method better than directly optimizing the arbitrary conditioning likelihood. Moreover, it seems the method requires that $x_u$ and $x_o$ be conditionally independent given $z$ (so that $p(x_u | x_o, z) = p(x_u | z)$). Though Line 163 mentioned factorization, it does not address this point. | train | [
"1TCQN_BYen3",
"wEZQT-4HX1b",
"HrfXX6rZeYI",
"1xYbu1KIRVS",
"oQr7zeXIVm",
"sKmp9vVYxWR",
"3O2F-M4xnwa",
"duVFvJPgUhn",
"5T369ZoVhcx",
"YtKWkPhcSXg"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am largely satisfied with this response. I appreciate the additional clarity on limitations in the manuscript.\n\nFigure 4, second column is nice. I would suggest highlighting the multi-modality more in the draft, as I find it easy to miss.",
" We would like to thank all the reviewers for their time and info... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
5
] | [
"oQr7zeXIVm",
"nips_2022_EFnI8Qc--jE",
"YtKWkPhcSXg",
"5T369ZoVhcx",
"duVFvJPgUhn",
"3O2F-M4xnwa",
"nips_2022_EFnI8Qc--jE",
"nips_2022_EFnI8Qc--jE",
"nips_2022_EFnI8Qc--jE",
"nips_2022_EFnI8Qc--jE"
] |
nips_2022_Dh7eLBlTXb5 | Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification | Imbalanced data pose challenges for deep learning based classification models. One of the most widely-used approaches for tackling imbalanced data is re-weighting, where training samples are associated with different weights in the loss function. Most existing re-weighting approaches treat the example weights as the learnable parameter and optimize the weights on the meta set, entailing expensive bilevel optimization. In this paper, we propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view. Specifically, we view the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set. The weights of the training samples are the probability mass of the imbalanced distribution and are learned by minimizing the OT distance between the two distributions. Compared with existing methods, our proposed one disengages the dependence of the weight learning on the concerned classifier at each iteration. Experiments on image, text and point cloud datasets demonstrate that our proposed re-weighting method has excellent performance, achieving state-of-the-art results in many cases and providing a promising tool for addressing the imbalanced classification issue. | Accept | This work proposes a new re-weighting method based on OT: it views the training set as an imbalanced distribution over its samples, which is transported by OT to a balanced distribution obtained from the meta set. The paper is well written, and the proposed method is verified on multiple image and text datasets.
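To make the weight-learning step concrete, here is a minimal sketch; the entropic-OT solver, the random placeholder features, the cost normalization, and the optimizer settings are all illustrative assumptions, not the paper's exact recipe.

```python
import torch

def sinkhorn_cost(a, b, C, eps=0.05, iters=100):
    """Entropic OT cost <P, C> between histograms a (n,) and b (m,)
    with cost matrix C (n, m); differentiable w.r.t. a."""
    K = torch.exp(-C / eps)          # C is scaled below so K does not underflow
    u = torch.ones_like(a)
    for _ in range(iters):
        v = b / (K.t() @ u)
        u = a / (K @ v)
    P = u.unsqueeze(1) * K * v.unsqueeze(0)
    return (P * C).sum()

# Learn sample weights w on the training set by minimizing the OT
# distance to the uniform distribution over the balanced meta set.
train_feats = torch.randn(200, 16)   # placeholders for extracted features
meta_feats = torch.randn(50, 16)
C = torch.cdist(train_feats, meta_feats)
C = C / C.max()                      # normalize cost for numerical stability
b = torch.full((meta_feats.shape[0],), 1.0 / meta_feats.shape[0])
logits = torch.zeros(train_feats.shape[0], requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(50):
    w = torch.softmax(logits, dim=0)
    loss = sinkhorn_cost(w, b, C)
    opt.zero_grad()
    loss.backward()
    opt.step()
```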
Most of the comments were addressed by the authors in the discussion period. Three reviewers raised their scores.
The major remaining concern is regarding the relationship/novelty w.r.t. [1]. As clarified by the author, the problems, algorithms, tasks, and datasets are different. The novelty of this work is also supported by most reviewers.
-- 4F4H: "Re-weighting from optimal transport (OT) perspective is novel."
-- DJ7Q: "The idea of sample-reweighting is not new. But the method proposed by the author that learning the sample weights by minimizing the OT distance between the imbalanced training set and a balanced meta set seems reasonably novel."
-- 8X8s: "It is a very interesting idea to reweight the training samples based on OT in the training phase, this approach is therefore more flexible than the previous OT-based approaches to long-tailed problems. Whereas previous approaches have involved adjustments to the model's results in the post-processing stage, the authors go further and apply OT to the training stage and propose a corresponding solution. "
LeMk also raised the score to 4 due to "the contribution of this paper includes using OT loss in class imbalance and empirical contributions on many datasets".
Based on the above, the work meets the bar of a NeurIPS publication from the novelty point of view, and the AC recommends acceptance.
In the final version, please clearly discuss the position and contribution of this work w.r.t. [1] based on LeMk's final comments.
-- "the two papers are the same in high-level thinking but different in the tasks. [1] considers a more general setting of distribution shift and this paper focuses on class imbalance."
-- “The authors said the key difference is that this paper disengages the dependence of the weight learning on the concerned classifier while [1] does not.... in [1] the weight learning is independent of the classifier when using hidden-layer-output transformation as the non-linear transformation of data. I would suggest the authors carefully check this point to avoid any potential misunderstanding about the position of this paper with related work.”
[1] Influence-Balanced Loss for Imbalanced Visual Classification, ICCV, 2021. | train | [
"8iuwoCgyxcK",
"dVvKPRagP6U",
"9UMvcoQVAOR",
"a9jKMmeCfx",
"OwK2bvQqYY4",
"LI_d7zqeeA",
"PWEmQ_ikbDX",
"Kys1D1ApqJG",
"EypYqrxru_w",
"W1vg3paJJW",
"Fr6p5MIzsQU",
"Tiw9Ou3Dnci",
"erx4EObUCn7",
"HOGKbdWSIEI"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your time. Previously, you gave a relatively low rating to our work and raised some doubts. We have provided clarification and additional experimental results in our response and revision, which we hope can address your concerns. Please let us know if you have any further concerns. We notice tha... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"EypYqrxru_w",
"OwK2bvQqYY4",
"a9jKMmeCfx",
"W1vg3paJJW",
"LI_d7zqeeA",
"PWEmQ_ikbDX",
"HOGKbdWSIEI",
"erx4EObUCn7",
"Tiw9Ou3Dnci",
"Fr6p5MIzsQU",
"nips_2022_Dh7eLBlTXb5",
"nips_2022_Dh7eLBlTXb5",
"nips_2022_Dh7eLBlTXb5",
"nips_2022_Dh7eLBlTXb5"
] |
nips_2022_FFZYhY2z3j | Matrix Multiplicative Weights Updates in Quantum Zero-Sum Games: Conservation Laws & Recurrence | Recent advances in quantum computing and in particular, the introduction of quantum GANs, have led to increased interest in quantum zero-sum game theory, extending the scope of learning algorithms for classical games into the quantum realm. In this paper, we focus on learning in quantum zero-sum games under Matrix Multiplicative Weights Update (a generalization of the multiplicative weights update method) and its continuous analogue, Quantum Replicator Dynamics. When each player selects their state according to quantum replicator dynamics, we show that the system exhibits conservation laws in a quantum-information theoretic sense. Moreover, we show that the system exhibits Poincare recurrence, meaning that almost all orbits return arbitrarily close to their initial conditions infinitely often. Our analysis generalizes previous results in the case of classical games. | Accept | In this submission, the authors consider zero-sum games and analyze two algorithms for learning the Nash equilibrium. In this version, each player's strategy is to probabilistically prepare a "quantum pure state" that is sent to a referee who performs a joint measurement on the two quantum states to determine the payoffs of the players. This is an interesting generalization of zero-sum games to quantum computing.
While both algorithms (matrix multiplicative weights and its continuous-time analog, the quantum replicator dynamics) have been previously described in the literature, the authors prove many interesting new results, including the convergence of some observables and the Poincare recurrence of quantum replicator dynamics.
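For context, the Matrix Multiplicative Weights Update referenced here has the standard textbook form (our summary, not a quotation from the paper): each player maintains a density matrix $\rho_t$ and, after observing Hermitian payoff operators $M_1, \dots, M_t$ (signed according to whether the player maximizes or minimizes), updates
$$\rho_{t+1} \;=\; \frac{\exp\big(\eta \sum_{s=1}^{t} M_s\big)}{\operatorname{Tr}\exp\big(\eta \sum_{s=1}^{t} M_s\big)},$$
with step size $\eta > 0$; quantum replicator dynamics is the continuous-time limit of this rule.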
The consensus among reviewers is that the paper is clearly written, and definitely presents a significant extension of known results on such problems. Following the reviewers' suggestions, the authors have expanded their discussion on the relevance of this work to the general machine learning audience. | train | [
"hxL-QpDYGcC",
"RrzwfayS5t9",
"JdQLSdUNV42",
"2nuD8ITaQHK",
"qpHx-YMEuk7",
"jHaINVDo8I",
"X767OPaXaWQ",
"wVJCGj4-Lhb",
"xNahNg6CFqz",
"PjslMQwonqD",
"BoXnhqIYFu",
"Znjo8OiCvAy",
"L_tDxfwUek8",
"-zbuUhpk-x2",
"Lgvbf3d90cC",
"W_GnwX2fMuY"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their efforts to update the experiments. Single-qubit experiments were too simple but as the results also hold for multi-qubit cases, I am convinced that the merits of accepting the paper outweight its rejection. \n\n",
" Thank you for your quick response to our rebuttal and for your sup... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
5
] | [
"JdQLSdUNV42",
"2nuD8ITaQHK",
"Lgvbf3d90cC",
"wVJCGj4-Lhb",
"nips_2022_FFZYhY2z3j",
"X767OPaXaWQ",
"PjslMQwonqD",
"L_tDxfwUek8",
"L_tDxfwUek8",
"W_GnwX2fMuY",
"Lgvbf3d90cC",
"-zbuUhpk-x2",
"nips_2022_FFZYhY2z3j",
"nips_2022_FFZYhY2z3j",
"nips_2022_FFZYhY2z3j",
"nips_2022_FFZYhY2z3j"
] |
nips_2022_Leg6spUEFFf | On the non-universality of deep learning: quantifying the cost of symmetry | We prove limitations on what neural networks trained by noisy gradient descent (GD) can efficiently learn. Our results apply whenever GD training is equivariant, which holds for many standard architectures and initializations. As applications, (i) we characterize the functions that fully-connected networks can weak-learn on the binary hypercube and unit sphere, demonstrating that depth-2 is as powerful as any other depth for this task; (ii) we extend the merged-staircase necessity result for learning with latent low-dimensional structure [ABM22] to beyond the mean-field regime. Under cryptographic assumptions, we also show hardness results for learning with fully-connected networks trained by stochastic gradient descent (SGD). | Accept | This paper continues a line of work on the universality of deep learning and provides some natural assumptions under which previous results showing universality are not valid any more. This is an important contribution, as indeed previous universality results seem to rely on unnatural constructions and architectures, and this work highlights this.
There were some concerns about the practicality of the results; however, this paper is mostly theoretical and I would like to judge it in the context of previous work such as Abbe & Sandon. As far as I can see, in this context, the authors provide new insights and help to advance our understanding. | train | [
"YSTOMNLzHfN",
"-6ycu5PZMUs",
"U0tWZLHqRQ",
"RK8T407h_eF",
"uFXWsqo8pv",
"diUbfBlPjm",
"dzOQfIuPL28",
"5tmDCVRgzFd",
"am1RKRCDMbj",
"tgBjVAHUXV4",
"7fQElNys6sJ",
"NCzQVMDdLLd"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your careful reading of the document and for raising your score.",
" Thanks so much for the authors' time and effort in the reply. My original review was incorrect about the hardness argument of SGD. Indeed, the second result of this paper provides a good insight into the limitation of SGD trainin... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
2
] | [
"-6ycu5PZMUs",
"dzOQfIuPL28",
"diUbfBlPjm",
"NCzQVMDdLLd",
"7fQElNys6sJ",
"tgBjVAHUXV4",
"am1RKRCDMbj",
"nips_2022_Leg6spUEFFf",
"nips_2022_Leg6spUEFFf",
"nips_2022_Leg6spUEFFf",
"nips_2022_Leg6spUEFFf",
"nips_2022_Leg6spUEFFf"
] |
nips_2022_D87gRf2-np | Don’t fear the unlabelled: safe semi-supervised learning via simple debiasing | Semi-supervised learning (SSL) provides an effective means of leveraging unlabelled data to improve a model’s performance. Even though the domain has received a considerable amount of attention in the past years, most methods present the common drawback of lacking theoretical guarantees. Our starting point is to notice that the estimate of the risk that most discriminative SSL methods minimise is biased, even asymptotically. This bias impedes the use of standard statistical learning theory and can hurt empirical performance. We propose a simple way of removing the bias. Our debiasing approach is straightforward to implement and applicable to most deep SSL methods. We provide simple theoretical guarantees on the trustworthiness of these modified methods, without having to rely on the strong assumptions on the data distribution that SSL theory usually requires. In particular, we provide generalisation error bounds for the proposed methods. We evaluate debiased versions of different existing SSL methods, such as the Pseudo-label method and Fixmatch, and show that debiasing can compete with classic deep SSL techniques in various settings by providing better calibrated models. Additionally, we provide a theoretical explanation of the intuition of the popular SSL methods. | Reject | This paper provides an interesting generalized perspective on SSL techniques and proposes a debiasing technique that can be viewed as decreasing the variance of the risk estimate. The authors argue that this leads to estimators that are better than the purely supervised estimator under a rather weak assumption called MCAR - which assumes that the probability of a missing label is independent of covariate and label.
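For concreteness, the debiasing can be written as follows (our paraphrase of the construction, not a quotation): if the SSL objective adds a surrogate term $H(\theta;\cdot)$ (e.g. an entropy or pseudo-label loss) on the $m$ unlabelled points $u_j$, the debiased risk estimate subtracts the same term evaluated on the $n$ labelled points $x_i$,
$$\hat R_{\mathrm{deb}}(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell(\theta; x_i, y_i) \;+\; \lambda\Big(\frac{1}{m}\sum_{j=1}^{m} H(\theta; u_j) \;-\; \frac{1}{n}\sum_{i=1}^{n} H(\theta; x_i)\Big),$$
so that under MCAR the bracketed term has zero expectation and the estimator remains unbiased for the supervised risk while acting as a variance-reducing control variate.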
Although the exposition and perspective are interesting, this paper is borderline for the following two main reasons:
1. A few reviewers were not convinced of the theoretical result that is supposed to show that unlabeled data strictly helps - indeed, whereas usual variance reduction techniques (e.g for optimization schemes such as SGD etc.) lead to a strict gain in terms of convergence rate, there is no clear asymptotic/high probability statement that indicates statistical gain of the corresponding estimator (which is what we really care about - not the risk estimate). The dependence of lambda_opt on theta (which changes every iteration) does not help in providing such a statement. Since this is the primary contribution of the paper, I would suggest the authors follow through with the analysis to show a gain for the actual estimator compared to the "complete case" (only using supervised data).
On that note, the authors claimed in their rebuttal that they have added an asymptotic variance analysis in Appendix I, which indeed would have made a very valuable point - however, I could only find a copy-pasted version of Theorem 3.1 in Appendix I? Similarly, Appendix F does not seem to include the comparison between debiasing using labeled and unlabeled data but instead contains the proof of Theorem 3.2. Perhaps the wrong revision was uploaded, but unfortunately, given the current version, this point is not adequately addressed.
2. If the experimental results were more extensive and conclusive, then the current theorem could have perhaps been alright as a mainly methodological contribution. However, as the authors note, extensive experiments require a lot of compute power; still, given the lack of the ultimate theorem, the methodology becomes the primary contribution and would thus require more experimental evidence, as the reviewers asked for.
Addressing one of the above points would push the paper above the acceptance threshold; we hope the authors can pursue this in their next submission.
"0YHsV3hPa-O",
"fFeMvETWA9c",
"fic0kAwbcu3",
"3NBO4HSBSDUB",
"BFfqHooaT2q",
"hciVqm41U41",
"tuhtIqCCIOp",
"-3p9yGtSUE4",
"AJpL2D_uLRA",
"SNiCJIVAWvI",
"zva4v1rTTZ2",
"EYktucKNDW",
"QOLHZ3OAix"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for your suggestions and for supporting our paper. The asymptotic normality was a good idea.",
" Thanks for replying to my earlier comments and clarifying the questions and issues. I have read carefully your response and think this paper can be a useful addition to the current studies. I hold m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5
] | [
"fFeMvETWA9c",
"BFfqHooaT2q",
"3NBO4HSBSDUB",
"hciVqm41U41",
"QOLHZ3OAix",
"EYktucKNDW",
"zva4v1rTTZ2",
"SNiCJIVAWvI",
"nips_2022_D87gRf2-np",
"nips_2022_D87gRf2-np",
"nips_2022_D87gRf2-np",
"nips_2022_D87gRf2-np",
"nips_2022_D87gRf2-np"
] |
nips_2022_wwW-1k1ljIg | Pre-activation Distributions Expose Backdoor Neurons | Convolutional neural networks (CNN) can be manipulated to perform specific behaviors when encountering a particular trigger pattern without affecting the performance on normal samples, which is referred to as backdoor attack. The backdoor attack is usually achieved by injecting a small proportion of poisoned samples into the training set, through which the victim trains a model embedded with the designated backdoor. In this work, we demonstrate that backdoor neurons are exposed by their pre-activation distributions, where populations from benign data and poisoned data show significantly different moments. This property is shown to be attack-invariant and allows us to efficiently locate backdoor neurons. On this basis, we make several proper assumptions on the neuron activation distributions, and propose two backdoor neuron detection strategies based on (1) the differential entropy of the neurons, and (2) the Kullback-Leibler divergence between the benign sample distribution and a poisoned statistics based hypothetical distribution. Experimental results show that our proposed defense strategies are both efficient and effective against various backdoor attacks. | Accept | The authors propose a hypothesis that backdoor neurons in an infected neural network have a mixture of two distributions with significantly different moments, formed by benign samples and poisoned samples, respectively. They then propose two mathematically informed and intuitive ways to defend against the attack. The method also seems general against most types of backdoor attacks, and the evaluations give more confidence in that direction. The evaluation is extensive and includes most of the state of the art as comparison. An additional advantage is that this method has better runtime than most.
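A minimal sketch of the entropy-based detection idea follows, assuming per-neuron pre-activations have already been collected on benign data; the Gaussian entropy formula and the median-absolute-deviation outlier rule are simplifications for illustration, not the paper's exact procedure.

```python
import numpy as np

def entropy_scores(preacts):
    """preacts: (num_samples, num_neurons) pre-activations on benign data.
    Under a Gaussian assumption, each neuron's differential entropy is
    0.5 * log(2 * pi * e * var)."""
    var = preacts.var(axis=0) + 1e-12
    return 0.5 * np.log(2 * np.pi * np.e * var)

def flag_suspicious_neurons(preacts, k=3.0):
    """Flag neurons whose entropy deviates from the layer median by more
    than k median-absolute-deviations (a generic outlier criterion)."""
    h = entropy_scores(preacts)
    med = np.median(h)
    mad = np.median(np.abs(h - med)) + 1e-12
    return np.where(np.abs(h - med) > k * mad)[0]
```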
The most critical reviewer, WDQn, was concerned with the lack of evaluations against multiple attack models (multiple label attack, etc.) and the lack of comparisons against SOTA. The authors have responded with detailed and appropriate results in their last response.
Concerns from other reviewers, such as the concern about the robustness of hyperparameter u, were alleviated through ablations/results posted in the authors' rebuttal.
I therefore recommend accept. | train | [
"S91Cbbf91tS",
"JfsFXHteIuN",
"LgpZ0puctbn",
"U-cgnvfkSTo",
"dnwtEKrKAWQ",
"YlWXhUyQ8Hdp",
"g2z709sxrZy",
"DhKgD29wsJ1",
"PVwIU-sC6z1",
"9j-mCnzT0DY",
"vwrbdEvyh_m",
"nH_8kV7XMk2",
"cvSCmBYeJ0n",
"0fQ5UZZkHC2",
"aN4urZwuxJw",
"HaNLWTMOw6"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" __\"However, you should also compare the state-of-art approach for each setting. This is because defenders may have an incentive to implement them simultaneously for defense purposes. Also, comparing with the state-of-art approach for each setting is also better for the readers/reviewers to understand the potenti... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"JfsFXHteIuN",
"9j-mCnzT0DY",
"aN4urZwuxJw",
"g2z709sxrZy",
"g2z709sxrZy",
"DhKgD29wsJ1",
"nH_8kV7XMk2",
"vwrbdEvyh_m",
"0fQ5UZZkHC2",
"aN4urZwuxJw",
"HaNLWTMOw6",
"cvSCmBYeJ0n",
"nips_2022_wwW-1k1ljIg",
"nips_2022_wwW-1k1ljIg",
"nips_2022_wwW-1k1ljIg",
"nips_2022_wwW-1k1ljIg"
] |
nips_2022_EbMuimAbPbs | Flamingo: a Visual Language Model for Few-Shot Learning | Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer, captioning tasks, which evaluate the ability to describe a scene or an event, and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data. | Accept | This paper proposed Flamingo, a visual-language pretrained model, which is built based on existing powerful pretrained pure language models and pure image models. By fixing the parameters of the existing language model and visual model, the proposed model is further pretrained with additional perceiver and gated cross-attention components, on a mixture of vision and language datasets. The model can take as input a sequence of interleaved text and images/videos, and generate text output. The model demonstrated its performance on a range of open-ended and close-ended visual-language tasks, in zero-shot or few-shot settings. Reviewer KruY raised strong concerns about the reproducibility of the work because of not releasing the source codes and datasets and the lack of some dataset details. Nevertheless, the other reviewers all agree to accept the paper because of its contribution to the community, while being aware of the reproducibility problem. I think the paper is good enough, represents a new state of the art in a range of tasks in this area, and is acceptable. | train | [
"PYiC0C8euRR",
"qIuBS1L1b47",
"nUFHyQ9vezy",
"S_xpwIDAxo0",
"Fp3rijc6fPp",
"LJhYH0MUqb",
"uy8sNoJCniD",
"YPkBZ6FERD9",
"eJvbC91XmwK",
"LmlXRUQG6NM",
"h_jNRxCf2m",
"vO82WS2NYAM",
"UGQZHpo_1R",
"IIxV8ZIP8al",
"ybt1fasAYKQ",
"nJKbl-6iZOA",
"XS40g9duY2r",
"VqP1qTLpaWt",
"i5XJlpG9KMF"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" The main ethical issue brought up by several reviewers relates to reproducibility. More specifically, the dataset used to train the model as well as the trained model are proprietary. As a result, researchers will be unable to verify/reproduce the specific results presented in the paper.\n\nOne additional point t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4,
4
] | [
"nips_2022_EbMuimAbPbs",
"ybt1fasAYKQ",
"h_jNRxCf2m",
"Fp3rijc6fPp",
"LJhYH0MUqb",
"eJvbC91XmwK",
"YPkBZ6FERD9",
"XS40g9duY2r",
"LmlXRUQG6NM",
"nJKbl-6iZOA",
"vO82WS2NYAM",
"VqP1qTLpaWt",
"IIxV8ZIP8al",
"i5XJlpG9KMF",
"vQ0JJxwi7g",
"nips_2022_EbMuimAbPbs",
"nips_2022_EbMuimAbPbs",
... |
nips_2022_b57KM4ydqpp | The Curse of Unrolling: Rate of Differentiating Through Optimization | Computing the Jacobian of the solution of an optimization problem is a central problem in machine learning, with applications in hyperparameter optimization, meta-learning, optimization as a layer, and dataset distillation, to name a few. Unrolled differentiation is a popular heuristic that approximates the solution using an iterative solver and differentiates it through the computational path. This work provides a non-asymptotic convergence-rate analysis of this approach on quadratic objectives for gradient descent and the Chebyshev method. We show that to ensure convergence of the Jacobian, we can either 1) choose a large learning rate leading to a fast asymptotic convergence but accept that the algorithm may have an arbitrarily long burn-in phase or 2) choose a smaller learning rate leading to an immediate but slower convergence. We refer to this phenomenon as the curse of unrolling.
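To make the heuristic concrete, here is a minimal, self-contained sketch of unrolled differentiation on a two-dimensional quadratic; the matrix, step count, and step size are arbitrary illustrative choices.

```python
import torch

# f(x) = 0.5 * x^T A x - b^T x, minimized by T unrolled steps of gradient
# descent; autograd then differentiates the final iterate w.r.t. the
# step size through the entire computational path.
A = torch.tensor([[3.0, 0.0], [0.0, 1.0]])   # symmetric positive definite
b = torch.tensor([1.0, 1.0])
lr = torch.tensor(0.2, requires_grad=True)

x = torch.zeros(2)
for _ in range(50):                  # T = 50 unrolled iterations
    x = x - lr * (A @ x - b)         # each step stays on the autograd graph
jac = torch.autograd.grad(x.sum(), lr)[0]   # d(sum of x_T) / d(step size)
```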
Finally, we discuss open problems related to this approach, such as deriving a practical update rule for the optimal unrolling strategy and making novel connections with the field of Sobolev orthogonal polynomials. | Accept | All the reviewers judged the paper to be novel and interesting and voted to accept it. There were some concerns about the size of the experiments that were partially resolved during the discussion phase. Hence, I encourage the authors to add even more experiments in the camera-ready to increase the impact of the paper. | train | [
"Nc6D-YElV5s",
"fWsd8l3VXQO",
"nqI6jMv1nf4",
"L2k2YEPD9X_",
"V8D18OvoKGP",
"Ic-Iy7qoM_",
"NfdFc6yQ-nM"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the anonymous reviewers for the time spent on the paper and their insightful comments. We have uploaded a revised manuscript. The main difference with respect to the original manuscript is the addition of experiments on logistic regression (Appendix B), and the corrections and clarificatio... | [
-1,
-1,
-1,
-1,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nips_2022_b57KM4ydqpp",
"NfdFc6yQ-nM",
"Ic-Iy7qoM_",
"V8D18OvoKGP",
"nips_2022_b57KM4ydqpp",
"nips_2022_b57KM4ydqpp",
"nips_2022_b57KM4ydqpp"
] |
nips_2022_kS5KG3mpSY | Adaptive Multi-stage Density Ratio Estimation for Learning Latent Space Energy-based Model | This paper studies the fundamental problem of learning an energy-based model (EBM) in the latent space of the generator model. Learning such a prior model typically requires running costly Markov Chain Monte Carlo (MCMC). Instead, we propose to use noise contrastive estimation (NCE) to discriminatively learn the EBM through density ratio estimation between the latent prior density and latent posterior density. However, NCE typically fails to accurately estimate such a density ratio given the large gap between the two densities. To effectively tackle this issue and further learn a more expressive prior model, we develop the adaptive multi-stage density ratio estimation, which breaks the estimation into multiple stages and learns the different stages of the density ratio sequentially and adaptively. The latent prior model can be gradually learned using the ratio estimated in the previous stage, so that the final latent space EBM prior can be naturally formed by the product of ratios in different stages. The proposed method enables an informative and much sharper prior than existing baselines, and can be trained efficiently. Our experiments demonstrate strong performance in terms of image generation and reconstruction as well as anomaly detection. | Accept | Reviewers unanimously agree that this submission is of good technical quality and well written. The authors proposed an unsupervised learning paradigm for latent variable models, based on EBMs and NCE, by extending the prior work on telescoping density-ratio estimation.
This paper proposes using EBMs in the latent space before being pushed through a latent variable model. The paper proposes learning this EBM using NCE instead of via MCMC sampling. The difference between the prior and the latent posterior is estimated using short-run Langevin dynamics instead of a recognition network (variational inference). This technique is applied repeatedly to obtain several “stages” of density ratio estimation. Reviewers find the adaptive multi-stage method interesting and are convinced that it is effective. The authors' rebuttal also helped the reviewers' understanding of this paper.
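A single stage of the NCE-based ratio estimation can be sketched as below; the network width, latent dimension, and loss form are arbitrary illustrative choices (the paper stacks several such stages and folds the learned ratios into the prior).

```python
import torch
import torch.nn as nn

latent_dim = 64   # illustrative
ratio_net = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                          nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()

def nce_loss(z_posterior, z_prior):
    """Train a classifier to separate posterior from prior samples; at the
    optimum its logit equals the log density ratio log q(z) / p(z)."""
    logits = torch.cat([ratio_net(z_posterior), ratio_net(z_prior)])
    labels = torch.cat([torch.ones(len(z_posterior), 1),
                        torch.zeros(len(z_prior), 1)])
    return bce(logits, labels)
```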
| train | [
"TMArpAoR-n",
"CexVK_FGJy0",
"kgFZRnK0TJlH",
"uFSeSy1jX3",
"qr5S4nOOiHDR",
"P0Icw60Tj6k",
"JeSXimmjQtN",
"AMnquyVmDTI",
"90-dYOFdiSd"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response and your detailed answer to my question. This clears up the uncertainty I had, and I am happy to increase my score.",
" Thank you for your response and your detailed answer to my question. This clears up the uncertainty I had, and I am happy to increase my score.",
" Please let us ... | [
-1,
-1,
-1,
-1,
7,
-1,
-1,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
4,
4
] | [
"kgFZRnK0TJlH",
"P0Icw60Tj6k",
"uFSeSy1jX3",
"qr5S4nOOiHDR",
"nips_2022_kS5KG3mpSY",
"90-dYOFdiSd",
"AMnquyVmDTI",
"nips_2022_kS5KG3mpSY",
"nips_2022_kS5KG3mpSY"
] |
nips_2022_Yg2CRGUln5k | Distributionally robust weighted k-nearest neighbors | Learning a robust classifier from a few samples remains a key challenge in machine learning. A major thrust of research has been focused on developing k-nearest neighbor (k-NN) based algorithms combined with metric learning that captures similarities between samples. When the samples are limited, robustness is especially crucial to ensure the generalization capability of the classifier. In this paper, we study a minimax distributionally robust formulation of weighted k-nearest neighbors, which aims to find the optimal weighted k-NN classifiers that hedge against feature uncertainties. We develop an algorithm, Dr.k-NN, that efficiently solves this functional optimization problem and features in assigning minimax optimal weights to training samples when performing classification. These weights are class-dependent, and are determined by the similarities of sample features under the least favorable scenarios. When the size of the uncertainty set is properly tuned, the robust classifier has a smaller Lipschitz norm than the vanilla k-NN, and thus improves the generalization capability. We also couple our framework with neural-network-based feature embedding. We demonstrate the competitive performance of our algorithm compared to the state-of-the-art in the few-training-sample setting with various real-data experiments. | Accept | The reviewers agree that this is an interesting paper (especially PjNq) with substantial results that justify its acceptance. I can only recommend including all of the discussion parts in the camera-ready version.
"ZNzhx0PRW2K",
"Bz06NIjMc5",
"Twcsp5j5LUS",
"NjgngJgjMyx",
"zy1tNTfe6fj",
"lwqJVQ2Tpfr",
"ROCgqwPx9du",
"hgqYE1muUg",
"3ZXLyRJ3sAq",
"fufDhLEL1ku",
"ja_j8YtT0d",
"-GAb_eiAMVD",
"_JkIv_KW62a",
"2PKxr5PP0HU"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors,\n\nThank you for your reply. I understand your motivation. Thanks for the explanation. \n\nBest regards,\nReviewer PP67",
" Dear Reviewer, \n\nWe thank you for your feedback and please see our responses in the following:\n\n(1) As we mentioned in the introduction, we want to clarify that we focus ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"Twcsp5j5LUS",
"zy1tNTfe6fj",
"NjgngJgjMyx",
"hgqYE1muUg",
"ROCgqwPx9du",
"nips_2022_Yg2CRGUln5k",
"2PKxr5PP0HU",
"_JkIv_KW62a",
"-GAb_eiAMVD",
"ja_j8YtT0d",
"nips_2022_Yg2CRGUln5k",
"nips_2022_Yg2CRGUln5k",
"nips_2022_Yg2CRGUln5k",
"nips_2022_Yg2CRGUln5k"
] |
nips_2022_CEjuyeZj1jz | Finite-Time Analysis of Fully Decentralized Single-Timescale Actor Critic | Decentralized Actor-Critic (AC) algorithms have been widely utilized for multi-agent reinforcement learning (MARL) and have achieved remarkable success. Apart from its empirical success, the theoretical convergence property of decentralized AC algorithms is largely unexplored. The existing finite-time convergence results are derived based on either a double-loop update or a two-timescale step size rule, which is not often adopted in real implementation. In this work, we introduce a fully decentralized AC algorithm, where actor, critic, and global reward estimator are updated in an alternating manner with step sizes being of the same order, namely, we adopt the \emph{single-timescale} update. Theoretically, using linear approximation for value and reward estimation, we show that our algorithm has sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ under Markovian sampling, which matches the optimal complexity with double-loop implementation (here, $\tilde{\mathcal{O}}$ hides a log term). The sample complexity can be improved to ${\mathcal{O}}(\epsilon^{-2})$ under the i.i.d. sampling scheme. Central to establishing our complexity results is \emph{the hidden smoothness of the optimal critic variable} that we reveal. We also provide a local action privacy-preserving version of our algorithm and its analysis. Finally, we conduct experiments to show the superiority of our algorithm over the existing decentralized AC algorithms. | Reject | This paper proposes and analyzes the convergence rate of a single-timescale actor-critic algorithm. The reviewers reached the consensus that it is above the borderline-acceptance bar. However, after an in-depth discussion during the reviewer-metareviewer discussion period, the reviewers pointed out some critical problems: although the superiority of the proposed algorithm over classic double-loop algorithms is validated, that over previous single-timescale single-loop algorithms is not discussed. One reviewer carefully checked both the original and the revised version of the paper but still claimed that extending the proof technique adopted in this paper would lead to a worse convergence rate compared with previous work. In general, I agree with the reviewer. Thus, I think the current version of the paper does not make a compelling case for its archival value due to the lack of a more rigorous discussion of related issues.
| train | [
"04e_jeV0Md",
"pPHG77GRD-G",
"qu9odZyOgZh",
"hOVfljU1mq9",
"gES9vjrltkn",
"WiIkUbGt4G",
"jmcAgcdrPC7",
"Q4hq2Nr3Hyf",
"ym5vJAECYs6",
"GMqjCmayzy4",
"jCdWzfulYm9",
"VWv9b5N2EQ4"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer T2LJ,\n\nSince the deadline of the open-discussion period is within hours, we eagerly look forward to your feedback on our response. If you think we have addressed most of your concerns, we would greatly appreciate if you could reconsider your score (as indicated in your initial review report).\n\nB... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"GMqjCmayzy4",
"GMqjCmayzy4",
"nips_2022_CEjuyeZj1jz",
"gES9vjrltkn",
"WiIkUbGt4G",
"VWv9b5N2EQ4",
"jCdWzfulYm9",
"GMqjCmayzy4",
"GMqjCmayzy4",
"nips_2022_CEjuyeZj1jz",
"nips_2022_CEjuyeZj1jz",
"nips_2022_CEjuyeZj1jz"
] |
nips_2022_O0HTonUP2A2 | Towards Disentangling Information Paths with Coded ResNeXt | The conventional, widely used treatment of deep learning models as black boxes provides limited or no insights into the mechanisms that guide neural network decisions. Significant research effort has been dedicated to building interpretable models to address this issue. Most efforts either focus on the high-level features associated with the last layers, or attempt to interpret the output of a single layer. In this paper, we take a novel approach to enhance the transparency of the function of the whole network. We propose a neural network architecture for classification, in which the information that is relevant to each class flows through specific paths. These paths are designed in advance before training, leveraging coding theory and without depending on the semantic similarities between classes. A key property is that each path can be used as an autonomous single-purpose model. This enables us to obtain, without any additional training and for any class, a lightweight binary classifier that has at least $60\%$ fewer parameters than the original network. Furthermore, our coding theory based approach allows the neural network to make early predictions at intermediate layers during inference, without requiring its full evaluation. Remarkably, the proposed architecture provides all the aforementioned properties while improving the overall accuracy. We demonstrate these properties on a slightly modified ResNeXt model tested on CIFAR-10/100 and ImageNet-1k. | Accept | The authors propose a modification to ResNeXt where each sample is routed to a subset of the network. The aim is to activate only a subset of the network (subNN) and, in fact, to extract a binary classifier for each class from the trained larger network, with a significantly smaller parameter footprint. The main idea is conceptually simple -- in each ResNeXt block, pick a subset of paths for each class (e.g. 3 sub-blocks for each class) precomputed in a data-independent way via a simple binary coding scheme. To ensure that these blocks specialize to a given class, a "coding loss" is added to the classic cross-entropy loss. The coding loss forces the mean energies of the subNNs (sum of squared activations, summed across C, H, W) inactive for class k to zero and those of the active subNNs to positive values. In addition, the authors add a stochastic dropout-like operation where the output of a subNN can be zeroed out. The authors empirically validate the idea on multiple datasets and show that (1) it can improve ResNeXt accuracy, and (2) competitive class-specific binary classifiers can be extracted post hoc. The ablation studies clearly demonstrate that a high degree of specialization is enforced.
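One plausible form of the coding loss just described is sketched below; the margin, the squared-energy form (the specific 4th-power variant is discussed in the next paragraph), and the tensor shapes are illustrative assumptions rather than the authors' exact loss.

```python
import torch

def coding_loss(path_acts, codes, labels, margin=1.0):
    """path_acts: (B, P, C, H, W) outputs of the P parallel sub-blocks.
    codes:        (num_classes, P) binary matrix of pre-assigned active paths.
    labels:       (B,) class indices.
    Energies of sub-blocks inactive for a sample's class are pushed to
    zero; energies of active sub-blocks are pushed above a margin."""
    energy = path_acts.pow(2).sum(dim=(2, 3, 4))   # (B, P) per-path energies
    active = codes[labels].float()                 # (B, P) active-path mask
    loss_off = ((1 - active) * energy).mean()
    loss_on = (active * torch.relu(margin - energy)).mean()
    return loss_off + loss_on
```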
The reviewers appreciated the novelty and the conceptual simplicity. The method also seems practically significant, especially if it can be extended to other neural architectures. The ablation studies were well-received. After the discussion phase there are still some questions related to the rather specific choice of the 4th power for the coding loss, as well as positioning with respect to works building on the early exit ideas. Nevertheless, the reviewers agreed that this idea is novel and interesting for the larger crowd and seems practically significant. Please update the manuscript as discussed. | train | [
"z5-65boqyMZ",
"C7J8Pn0PVH",
"MZIoT9frRPe",
"OT16J-JP293",
"dEMD4dp6wThZ",
"Sq5J5mkpfF",
"piP-mkXyuQ",
"l7aCvywymVO",
"mkt4gXvvjdD"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your responses. I have also read other reviews and responses to them. \n\n- I do think the proposed presentation improvement is helpful! \n- I really hope that there is some time and space in the paper to explore the nature of feature disentanglement and shared network portions before camera-ready b... | [
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
4
] | [
"C7J8Pn0PVH",
"mkt4gXvvjdD",
"l7aCvywymVO",
"piP-mkXyuQ",
"Sq5J5mkpfF",
"nips_2022_O0HTonUP2A2",
"nips_2022_O0HTonUP2A2",
"nips_2022_O0HTonUP2A2",
"nips_2022_O0HTonUP2A2"
] |
nips_2022_P7TayMSBhnV | Stability and Generalization for Markov Chain Stochastic Gradient Methods | Recently, a large amount of work has been devoted to the study of Markov chain stochastic gradient methods (MC-SGMs), which mainly focuses on their convergence analysis for solving minimization problems. In this paper, we provide a comprehensive generalization analysis of MC-SGMs for both minimization and minimax problems through the lens of algorithmic stability in the framework of statistical learning theory. For empirical risk minimization (ERM) problems, we establish the optimal excess population risk bounds for both smooth and non-smooth cases by introducing on-average argument stability. For minimax problems, we develop a quantitative connection between on-average argument stability and generalization error, which extends the existing results for uniform stability (Lei et al., 2021). We further develop the first nearly optimal convergence rates for convex-concave problems both in expectation and with high probability, which, combined with our stability results, show that the optimal generalization bounds can be attained for both smooth and non-smooth cases. To the best of our knowledge, this is the first generalization analysis of SGMs when the gradients are sampled from a Markov process.
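As a toy illustration of the sampling scheme analyzed here, the snippet below runs SGD on a least-squares problem while the data index follows a Markov chain (a lazy random walk over the index set) rather than being drawn i.i.d.; all constants are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
w, i = np.zeros(d), 0
for t in range(1, 2001):
    i = (i + rng.choice([-1, 0, 1])) % n      # one step of the index chain
    grad = (X[i] @ w - y[i]) * X[i]           # stochastic gradient at state i
    w -= 0.1 / np.sqrt(t) * grad
```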
| Accept | The paper presents a new generalization analysis of SGD with MC sampling by using algorithmic stability. The reviewers agreed that the technical contribution is novel and interesting. Though initially two reviewers were concerned about the potential applications for SGD with MC sampling, the authors have updated their paper pointing out several applications that fit the type of MC sampling assumed in their proof. | val | [
"wQPLBy03X67",
"-JK3ggbTCIN",
"mzXK6RNOEUY",
"MKAGXgHjmed",
"JaEin7_woyaR",
"jZGt4EZFzFk",
"uWDa8kgB0Wo",
"skOiyKaBh0p"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all the reviewers for the constructive comments given in the initial reviews. We hope our responses convince the reviewers about the merits of this work. If the reviewer has any other suggestions or comments, please don’t hesitate to let us know.",
" Thank you for the constructive comment... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
1
] | [
"nips_2022_P7TayMSBhnV",
"skOiyKaBh0p",
"uWDa8kgB0Wo",
"JaEin7_woyaR",
"jZGt4EZFzFk",
"nips_2022_P7TayMSBhnV",
"nips_2022_P7TayMSBhnV",
"nips_2022_P7TayMSBhnV"
] |
nips_2022_YpHb0IVJu92 | Safe Opponent-Exploitation Subgame Refinement | In zero-sum games, an NE strategy tends to be overly conservative when confronted with opponents of limited rationality, because it does not actively exploit their weaknesses. From another perspective, best responding to an estimated opponent model is vulnerable to estimation errors and lacks safety guarantees. Inspired by the recent success of real-time search algorithms in developing superhuman AI, we investigate the dilemma of safety and opponent exploitation and present a novel real-time search framework, called Safe Exploitation Search (SES), which continuously interpolates between the two extremes of online strategy refinement. We provide SES with a theoretically upper-bounded exploitability and a lower-bounded evaluation performance. Additionally, SES enables computationally efficient online adaptation to a possibly updating opponent model, while previous safe exploitation methods have to recompute for the whole game. Empirical results show that SES significantly outperforms NE baselines and previous algorithms while keeping exploitability low at the same time. | Accept | This paper proposes an algorithm for searching for safe opponent exploitation strategies. The algorithm is based on alternating between a safe max-margin search and an unsafe exploitation search. The paper provides strong theoretical guarantees for the algorithm, and some experiments that show the advantages of this algorithm. The reviewers found the paper well written and the idea of the algorithm interesting. A concern raised by one reviewer is the limited novelty of this work, since it is an application of DeepStack. Another reviewer shared this concern during the discussion, but argued that the paper still has merits because of the provided theoretical bounds and experiments.
"aVqAEhSMAvy",
"N_lo4M1oUH2",
"AaF2IHQM03M",
"bbSliDm6dE",
"pmJNsV0LMgw",
"TU2U3EfTE6D",
"ij2Dlnrq7gg",
"CAG5paDSsZw",
"_7JO1-isID",
"PsvpUXREW7",
"XQ8SdZbkIwz",
"6UTmKNQb6j6"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your prompt feedback and raising the score to 7. But the current score is still 6 and we will really appreciate it if you can update the score accordingly.",
" Right, one difference in technical details is that our method uses \"Maxmargin\" search, while DeepStack uses \"resolving\". Ano... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"AaF2IHQM03M",
"bbSliDm6dE",
"_7JO1-isID",
"CAG5paDSsZw",
"nips_2022_YpHb0IVJu92",
"6UTmKNQb6j6",
"6UTmKNQb6j6",
"XQ8SdZbkIwz",
"PsvpUXREW7",
"nips_2022_YpHb0IVJu92",
"nips_2022_YpHb0IVJu92",
"nips_2022_YpHb0IVJu92"
] |
nips_2022_nZRTRevUO- | Local Latent Space Bayesian Optimization over Structured Inputs | Bayesian optimization over the latent spaces of deep autoencoder models (DAEs) has recently emerged as a promising new approach for optimizing challenging black-box functions over structured, discrete, hard-to-enumerate search spaces (e.g., molecules). Here the DAE dramatically simplifies the search space by mapping inputs into a continuous latent space where familiar Bayesian optimization tools can be more readily applied. Despite this simplification, the latent space typically remains high-dimensional. Thus, even with a well-suited latent space, these approaches do not necessarily provide a complete solution, but may rather shift the structured optimization problem to a high-dimensional one. In this paper, we propose LOL-BO, which adapts the notion of trust regions explored in recent work on high-dimensional Bayesian optimization to the structured setting. By reformulating the encoder to function as both an encoder for the DAE globally and as a deep kernel for the surrogate model within a trust region, we better align the notion of local optimization in the latent space with local optimization in the input space. LOL-BO achieves as much as 20 times improvement over state-of-the-art latent space Bayesian optimization methods across six real-world benchmarks, demonstrating that improvement in optimization strategies is as important as developing better DAE models. | Accept | This paper develops a well-engineered approach to black-box optimization of expensive functions over structured spaces (e.g., graphs). The solution falls within the framework of Bayesian optimization (BO) over a latent (continuous) space learned from structures in an unsupervised manner using deep generative models. One challenge in this framework is the high dimensionality of the learned latent space. One straightforward approach to handle this challenge is to apply state-of-the-art high-dimensional BO methods such as trust-region-based BO. The paper hypothesizes that there is a mismatch between the trust region in the latent space and the corresponding trust region in the structured space, and proposes a joint training/inference method over the Gaussian process surrogate model and the deep generative model to overcome this challenge. Experimental results on multiple molecule design optimization and arithmetic expression tasks show clear improvements over prior methods, as well as gains from the SELFIES representation for this approach.
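The trust-region machinery referred to above follows the TuRBO recipe; a minimal sketch of the bookkeeping is below, with thresholds and expansion/shrinkage factors as illustrative defaults rather than the paper's tuned values.

```python
def update_trust_region(length, success, n_succ, n_fail,
                        grow=2.0, shrink=0.5, tol_succ=3, tol_fail=3):
    """Expand the latent-space trust region after consecutive successes
    (improvements of the incumbent) and shrink it after consecutive
    failures; candidates are sampled inside a box of side `length`
    centered at the best latent point."""
    if success:
        n_succ, n_fail = n_succ + 1, 0
    else:
        n_succ, n_fail = 0, n_fail + 1
    if n_succ >= tol_succ:
        length, n_succ = length * grow, 0
    if n_fail >= tol_fail:
        length, n_fail = length * shrink, 0
    return length, n_succ, n_fail

# Example: three improvements in a row trigger expansion, then a failure.
length, n_succ, n_fail = 0.8, 0, 0
for improved in [True, True, True, False]:
    length, n_succ, n_fail = update_trust_region(length, improved,
                                                 n_succ, n_fail)
```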
Some of the reviewers questioned the overall novelty of the approach, as it combines advances in latent space BO, trust-region BO, and high-dimensional BO, but there is agreement that the overall strengths outweigh this concern. This paper makes a useful contribution to the latent space BO framework. Therefore, I recommend acceptance. | train | [
"dc9pcNgLYX",
"hSB8zl9Tuu",
"QZkrYK7-hDk",
"-jSg25eP_PB",
"u19PajejhOO",
"xc76MyNvwtC",
"8-9a5Les3YC",
"cPyzw-nJSNm",
"RUHEojCK9XB",
"XQuF0cFBn8W"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed comments which help in understanding several aspects of the paper.\nI agree with the authors regarding the applicability of BO to more general settings and the overall objective of the paper. \nAlthough I still believe novelty is a concern, however, I agree and appreciate the simplicity in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"hSB8zl9Tuu",
"QZkrYK7-hDk",
"u19PajejhOO",
"xc76MyNvwtC",
"XQuF0cFBn8W",
"RUHEojCK9XB",
"cPyzw-nJSNm",
"nips_2022_nZRTRevUO-",
"nips_2022_nZRTRevUO-",
"nips_2022_nZRTRevUO-"
] |
nips_2022_deyqjpcTfsG | Iron: Private Inference on Transformers | We initiate the study of private inference on Transformer-based models in the client-server setting, where clients have private inputs and servers hold proprietary models. Our main contribution is to provide several new secure protocols for matrix multiplication and complex non-linear functions like Softmax, GELU activations, and LayerNorm, which are critical components of Transformers. Specifically, we first propose a customized homomorphic encryption-based protocol for matrix multiplication that crucially relies on a novel compact packing technique. This design achieves $\sqrt{m} \times$ less communication ($m$ is the number of rows of the output matrix) over the most efficient prior work. Second, we design efficient protocols for three non-linear functions via integrating advanced underlying protocols and specialized optimizations. Compared to the state-of-the-art protocols, our recipes reduce about half of the communication and computation overhead. Furthermore, all protocols are numerically precise, which preserves the plaintext model accuracy. These techniques together allow us to implement Iron, an efficient Transformer-based private inference framework. Experiments conducted on several real-world datasets and models demonstrate that Iron achieves $3 \sim 14\times$ less communication and $3 \sim 11\times$ less runtime compared to the prior art. | Accept | The paper studies private inference on transformer-based models. It provides methods to securely perform matrix multiplication and certain other non-linear function computations. The evaluations find that their methods are much more efficient than the state of the art.
All of the reviewers are positive about the paper and I recommend acceptance subject to the authors following up on the promised changes (including "We have submitted our source code in the supplementary material in the revision, and will open source it in the near future"). | train | [
"NQSn59-Yb79",
"CmudyIKzNzdy",
"1OdmFsJoGLl",
"RI0HE0xFGq9",
"m3dv8pNoZpG",
"VL1foo40Wk3",
"c_oha9hZxo-",
"wRKyDLfRNsvK",
"7gSaRGVnbJ3",
"iEjuD6te6V",
"bLCwQvM_nAn",
"uyXk9Ah6Qlk",
"d1YE6P2iw-N",
"pJzkU1MePJb"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are sorry for not citing the concurrent work [50] in the main content. We will add it in the revised version. Thanks for your positive review.",
" Thanks for clarifying the difference. I noticed that the reference [50] was not added to the main content. It would be helpful for other researchers to better und... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"CmudyIKzNzdy",
"7gSaRGVnbJ3",
"RI0HE0xFGq9",
"VL1foo40Wk3",
"pJzkU1MePJb",
"pJzkU1MePJb",
"pJzkU1MePJb",
"d1YE6P2iw-N",
"d1YE6P2iw-N",
"d1YE6P2iw-N",
"uyXk9Ah6Qlk",
"nips_2022_deyqjpcTfsG",
"nips_2022_deyqjpcTfsG",
"nips_2022_deyqjpcTfsG"
] |
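Iron (record above) replaces generic MPC primitives with a homomorphic-encryption protocol, but the baseline it improves on is instructive. Below is a self-contained numpy sketch of the classic Beaver-triple trick for multiplying additively secret-shared matrices; the ring modulus, matrix shapes, and trusted-dealer setup are arbitrary demo choices, and none of this reproduces Iron's packing technique.

```python
import numpy as np

P = 2 ** 16                              # plaintext modulus (ring Z_P)
rng = np.random.default_rng(1)

def share(m):
    """Split m into two additive shares summing to m mod P."""
    r = rng.integers(0, P, m.shape, dtype=np.int64)
    return r, (m - r) % P

# Inputs kept small so the true product fits below P (no wraparound).
X = rng.integers(0, 30, (3, 4), dtype=np.int64)   # client's private activations
Y = rng.integers(0, 30, (4, 2), dtype=np.int64)   # server's private weights
x0, x1 = share(X)
y0, y1 = share(Y)

# Correlated randomness from a trusted dealer: random A, B and C = A @ B.
A = rng.integers(0, P, X.shape, dtype=np.int64)
B = rng.integers(0, P, Y.shape, dtype=np.int64)
a0, a1 = share(A)
b0, b1 = share(B)
c0, c1 = share(A @ B % P)

# The only values communicated: both parties open E = X - A and F = Y - B.
E = (x0 - a0 + x1 - a1) % P
F = (y0 - b0 + y1 - b1) % P

# Local computation of additive shares of X @ Y.
z0 = (E @ F + E @ b0 + a0 @ F + c0) % P
z1 = (E @ b1 + a1 @ F + c1) % P

assert np.array_equal((z0 + z1) % P, X @ Y % P)
print("reconstructed X @ Y matches the plaintext product")
```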
nips_2022_AJ_flTkNFhP | Uncertainty-Aware Hierarchical Refinement for Incremental Implicitly-Refined Classification | The incremental implicitly-refined classification task aims at assigning hierarchical labels to each sample encountered at different phases. Existing methods tend to fail in generating hierarchy-invariant descriptors when the novel classes are inherited from the old ones. To address the issue, this paper, which explores the inheritance relations in the process of multi-level semantic increment, proposes an Uncertainty-Aware Hierarchical Refinement (UAHR) scheme. Specifically, our proposed scheme consists of a global representation extension strategy that enhances the discrimination of incremental representations by widening the corresponding margin distance, and a hierarchical distribution alignment strategy that refines the distillation process by explicitly determining the inheritance relationship of the incremental class. Particularly, the shifting subclasses are corrected under the guidance of hierarchical uncertainty, ensuring the consistency of the homogeneous features. Extensive experiments on widely used benchmarks (i.e., IIRC-CIFAR, IIRC-ImageNet-lite, IIRC-ImageNet-Subset, and IIRC-ImageNet-full) demonstrate the superiority of our proposed method over the state-of-the-art approaches. | Accept | Three reviewers are positive about this paper. Although the rating of one reviewer is borderline reject, he is fairly confident in his assessment. In effect, the authors have addressed all the reviewers' concerns well in the rebuttal. So I suggest accepting this paper. | train | [
"QDphioUB9G2",
"jhdefsAUuT1",
"T0zdUoXMwnT",
"tD1EXFKDays",
"gYRilA44F8d",
"p9ohM5cWCGb",
"jIOwAL8WQ52",
"AhVWDCc5M8Nq",
"neSvSDnShjn",
"d45SJ2-Llkcd",
"KhwqgPQFaVe",
"zPaLF2SVqis",
"DcaAQrP7Iq",
"1LdsLJFgNP0",
"qFkyoAiuwm",
"ig8stBzEsVt",
"LC5cWpTDRWd",
"vGAjZmbV8sI",
"9V8N4IL_Q... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors’ detailed response, which addresses my concerns about 1) the motivation of the RBF kernel; 2) the effectiveness of KD. But the authors seem to ignore the reviewer’s major concern about 1) the missing quantitative comparisons with the work [Ref 4], which is a recent representative work in th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
2
] | [
"LC5cWpTDRWd",
"KhwqgPQFaVe",
"qFkyoAiuwm",
"DcaAQrP7Iq",
"p9ohM5cWCGb",
"neSvSDnShjn",
"nips_2022_AJ_flTkNFhP",
"9V8N4IL_Qea",
"9V8N4IL_Qea",
"vGAjZmbV8sI",
"vGAjZmbV8sI",
"LC5cWpTDRWd",
"LC5cWpTDRWd",
"ig8stBzEsVt",
"ig8stBzEsVt",
"nips_2022_AJ_flTkNFhP",
"nips_2022_AJ_flTkNFhP",
... |
nips_2022_rjbl59Qkf_ | Understanding Why Generalized Reweighting Does Not Improve Over ERM | Empirical risk minimization (ERM) is known in practice to be non-robust to distributional shift where the training and the test distributions are different. A suite of approaches, such as importance weighting, and variants of distributionally robust optimization (DRO), have been proposed to solve this problem. But a line of recent work has empirically shown that these approaches do not significantly improve over ERM in real applications with distribution shift. The goal of this work is to obtain a comprehensive theoretical understanding of this intriguing phenomenon. We first posit the class of Generalized Reweighting (GRW) algorithms, as a broad category of approaches that iteratively update model parameters based on iterative reweighting of the training samples. We show that when overparameterized models are trained under GRW, the resulting models are close to that obtained by ERM. We also show that adding small regularization which does not greatly affect the empirical training accuracy does not help. Together, our results show that a broad category of what we term GRW approaches are not able to achieve distributionally robust generalization. Our work thus has the following sobering takeaway: to make progress towards distributionally robust generalization, we either have to develop non-GRW approaches, or perhaps devise novel classification/regression loss functions that are adapted to the class of GRW approaches. | Reject | The paper investigates generalized reweighting schemes that assign to each sample a weight during each iteration of GD over the empirical error. In the case of over-parameterized linear models and linearly independent samples it is shown that the final(= GD with infinite trajectory) ERM solution is not affected by the reweighting as long as the weights converge. These results are extended to the NTK regime with 0-initialization of the last layer under the assumption that the samples in the initial NTK-feature space are linearly independent.
Compared to previous papers it allows different weights at each iteration and applies the setup to neural networks. However, as the weights need to converge (see Assumption 1) and only the NTK-regime (which is rather close to the previously considered linear case) is treated, the novelty is somewhat limited. In addition, it would have been nice to see interesting examples, in which the assumptions are actually satisfied.
Although I tend to vote for rejection, I think this paper should be compared to other papers of similar strength in order to make a final decision. | train | [
"MB-9-2h-GwU",
"xE036BE1upK",
"fvF2byBhV8",
"evH1VV1Ncu",
"Z55DP3sTG7Z",
"BphKykTzKja",
"aY0YEY2e01t",
"lHl3UMf35x",
"oB1sy5Aya3X",
"yKDPpj1sv-9",
"aBn5REnsxC",
"nQqRl1cPmDC",
"UliqQCtlW9a"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nIf you would like to have a last minute discussion with us, we would be very happy to discuss.\n\nThanks, \nAuthors",
" Dear reviewer,\n\nIf you would like to have a discussion with us, we would be very happy to do so.\n\nThanks,\nAuthors",
" Dear reviewer,\n\nIf you would like to have a d... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
2
] | [
"nips_2022_rjbl59Qkf_",
"aY0YEY2e01t",
"BphKykTzKja",
"lHl3UMf35x",
"oB1sy5Aya3X",
"UliqQCtlW9a",
"nQqRl1cPmDC",
"aBn5REnsxC",
"yKDPpj1sv-9",
"nips_2022_rjbl59Qkf_",
"nips_2022_rjbl59Qkf_",
"nips_2022_rjbl59Qkf_",
"nips_2022_rjbl59Qkf_"
] |
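A toy numpy illustration of the claim in the GRW record above: on an overparameterized linear model started from the same initialization, gradient descent on a reweighted squared loss reaches essentially the same interpolating solution as unweighted ERM, provided the weights stay bounded away from zero. The data, weight range, and step size below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 10, 50                            # overparameterized: d > n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

def run_gd(q, lr=0.01, steps=20000):
    """GD on the reweighted loss (1/2n) * sum_i q_i * (x_i . w - y_i)^2."""
    w = np.zeros(d)                      # identical initialization for each run
    for _ in range(steps):
        w -= lr * X.T @ (q * (X @ w - y)) / n
    return w

w_erm = run_gd(np.ones(n))                        # plain ERM
w_grw = run_gd(rng.uniform(0.2, 5.0, n))          # fixed positive reweighting

print("ERM interpolates:", np.allclose(X @ w_erm, y, atol=1e-3))
print("GRW interpolates:", np.allclose(X @ w_grw, y, atol=1e-3))
print("distance between the two solutions:", np.linalg.norm(w_erm - w_grw))
```

Both runs stay in the row span of X, where the interpolating solution is unique, so the printed distance is near zero; this is the finite-dimensional version of the paper's "GRW is close to ERM" conclusion.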
nips_2022_e4Wf6112DI | Increasing the Scope as You Learn: Adaptive Bayesian Optimization in Nested Subspaces | Recent advances have extended the scope of Bayesian optimization (BO) to expensive-to-evaluate black-box functions with dozens of dimensions, aspiring to unlock impactful applications, for example, in the life sciences, neural architecture search, and robotics. However, a closer examination reveals that the state-of-the-art methods for high-dimensional Bayesian optimization (HDBO) suffer from degrading performance as the number of dimensions increases, or even risk failure if certain unverifiable assumptions are not met. This paper proposes BAxUS that leverages a novel family of nested random subspaces to adapt the space it optimizes over to the problem. This ensures high performance while removing the risk of failure, which we assert via theoretical guarantees. A comprehensive evaluation demonstrates that BAxUS achieves better results than the state-of-the-art methods for a broad set of applications. | Accept | New active-subspaces type approach for high dimensional blackbox optimization that works with a family of nested subspaces of increasing dimensionality, with some theoretical control over the failure risk. Overall well written and complete with convincing experiments that indicate better performance than popular approaches like CMA-ES/Random-search as well as several recent methods. Please consider reviewer feedback on clarity and presentation for the final set of revisions. | train | [
"fB0Y3seZiI",
"rL_HeEaJpaA",
"wIuFLbsGbqa",
"9rz3SLX3ml9",
"UKqQERCK0dY",
"iY3OwIzRCdc",
"YEHF12XIeUEc",
"tFOL-vuJdK2",
"K6TIMmwros",
"BP_Hd0g3QpH",
"Vo2t56KB97W"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the reply. I also believe that both its relative minor difference from the strict uniform random hashing and good downstream performance make it a promising embedding. I'm good with the proposed revisions. I've updated the score.",
" Thank you for this interesting comment. We agree that it is not cle... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
3
] | [
"rL_HeEaJpaA",
"wIuFLbsGbqa",
"tFOL-vuJdK2",
"iY3OwIzRCdc",
"YEHF12XIeUEc",
"Vo2t56KB97W",
"BP_Hd0g3QpH",
"K6TIMmwros",
"nips_2022_e4Wf6112DI",
"nips_2022_e4Wf6112DI",
"nips_2022_e4Wf6112DI"
] |
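BAxUS (record above) grows its search space through a family of nested sparse embeddings. The sketch below shows only the count-sketch-style embedding (each input dimension hashed to one target coordinate with a random sign) plus a naive bin split that raises the target dimension while keeping earlier points representable; budget schedules and the trust-region inner loop are omitted, and details may differ from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
D, d = 100, 5                            # ambient and current target dimension

# Sparse embedding: input dimension i is controlled by target coordinate
# bins[i], with a random sign flip.
bins = rng.integers(0, d, D)
signs = rng.choice([-1, 1], D)

def project_up(y):
    """Map a point y in the d-dim target space into the D-dim input space."""
    return signs * y[bins]

def split_bin(b, d):
    """Grow the target space by one dimension: move half of bin b's inputs
    to a fresh coordinate. Any old point y is still reachable by copying
    y[b] into the new coordinate."""
    members = np.flatnonzero(bins == b)
    moved = rng.choice(members, size=len(members) // 2, replace=False)
    bins[moved] = d
    return d + 1

x = project_up(rng.uniform(-1.0, 1.0, d))
print("embedded point lives in dimension", x.shape[0])   # 100
d = split_bin(0, d)
print("target dimension after one split:", d)            # 6
```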
nips_2022_oMhmv3hLOF2 | Streaming Radiance Fields for 3D Video Synthesis | We present an explicit-grid based method for efficiently reconstructing streaming radiance fields for novel view synthesis of real world dynamic scenes. Instead of training a single model that combines all the frames, we formulate the dynamic modeling problem with an incremental learning paradigm in which per-frame model difference is trained to complement the adaption of a base model on the current frame. By exploiting the simple yet effective tuning strategy with narrow bands, the proposed method realizes a feasible framework for handling video sequences on-the-fly with high training efficiency. The storage overhead induced by using explicit grid representations can be significantly reduced through the use of model difference based compression. We also introduce an efficient strategy to further accelerate model optimization for each frame. Experiments on challenging video sequences demonstrate that our approach is capable of achieving a training speed of 15 seconds per-frame with competitive rendering quality, which attains $1000 \times$ speedup over the state-of-the-art implicit methods. | Accept | All reviewers agree that the paper should be accepted, despite some flaws that can be addressed in future work | val | [
"g_BOIOuKPLU",
"_-UpumJWBmz",
"2TQtCqwjnjU",
"rB4h8i9-EB9",
"9HCLwPDoeEs",
"snYJBo6yqNM"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the valuable suggestions and address the reviewer's concerns as follows.\n\n**Q1 The pilot model guidance framework requires better exposition and motivation.**\n\nWe apply a streaming framework. For each incoming frame, we first downsample the grid model and apply the standard training ... | [
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
4,
4,
3
] | [
"snYJBo6yqNM",
"9HCLwPDoeEs",
"rB4h8i9-EB9",
"nips_2022_oMhmv3hLOF2",
"nips_2022_oMhmv3hLOF2",
"nips_2022_oMhmv3hLOF2"
] |
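The streaming pipeline in the record above stores per-frame model differences rather than full grids. A minimal numpy sketch of that idea: keep the base grid and, for each new frame, keep only the thresholded, quantized residual (a narrow band of changed voxels). The grid size, threshold, and quantization step are arbitrary stand-ins, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)
base = rng.standard_normal((32, 32, 32)).astype(np.float32)   # frame-0 grid

def compress_residual(prev, curr, thresh=0.05, step=0.01):
    """Keep only significant voxel changes, quantized to int16."""
    diff = curr - prev
    idx = np.flatnonzero(np.abs(diff) > thresh)               # narrow band
    return idx, np.round(diff.ravel()[idx] / step).astype(np.int16)

def apply_residual(prev, idx, q, step=0.01):
    out = prev.copy().ravel()
    out[idx] += q.astype(np.float32) * step
    return out.reshape(prev.shape)

# Simulate a new frame that changes only in a small region.
curr = base.copy()
curr[10:14, 10:14, 10:14] += 0.5
idx, q = compress_residual(base, curr)
recon = apply_residual(base, idx, q)

ratio = (idx.size * (4 + 2)) / (base.size * 4)   # stored bytes vs dense grid
print(f"stored {idx.size} voxels, compression ratio {ratio:.4f}")
print("max reconstruction error:", float(np.abs(recon - curr).max()))
```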
nips_2022_ePgJfxYxl7m | Universal approximation and model compression for radial neural networks | We introduce a class of fully-connected neural networks whose activation functions, rather than being pointwise, rescale feature vectors by a function depending only on their norm. We call such networks radial neural networks, extending previous work on rotation equivariant networks that considers rescaling activations in less generality. We prove universal approximation theorems for radial neural networks, including in the more difficult cases of bounded widths and unbounded domains. Our proof techniques are novel, distinct from those in the pointwise case. Additionally, radial neural networks exhibit a rich group of orthogonal change-of-basis symmetries on the vector space of trainable parameters. Factoring out these symmetries leads to a practical lossless model compression algorithm. Optimization of the compressed model by gradient descent is equivalent to projected gradient descent for the full model. | Reject | This paper is very strong in some regards -- it gives a clear technical setup, a sophisticated but accessible analysis (impressively so), and solid coverage of related work. At the same time, I believe that it is missing key pieces -- ones that would naturally concern a reader in the NeurIPS community. The paper mainly claims a theoretical advancement, and while the path to establishing it is interesting, the end result is not especially impactful in theory, partly because it is not especially connected to other work (whether theoretical or practical).
On the side of strengths, the paper is very clear and well written, including the technical walkthough. I appreciate the intuitive support that the authors work hard to ensure (e.g. Figure 3), and the grounding in constructed examples (e.g. Figure 4). Beyond merely being useful for following, it was altogether enjoyable to read. The paper is also very thorough and complete in its presentation. There are details on every matter in the appendix.
What's missing roughly comes down to (a) grounding/motivation and (b) effect of the end result. Naturally (a) could strengthen (b), but to comment on these individually:
** Motivation
The neural net architecture studied in this paper is also first introduced in this paper. Why this architecture? It bears some relation to another architecture, namely RBFNs, but the authors stress the novelty of the definition. (For instance, they highlight that "this specific type of RBFN has not previously appeared in the literature.") In turn, my understanding is that the results do not directly bear on RBFNs of prior interest.
There is separately a clear opportunity to motivate radial neural nets by experiment, as highlighted by two reviewers (g15c and nux4). However, the authors contend that "comprehensive empirical studies are ... beyond the scope of this work." Short of a comprehensive study, the paper does not investigate this empirically at all (the only experiments learn the scalar function exp(-x^2)), so this key question remains without even partial evidence.
** Impact of result
Say we assume the architecture is motivated, and consider the theoretical contributions. The end results are (a) a universality theorem and (b) a proof of equivalence between GD in a compressed model and PGD in an uncompressed one.
Regarding (a), I appreciate that proving universality of a strict subclass of networks is (as the author response puts it) "generally harder than proving it for the entire class". However, difficulty is not an indication of impact in this case, and many types of networks are universal in some way. The bounded-width aspect makes (a) more technically substantial, and I understand that Reviewer RXP7 finds the result (and techniques) valuable in the context of approximation theory. However, I would not recommend this for NeurIPS on these grounds alone.
Regarding (b), compressibility is an appealing property, but again the impact reduces back to the question of whether the (full) model is useful (theoretically or practically). This is again where even the simplest experiment could go a long way. Better yet, a positive observation here seems like it could have really meaningful consequences!
The reviewers did not come to a consensus on ratings, but there was also no clear argument among them for acceptance. Many of the points from reviews and discussion -- both approvals and concerns -- are reflected above. Other concerns initially raised in reviews were completely and thoroughly addressed by the authors in their response. (For example, reviewer g15c initially questioned the relation to RBFNs, and the authors' reply was comprehensive.)
As a side note, the paper's font deviates from the conference format. This did not factor in my evaluation at all, but as a general tip I would avoid these sorts of modifications. | test | [
"LjfLJE-olq",
"dPS7zGlLu0z",
"m9FNS4NB2Wo",
"BlXerZ0Ta3J",
"W62P7ci8LVX",
"MzAKWu3sxH5",
"PSBYzm5siRB",
"ln3bStd9I2"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi,\n\nI am also working on approximation theory, but have never heard of an ideas that there is a state-of-the-art on expressivity.\nCould you clarify what kind of state did you expect?\n",
" We appreciate the reviewer's careful reading of our paper and comments. While the reviewer is correct to detect a conne... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"MzAKWu3sxH5",
"ln3bStd9I2",
"MzAKWu3sxH5",
"PSBYzm5siRB",
"nips_2022_ePgJfxYxl7m",
"nips_2022_ePgJfxYxl7m",
"nips_2022_ePgJfxYxl7m",
"nips_2022_ePgJfxYxl7m"
] |
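A radial layer, as defined in the record above, rescales the whole feature vector by a function of its norm instead of applying a pointwise nonlinearity. A small numpy forward pass; the Gaussian-type rescaling h(r) = exp(-r^2) and the layer widths are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def radial_activation(z):
    """Rescale each row z by h(||z||), with h(r) = exp(-r**2) as an example."""
    r = np.linalg.norm(z, axis=-1, keepdims=True)
    return z * np.exp(-r ** 2)

def radial_mlp(x, weights):
    """Fully-connected layers whose nonlinearity acts on the norm only."""
    h = x
    for i, W in enumerate(weights):
        h = h @ W
        if i < len(weights) - 1:         # no activation after the last layer
            h = radial_activation(h)
    return h

widths = [4, 16, 16, 1]
weights = [rng.standard_normal((a, b)) / np.sqrt(a)
           for a, b in zip(widths[:-1], widths[1:])]
x = rng.standard_normal((8, 4))          # batch of 8 inputs
print("output shape:", radial_mlp(x, weights).shape)   # (8, 1)
```

Because h depends only on the norm, the activation commutes with orthogonal changes of basis, which is exactly the symmetry that the paper's compression argument factors out.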
nips_2022_-t9FUWW5f3u | MOVE: Unsupervised Movable Object Segmentation and Detection | We introduce MOVE, a novel method to segment objects without any form of supervision. MOVE exploits the fact that foreground objects can be shifted locally relative to their initial position and result in realistic (undistorted) new images. This property allows us to train a segmentation model on a dataset of images without annotation and to achieve state of the art (SotA) performance on several evaluation datasets for unsupervised salient object detection and segmentation. In unsupervised single object discovery, MOVE gives an average CorLoc improvement of 7.2% over the SotA, and in unsupervised class-agnostic object detection it gives a relative AP improvement of 53% on average. Our approach is built on top of self-supervised features (e.g. from DINO or MAE), an inpainting network (based on the Masked AutoEncoder) and adversarial training. | Accept | This paper received mixed scores, with two reviewers recommending acceptance (Strong Accept) and one rejection. The paper was thoroughly discussed by the reviewers and the authors, but the reviewers failed to reach a consensus. Ultimately, with the authors' feedback having addressed most of the reviewers' concerns, the main point of disagreement that remains is the fact that the proposed method cannot be used to train or fine-tune the backbone parameters. While tM6J sees this as evidence that the method is not sound, 9LXw and tSAq both see sufficient merit in the approach to consider this only as a minor drawback that will eventually be addressed in the future. Considering that the method nonetheless learns some parameters and produces convincing results, the AC agrees with 9LXw and tSAq that one paper does not necessarily need to address all problems. We nonetheless strongly encourage the authors to make this clear in the final version of the paper. | train | [
"juhgGdzsydy",
"KeRBT3epMcA",
"jfv2A1V8yI",
"Fr9e88w8ECM",
"z3lc4PZBx3i",
"8l3vGbuFFgP",
"BKT3Jk7yBE9",
"yq6RoG1GG7V",
"3JR1f3_rJuI",
"zUKJG4IgfUB",
"NvlHI_D8uft",
"O5f9HP3imeg",
"u9dj4F0cI41",
"9zwlLmOTrt"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Of course, we can provide numbers for the experiment with MAE encoder features. Below we present a comparison with DINO features for different datasets:\n| Dataset | DINO features | MAE features |\n| --- |--- |--- |\n| ECSSD | 0.809 ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
4
] | [
"u9dj4F0cI41",
"jfv2A1V8yI",
"Fr9e88w8ECM",
"3JR1f3_rJuI",
"8l3vGbuFFgP",
"BKT3Jk7yBE9",
"yq6RoG1GG7V",
"3JR1f3_rJuI",
"u9dj4F0cI41",
"9zwlLmOTrt",
"O5f9HP3imeg",
"nips_2022_-t9FUWW5f3u",
"nips_2022_-t9FUWW5f3u",
"nips_2022_-t9FUWW5f3u"
] |
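MOVE's training signal (record above) comes from shifting a predicted foreground and compositing it at a new location; realistic composites indicate a good mask. A tiny numpy composite, without the inpainting network or the adversarial discriminator, using the original image as a crude background fill:

```python
import numpy as np

def shift_composite(img, mask, dx, dy):
    """Paste the masked foreground of img, shifted by (dy, dx) pixels, back
    onto the image. A real pipeline would inpaint the uncovered region;
    here the original image crudely serves as the background."""
    m = np.roll(mask, shift=(dy, dx), axis=(0, 1))[..., None]
    fg = np.roll(img * mask[..., None], shift=(dy, dx), axis=(0, 1))
    return fg + img * (1.0 - m)

rng = np.random.default_rng(6)
img = rng.uniform(0.0, 1.0, (64, 64, 3))
mask = np.zeros((64, 64))
mask[20:40, 20:40] = 1.0                 # toy square "object" mask

out = shift_composite(img, mask, dx=10, dy=-5)
print("composite shape:", out.shape, "| foreground pixels:", int(mask.sum()))
```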
nips_2022_lDohSFOHr0 | Robust Semi-Supervised Learning when Not All Classes have Labels | Semi-supervised learning (SSL) provides a powerful framework for leveraging unlabeled data. Existing SSL typically requires that all classes have labels. However, in many real-world applications, there may exist some classes that are difficult to label or newly occurring classes that cannot be labeled in time, resulting in unseen classes appearing in the unlabeled data. Unseen classes will be misclassified as seen classes, causing poor classification performance. The performance on seen classes is also harmed by the existence of unseen classes. This limits the practical and wider application of SSL. To address this problem, this paper proposes a new SSL approach that can classify not only seen classes but also unseen classes. Our approach consists of two modules: unseen class classification and learning pace synchronization. Specifically, we first enable the SSL methods to classify unseen classes by exploiting pairwise similarity between examples and then synchronize the learning pace between seen and unseen classes by proposing an adaptive threshold with distribution alignment. Extensive empirical results show our approach achieves significant performance improvements on both seen and unseen classes compared with previous studies. | Accept | This paper presents a method for discovering novel classes in the test data, while not deteriorating in performance on already known (seen) classes. The problem setting is similar to Novel Class Discovery (NCD) with the additional requirement that the performance on seen classes should not suffer.
The author response was discussed. In general, the paper received positive reviews. However, one of the reviewers had some concerns about the specific adaptive threshold method proposed in the work as compared to other existing adaptive threshold methods (in particular, what was the motivation behind using a new method). This aspect should be clarified in the paper.
In addition, I would like to point out that NCD with no forgetting on the seen classes has been proposed in other recent works as well, such as
Novel Class Discovery without Forgetting: https://arxiv.org/abs/2207.10659
This work should be discussed because it solves a very similar problem.
Regardless of these concerns (which should be addressed in the final version), the paper has received largely positive reviews. Therefore I vote for acceptance. | train | [
"pFsFeJ2tnb",
"2rwIeE7qUsv",
"ThRM2PWgLN",
"QWsXwFTn3gW",
"8UMMo_j7_2C",
"A36Q9zxr6O2",
"nOc2kFTS5rA",
"SYvbQFFAoUB",
"tGapPI2U_r6",
"aIp2wU--O1Z",
"jOqR2xW1PQl"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\nPlease go through the rebuttal (if you have not) and acknowledge that you have done so. Thanks!\n\nAC",
" Thanks for your response. The difference between our paper and FlexMatch lies in the following aspects:\n1) We consider a more challenging and realistic SSL scenario, i.e., the unlabeled... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"nips_2022_lDohSFOHr0",
"ThRM2PWgLN",
"nOc2kFTS5rA",
"jOqR2xW1PQl",
"aIp2wU--O1Z",
"tGapPI2U_r6",
"SYvbQFFAoUB",
"nips_2022_lDohSFOHr0",
"nips_2022_lDohSFOHr0",
"nips_2022_lDohSFOHr0",
"nips_2022_lDohSFOHr0"
] |
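The record above synchronizes learning paces via an adaptive threshold with distribution alignment. Below is a small numpy sketch of those two ingredients applied to fake softmax outputs; the base threshold, the uniform target prior, and the per-class confidence heuristic are placeholder choices, not necessarily the paper's exact rules.

```python
import numpy as np

rng = np.random.default_rng(7)
num_classes, batch = 6, 256
probs = rng.dirichlet(np.ones(num_classes) * 0.3, size=batch)  # fake softmax

# Distribution alignment: rescale predictions toward a target prior
# (uniform here), then renormalize per example.
target = np.full(num_classes, 1.0 / num_classes)
aligned = probs * (target / probs.mean(axis=0))
aligned /= aligned.sum(axis=1, keepdims=True)

# Adaptive threshold: classes on which the model is less confident overall
# get a proportionally lower bar, so slow classes still get pseudo-labels.
base_tau = 0.95
conf = np.array([aligned[aligned.argmax(1) == c].max(1).mean()
                 if (aligned.argmax(1) == c).any() else 0.0
                 for c in range(num_classes)])
tau = base_tau * conf / conf.max()

pred = aligned.argmax(1)
keep = aligned.max(1) >= tau[pred]
print(f"pseudo-labeled {int(keep.sum())} / {batch} unlabeled examples")
```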
nips_2022_dfOBSd3tF9p | An Error Analysis of Deep Density-Ratio Estimation with Bregman Divergence | We establish non-asymptotic error bounds for a nonparametric density-ratio estimator using deep neural networks with the Bregman divergence. We also show that the deep density-ratio estimator can mitigate the curse of dimensionality when the data is supported on an approximate low-dimensional manifold. Our error bounds are optimal in the minimax sense and the pre-factors in our error bounds depend on the dimensionality of the data polynomially. We apply our results to investigate the convergence properties of the telescoping density-ratio estimator (Rhodes et al., 2020) and provide sufficient conditions under which it has a smaller upper error bound than a single-ratio estimator. | Reject | This paper establishes non-asymptotic error bounds for nonparametric density-ratio estimators using deep neural networks.
According to the reviews this is a borderline paper and some changes will be needed before publication. In particular, some reviewers found the paper to be too dense and not easy to follow. The paper lacks intuitions, detailed explanations, and numerical illustrations.
| train | [
"KBpPf1HpQJ",
"pfHcXAlUcGQ",
"KMcGe20CU_",
"n9s3tyh2Zk9",
"a6n_UOQRNv0",
"eJyhOnJktp",
"Ru-91Vp46aN",
"mf4oVr4yEX2",
"5huM1M7oroG8",
"EMCwz5V1BDSD",
"e1_FwVjLs8",
"famGARna5cU",
"jyRkqxoiNze",
"BsKHYMTRVq",
"w-468UrfWF",
"wLlaW9Jr3AI",
"rVUMavJK-3Q",
"BVs1RVbBoe1",
"rYwrvNnvTAN"
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thank you so much for your positive feedback to our response and revision.\nWe also appreciate your additional comments and suggestions. We will made changes accordingly in the next revision. \n\nL59-L62: we have now changed to $E_{p^*}\\Delta_{\\phi}=0$ if and only if $R=R^*$ almost everywhere with respect to th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
4
] | [
"pfHcXAlUcGQ",
"BsKHYMTRVq",
"n9s3tyh2Zk9",
"w-468UrfWF",
"Ru-91Vp46aN",
"w-468UrfWF",
"5huM1M7oroG8",
"EMCwz5V1BDSD",
"e1_FwVjLs8",
"famGARna5cU",
"jyRkqxoiNze",
"rYwrvNnvTAN",
"BVs1RVbBoe1",
"rVUMavJK-3Q",
"wLlaW9Jr3AI",
"nips_2022_dfOBSd3tF9p",
"nips_2022_dfOBSd3tF9p",
"nips_202... |
nips_2022_2xfJ26BuFP | Near-Optimal Collaborative Learning in Bandits | This paper introduces a general multi-agent bandit model in which each agent is facing a finite set of arms and may communicate with other agents through a central controller in order to identify (in pure exploration) or play (in regret minimization) its optimal arm. The twist is that the optimal arm for each agent is the arm with the largest expected mixed reward, where the mixed reward of an arm is a weighted sum of the rewards of this arm for all agents. This makes communication between agents often necessary. This general setting allows us to recover and extend several recent models for collaborative bandit learning, including the recently proposed federated learning with personalization [Shi et al., 2021]. In this paper, we provide new lower bounds on the sample complexity of pure exploration and on the regret. We then propose a near-optimal algorithm for pure exploration. This algorithm is based on phased elimination with two novel ingredients: a data-dependent sampling scheme within each phase, aimed at matching a relaxation of the lower bound. | Accept | The paper introduces a new formulation for a multi-player MAB motivated by federated learning. The reviews are all positive and agree that the paper is a significant contribution. | train | [
"092nUgbfZ5X",
"dxGRVuocYXE",
"Ogyt58Pr_QW",
"8v_lC5yJ6OK",
"GlmCRoNZy8W",
"nSD4AjXXith",
"HBNQ9NIh8L_",
"BFYbJFwjU3B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your detailed response to my comments.\nAll my concerns are resolved.",
" >Can you give a setting that is faithfully captured by the proposed model that was not already captured by a previous model?\n\nAside from the idealized example of clinical trials discussed above, in which we agree that finding... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"8v_lC5yJ6OK",
"Ogyt58Pr_QW",
"BFYbJFwjU3B",
"HBNQ9NIh8L_",
"nSD4AjXXith",
"nips_2022_2xfJ26BuFP",
"nips_2022_2xfJ26BuFP",
"nips_2022_2xfJ26BuFP"
] |
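In the model of the record above, agent m values arm k through a weighted mix of all agents' mean rewards for that arm. A few numpy lines computing the mixed empirical means and one confidence-based elimination round; all quantities are simulated, and the confidence radius is only a generic Hoeffding-style placeholder, not the paper's bound.

```python
import numpy as np

rng = np.random.default_rng(10)
M, K, t = 4, 5, 200                      # agents, arms, pulls per (agent, arm)

mu = rng.uniform(0, 1, (M, K))           # true per-agent mean rewards
W = rng.dirichlet(np.ones(M), size=M)    # row m: agent m's mixing weights

# Empirical means after t pulls of every (agent, arm) pair.
mu_hat = mu + rng.normal(0, 1 / np.sqrt(t), (M, K))
mixed = W @ mu_hat                       # (M, K): mixed means per agent

# One phased-elimination round for agent 0: drop arms whose upper bound
# falls below the best arm's lower bound.
radius = np.sqrt(np.log(2 * K * t) / (2 * t)) * np.abs(W[0]).sum()
active = mixed[0] + radius >= (mixed[0] - radius).max()
print("agent 0 keeps arms:", np.flatnonzero(active))
print("true best arm for agent 0:", int(np.argmax(W[0] @ mu)))
```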
nips_2022_pfI7u0eJAIr | On Embeddings for Numerical Features in Tabular Deep Learning | Recently, Transformer-like deep architectures have shown strong performance on tabular data problems. Unlike traditional models, e.g., MLP, these architectures map scalar values of numerical features to high-dimensional embeddings before mixing them in the main backbone. In this work, we argue that embeddings for numerical features are an underexplored degree of freedom in tabular DL, which allows constructing more powerful DL models and competing with gradient boosted decision trees (GBDT) on some GBDT-friendly benchmarks (that is, where GBDT outperforms conventional DL models). We start by describing two conceptually different approaches to building embedding modules: the first one is based on a piecewise linear encoding of scalar values, and the second one utilizes periodic activations. Then, we empirically demonstrate that these two approaches can lead to significant performance boosts compared to the embeddings based on conventional blocks such as linear layers and ReLU activations. Importantly, we also show that embedding numerical features is beneficial for many backbones, not only for Transformers. Specifically, after proper embeddings, simple MLP-like models can perform on par with the attention-based architectures. Overall, we highlight embeddings for numerical features as an important design aspect with good potential for further improvements in tabular DL. The source code is available at https://github.com/Yura52/tabular-dl-num-embeddings | Accept | The authors propose and study the use of embedding scheme to apply deep learning to tabular problems. According to reviewers HiMf and 5oG4 and reading the submission, the method is simple and clearly explained. The experiments are comprehensive and demonstrates empirical improvements on small scale datasets. Moreover, discussions with reviewers have allowed the authors to provide additional relevant experiments providing comparisons with other methods.
I recommend this paper for acceptance. | train | [
"zq21Ps_6RC7X",
"H8tMdcuJnpR6",
"gKGU43QahAu",
"hESXiYxa9Jk",
"BQQBn4PK97-",
"DL4JaPUM0V0",
"xPc2XILW9Ab",
"sF2SJzJYo",
"5m2HyuvPTP3",
"hXZPSvbfO5y",
"03ByRxbdWh",
"61rhcDMt8Ra",
"aqjIpGmtm8",
"gHTVXxhOWfB"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers,\n\nWe are open for further feedback and discussion.\n\nAuthors\n",
" Dear Reviewer 5oG4,\n\nDoes our reply below (\"R3 answer\") properly answer your questions? If you have more comments and feedback, we will be glad to continue the discussion.",
" We thank the reviewer for their detailed comm... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"nips_2022_pfI7u0eJAIr",
"aqjIpGmtm8",
"gHTVXxhOWfB",
"gHTVXxhOWfB",
"aqjIpGmtm8",
"xPc2XILW9Ab",
"61rhcDMt8Ra",
"03ByRxbdWh",
"03ByRxbdWh",
"nips_2022_pfI7u0eJAIr",
"nips_2022_pfI7u0eJAIr",
"nips_2022_pfI7u0eJAIr",
"nips_2022_pfI7u0eJAIr",
"nips_2022_pfI7u0eJAIr"
] |
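The periodic embedding from the record above maps each scalar feature x to sine/cosine features with trainable frequencies, followed by a linear layer and ReLU. A minimal numpy forward pass; the initialization scale and the layer sizes are the illustrative part, not the paper's tuned values.

```python
import numpy as np

rng = np.random.default_rng(9)
n_features, k, d_emb = 6, 8, 16          # k frequencies per numeric feature

# Trainable parameters (here just randomly initialized).
c = rng.normal(0.0, 1.0, (n_features, k))             # per-feature frequencies
W = rng.standard_normal((2 * k, d_emb)) / np.sqrt(2 * k)
b = np.zeros(d_emb)

def periodic_embed(x):
    """x: (batch, n_features) -> (batch, n_features, d_emb)."""
    v = 2.0 * np.pi * c[None] * x[..., None]          # (batch, features, k)
    per = np.concatenate([np.sin(v), np.cos(v)], axis=-1)
    return np.maximum(per @ W + b, 0.0)               # ReLU(linear(periodic))

x = rng.standard_normal((32, n_features))
print("embedding shape:", periodic_embed(x).shape)    # (32, 6, 16)
```

The resulting per-feature embeddings can then be fed to any backbone, which is the paper's point that the embedding module, not the backbone, is the key degree of freedom.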
nips_2022_Vj-jYs47cx | Decentralized Gossip-Based Stochastic Bilevel Optimization over Communication Networks | Bilevel optimization has gained growing interest, with numerous applications found in meta learning, minimax games, reinforcement learning, and nested composition optimization.
This paper studies the problem of distributed bilevel optimization over a network where agents can only communicate with neighbors, including examples from multi-task, multi-agent learning and federated learning.
In this paper, we propose a gossip-based distributed bilevel learning algorithm that allows networked agents to solve both the inner and outer optimization problems in a single timescale and share information via network propagation. We show that our algorithm enjoys the $\mathcal{O}(\frac{1}{K \epsilon^2})$ per-agent sample complexity for general nonconvex bilevel optimization and $\mathcal{O}(\frac{1}{K \epsilon})$ for strongly convex objective, achieving a speedup that scales linearly with the network size. The sample complexities are optimal in both $\epsilon$ and $K$.
We test our algorithm on the examples of hyperparameter tuning and decentralized reinforcement learning. Simulated experiments confirm that our algorithm achieves state-of-the-art training efficiency and test accuracy. | Accept | This paper proposes a fully decentralized algorithm for bilevel optimization. Although the techniques are a combination of existing ones from the bilevel literature and the decentralized optimization literature, the setting considered is considerably sophisticated (i.e., both levels are distributed). The algorithm is single-timescale, and the rates are good for both nonconvex and convex settings. The reviewers all appreciate the contribution of this work. Therefore, I recommend acceptance of the paper.
"2Hd0EsA5rqb",
"reYxVKq3JJe",
"w-3MWrgmC2l",
"DV9fmeHgEH",
"quSGZWzGHj",
"inWxvtpe3tV",
"PLka48VxwwG",
"8xzvpABzCqm",
"e5lVbo4LKS8",
"VPXp8nWiNN4",
"UMn5F2ACyqi"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response and detailed explanation of my questions.\n\nNow I see your more contributions to the optimization area.\n\nI will raise the score to 7.",
" **Question.** $\\epsilon$-stationary point\n\\\n**Response.** \nDue to the non-convexity, our algorithm is not guaranteed to generate a sequence t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"reYxVKq3JJe",
"DV9fmeHgEH",
"nips_2022_Vj-jYs47cx",
"UMn5F2ACyqi",
"VPXp8nWiNN4",
"e5lVbo4LKS8",
"8xzvpABzCqm",
"nips_2022_Vj-jYs47cx",
"nips_2022_Vj-jYs47cx",
"nips_2022_Vj-jYs47cx",
"nips_2022_Vj-jYs47cx"
] |
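The communication primitive behind the record above is a gossip step: each agent averages its iterate with its neighbors through a doubly stochastic mixing matrix. A minimal numpy sketch on a 5-agent ring (mixing only, with no gradient terms), showing the consensus error shrinking geometrically:

```python
import numpy as np

n, d = 5, 3
# Doubly stochastic mixing matrix for a ring: self 1/2, each neighbor 1/4.
Wmix = np.zeros((n, n))
for i in range(n):
    Wmix[i, i] = 0.5
    Wmix[i, (i - 1) % n] = 0.25
    Wmix[i, (i + 1) % n] = 0.25

rng = np.random.default_rng(11)
X = rng.standard_normal((n, d))          # row i: agent i's local iterate

for _ in range(25):                      # pure gossip propagation
    X = Wmix @ X
    err = np.linalg.norm(X - X.mean(axis=0), axis=1).max()
print(f"consensus error after 25 gossip rounds: {err:.2e}")
```

In the actual algorithm each round would interleave such a mixing step with local stochastic (hyper)gradient updates for the inner and outer problems.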
nips_2022_yipUuqxveCy | Offline Multi-Agent Reinforcement Learning with Knowledge Distillation | We introduce an offline multi-agent reinforcement learning (offline MARL) framework that utilizes previously collected data without additional online data collection. Our method reformulates offline MARL as a sequence modeling problem and thus builds on top of the simplicity and scalability of the Transformer architecture. In the fashion of centralized training and decentralized execution, we propose to first train a teacher policy as if the MARL dataset were generated by a single agent. After the teacher policy has identified and recombined the "good" behavior in the dataset, we create separate student policies and distill not only the teacher policy's features but also its structural relations among different agents' features to the student policies. Despite its simplicity, the proposed method outperforms state-of-the-art model-free offline MARL baselines while being more robust to demonstration quality on several environments. | Accept | This paper deals with offline multi-agent RL tasks. Based on the decision transformer, the authors propose to first train a centralized decision transformer over the offline data to capture the agent interaction patterns and then distill the knowledge from the centralized decision transformer to the individual agents. The idea is clear and the experiments look comprehensive. As mentioned by the reviewers, the performance of the proposed method looks just comparable to MADT, which is a very simple parameter-sharing architecture.
| train | [
"cJKewuFwGcq",
"jOtM9rVCYnq",
"B_K0jvtDT6R",
"Ihi_g9ZdbWK",
"sxMPDKarhBK",
"nrUjow3tAX",
"4hZoA-e9QgS"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for addressing all of my queries. After reading the rebuttal from the authors, I would like to raise my score to a weak accept, as I think that the proposed method is well-motivated with strong empirical gains.",
" We thank reviewer 3 for the detailed comments and helpful suggestions. The ty... | [
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"jOtM9rVCYnq",
"4hZoA-e9QgS",
"nrUjow3tAX",
"sxMPDKarhBK",
"nips_2022_yipUuqxveCy",
"nips_2022_yipUuqxveCy",
"nips_2022_yipUuqxveCy"
] |
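The distillation in the record above transfers not just per-agent features but the relations among agents' features. A short numpy sketch of such a relation-matrix distillation loss on dummy features; the cosine-similarity relation and the MSE penalty are common choices assumed here, not necessarily the paper's exact ones.

```python
import numpy as np

rng = np.random.default_rng(16)
n_agents, d = 4, 32
teacher = rng.standard_normal((n_agents, d))   # dummy per-agent features
student = rng.standard_normal((n_agents, d))

def relation_matrix(F):
    """Pairwise cosine similarities among the agents' feature vectors."""
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    return Fn @ Fn.T

feat_loss = np.mean((teacher - student) ** 2)            # feature matching
rel_loss = np.mean((relation_matrix(teacher)             # structural matching
                    - relation_matrix(student)) ** 2)
print(f"feature KD loss {feat_loss:.3f}, relation KD loss {rel_loss:.3f}")
```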
nips_2022_muvlhVKvd4 | A Unified Convergence Theorem for Stochastic Optimization Methods | In this work, we provide a fundamental unified convergence theorem used for deriving expected and almost sure convergence results for a series of stochastic optimization methods. Our unified theorem only requires to verify several representative conditions and is not tailored to any specific algorithm. As a direct application, we recover expected and almost sure convergence results of the stochastic gradient method (SGD) and random reshuffling (RR) under more general settings. Moreover, we establish new expected and almost sure convergence results for the stochastic proximal gradient method (prox-SGD) and stochastic model-based methods for nonsmooth nonconvex optimization problems. These applications reveal that our unified theorem provides a plugin-type convergence analysis and strong convergence guarantees for a wide class of stochastic optimization methods. | Accept | The authors provide a blanket convergence analysis for several stochastic optimization methods. The techniques are interesting and will be useful.
The authors may want to be a bit more careful about the details of some of their convergence results when they make comparisons. For instance, the main difference between [3] and [26] is in the noise assumptions in [26], which allow the use of more aggressive step-size policies. Otherwise, the difference in assumptions that the paper alludes to is reflected in the fact that [26] obtains a stronger convergence result (to a component of critical points), whereas [3] leaves open the possibility that the process escapes to infinity (the assumptions in [26] rule out this behavior). The authors also miss the following recent work, which provides a tighter, general characterization:
Y.-P. Hsieh, P. Mertikopoulos, and V. Cevher. The limits of min-max optimization algorithms: Convergence to spurious non-critical sets. In ICML '21: Proceedings of the 38th International Conference on Machine Learning, 2021.
| val | [
"_84cBhjpuM0",
"aVs6IpN7Qr8",
"tQKc2k4t4HY",
"IeuI_7Zb9Ru",
"QrN5cHGCDRv",
"AyYpuHEusu",
"a9VeryFUch",
"oermhtvOB7",
"Ki-ddrH_UsQ",
"SnQCrsmlA7r",
"dUlKHUb5fzQ",
"2faY6PTmcEI",
"XROcuKkLKl",
"om9KluZeEm",
"erHJYkmeRox",
"Fur4lT2klQm",
"Bel-ZtSDYr",
"mT4RMS7YFz_",
"DbqWoIvnsCx",
... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your feedback, positive evaluation, and support! We will include a more detailed and clarifying discussion of the overall application scope of the unified convergence theorem in the main part of the updated manuscript. We believe that such a discussion will be very helpful and we plan to i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"aVs6IpN7Qr8",
"tQKc2k4t4HY",
"Bel-ZtSDYr",
"QrN5cHGCDRv",
"mT4RMS7YFz_",
"a9VeryFUch",
"om9KluZeEm",
"mT4RMS7YFz_",
"nips_2022_muvlhVKvd4",
"mT4RMS7YFz_",
"mT4RMS7YFz_",
"DbqWoIvnsCx",
"Bel-ZtSDYr",
"DbqWoIvnsCx",
"hf3ipPqRNSX",
"Bel-ZtSDYr",
"nips_2022_muvlhVKvd4",
"nips_2022_muv... |
nips_2022_wSVEd3Ta42m | Distributional Reinforcement Learning for Risk-Sensitive Policies | We address the problem of learning a risk-sensitive policy based on the CVaR risk measure using distributional reinforcement learning. In particular, we show that the standard action-selection strategy when applying the distributional Bellman optimality operator can result in convergence to neither the dynamic, Markovian CVaR nor the static, non-Markovian CVaR. We propose modifications to the existing algorithms that include a new distributional Bellman operator and show that the proposed strategy greatly expands the utility of distributional RL in learning and representing CVaR-optimized policies. Our proposed approach is a simple extension of standard distributional RL algorithms and can therefore take advantage of many of the recent advances in deep RL. On both synthetic and real data, we empirically show that our proposed algorithm is able to learn better CVaR-optimized policies. | Accept | This paper proposes a new action selection approach for risk-averse distributional reinforcement learning optimizing CVaR. It first shows that the action selection schemes used in existing approaches do not converge to the desired policies and subsequently shows that the fixed-point of the Bellman operator with the new action selection scheme is the desired optimal CVaR policy as long as it is stationary. It finally provides empirical results showcasing the benefits of the proposed approach.
The reviewers had mixed initial views on this paper. On the positive side, they found the paper to be well written and appreciated the new insights into the convergence of the existing action selection scheme as well as the more principled proposed scheme. On the negative side, there were concerns that (1) the paper does not actually prove convergence of the algorithm, only the existence of a fixed point, (2) that the paper does not provide sufficient discussion of the implications of the presented results, e.g. in the form of a conclusion, and (3) that a comparison to CVaR optimization approaches that are not based on distributional RL is missing.
The authors' response addressed several of these concerns, so that all reviewers now view this paper positively. Although this paper remains borderline and some concerns remain, the AC concurs with the reviewers that it has sufficient merits to be accepted, hence a recommendation for acceptance. | train | [
"StGPrg__1p",
"wKGspjCm9Ci",
"Wlad5ieUgWNi",
"_zSHlqVX5Y7",
"ekcbSu9F-f",
"a8PnaZadxo",
"JewpRpxfK9U",
"7T2NAUkLiZu",
"obbB_RLZf6j",
"g1vmJHExwiz"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your further questions!\n\n__What is exactly that this method achieves that the current methods cannot (irrespective of focussing on distributional RL)?__\n\nWe address the problem of finding a CVaR-optimal policy with distributional RL -- which belongs to the family of \"value-based\" RL.\nThe appr... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"wKGspjCm9Ci",
"ekcbSu9F-f",
"g1vmJHExwiz",
"obbB_RLZf6j",
"7T2NAUkLiZu",
"JewpRpxfK9U",
"nips_2022_wSVEd3Ta42m",
"nips_2022_wSVEd3Ta42m",
"nips_2022_wSVEd3Ta42m",
"nips_2022_wSVEd3Ta42m"
] |
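CVaR at level alpha is the mean of the worst alpha-tail of returns; with a quantile-based distributional critic it is just the average of the lowest quantile estimates. A few numpy lines, plus the greedy action choice over CVaR rather than the mean; the per-action return distributions are synthetic (action 2 has the best mean but a heavy lower tail).

```python
import numpy as np

def cvar(quantiles, alpha):
    """CVaR_alpha from an array of equally weighted quantile estimates."""
    k = max(1, int(np.ceil(alpha * len(quantiles))))
    return np.sort(quantiles)[:k].mean()

rng = np.random.default_rng(12)
n_quantiles, alpha = 32, 0.1
theta = np.stack([
    rng.normal(1.0, 0.3, n_quantiles),                 # safe, mediocre
    rng.normal(1.1, 0.3, n_quantiles),                 # safe, a bit better
    np.concatenate([rng.normal(-3.0, 0.2, 4),          # rare disaster...
                    rng.normal(2.2, 0.2, n_quantiles - 4)]),  # ...high mean
])

mean_greedy = int(np.argmax(theta.mean(axis=1)))
cvar_greedy = int(np.argmax([cvar(q, alpha) for q in theta]))
print("mean-greedy action:", mean_greedy, "| CVaR-greedy action:", cvar_greedy)
```

Mean-greedy selection picks the risky action 2, while CVaR-greedy selection avoids it; the paper's point is that how this selection interacts with the distributional Bellman operator determines which notion of CVaR (if any) the algorithm actually optimizes.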
nips_2022_vriLTB2-O0G | Pareto Set Learning for Expensive Multi-Objective Optimization | Expensive multi-objective optimization problems can be found in many real-world applications, where their objective function evaluations involve expensive computations or physical experiments. It is desirable to obtain an approximate Pareto front with a limited evaluation budget. Multi-objective Bayesian optimization (MOBO) has been widely used for finding a finite set of Pareto optimal solutions. However, it is well-known that the whole Pareto set is on a continuous manifold and can contain infinite solutions. The structural properties of the Pareto set are not well exploited in existing MOBO methods, and the finite-set approximation may not contain the most preferred solution(s) for decision-makers. This paper develops a novel learning-based method to approximate the whole Pareto set for MOBO, which generalizes the decomposition-based multi-objective optimization algorithm (MOEA/D) from finite populations to models. We design a simple and powerful acquisition search method based on the learned Pareto set, which naturally supports batch evaluation. In addition, with our proposed model, decision-makers can readily explore any trade-off area in the approximate Pareto set for flexible decision-making. This work represents the first attempt to model the Pareto set for expensive multi-objective optimization. Experimental results on different synthetic and real-world problems demonstrate the effectiveness of our proposed method. | Accept | This paper studied the problem of (batch) multi-objective Bayesian optimization (BO). It considers a novel perspective to solve problems with an infinite size pareto optimal set by finding a pareto manifold of solutions. The acquisition strategy uses Chebyshev scalarization. The key idea is to learn a mapping from preferences (i.e., scalarization parameters) to the Pareto optimal solution and use it to guide the acquisition strategy and BO process to approximate the Pareto set. Experimental results demonstrate the effectiveness of the proposed approach.
All reviewers agreed about the novel perspective from which the multi-objective BO problem was studied, but also raised some concerns and questions. The authors gave satisfactory responses to most of the review comments and revised the paper to improve it. Two reviewers strongly supported accepting the paper and two of them gave borderline accept. The authors satisfactorily addressed the main concern of one of them (i.e., that the test problems are too simple). Some of the comments from Reviewer MK65 need further work, which is acknowledged by the authors.
The overall approach is novel, advances scalarization-based multi-objective BO, produces good results, and has the potential to generate good interest in the BO community. Therefore, I recommend acceptance. | val | [
"wt_TGNeuaZ-",
"ihIBMsV4ElK",
"UDzhSnJJ5fS",
"jewi_4l5NUk",
"vFCU6qJROCS",
"oGzofXOtNl_",
"xdqkHmC9VfS",
"KU1KnYjRyVE",
"JlwJY-vwbst",
"rs8MduUv3Pp",
"apizcmH2GUlF",
"JLKjpucS5RG",
"ZHOHfekkvLS",
"pEb0tB79C-N",
"qklu94nYFs7",
"JWkpxcqkoZNI",
"7wYYLkQJK0r",
"pyy4mQM4yNqz",
"UleKD7... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for your further response and the increased score. We will further elaborate the discussion on the practicality of the approximate Pareto set in the final version (revision is now not allowed).\n\nWe agree with the reviewer that making decisions/comparisons could be difficult with many conflicting objec... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
"ihIBMsV4ElK",
"KU1KnYjRyVE",
"MV1fZT-qbRF",
"rs8MduUv3Pp",
"JlwJY-vwbst",
"apizcmH2GUlF",
"apizcmH2GUlF",
"apizcmH2GUlF",
"pyy4mQM4yNqz",
"JLKjpucS5RG",
"JWkpxcqkoZNI",
"E7WSLk2kVA-",
"MV1fZT-qbRF",
"xhO79u5oQ8T",
"xhO79u5oQ8T",
"xhO79u5oQ8T",
"kc_Rbldz-Pe",
"kc_Rbldz-Pe",
"nips... |
nips_2022_7fU8UPo875w | Tracking Functional Changes in Nonstationary Signals with Evolutionary Ensemble Bayesian Model for Robust Neural Decoding | Neural signals are typical nonstationary data where the functional mapping between neural activities and the intentions (such as the velocity of movements) can occasionally change. Existing studies mostly use a fixed neural decoder, thus suffering from unstable performance given neural functional changes. We propose a novel evolutionary ensemble framework (EvoEnsemble) to dynamically cope with changes in neural signals by evolving the decoder model accordingly. EvoEnsemble integrates evolutionary computation algorithms in a Bayesian framework where the fitness of models can be sequentially computed with their likelihoods according to the incoming data at each time slot, which enables online tracking of time-varying functions. Two strategies of evolve-at-changes and history-model-archive are designed to further improve efficiency and stability. Experiments with simulations and neural signals demonstrate that EvoEnsemble can track the changes in functions effectively, thus improving the accuracy and robustness of neural decoding. The improvement is most significant in neural signals with functional changes. | Accept | The review ratings/confidences were 5/2, 6/2, 4/4, and 5/2. Although the average rating of 5 was just above the acceptance threshold, I think that it should somehow be discounted by the lower confidence levels. Although I myself do not have expertise in the field of BCI, as for the reviewers' evaluation, I think that they basically agreed on the following points:
- The problem is well motivated.
- The proposed method was built on DyEnsemble with some empirically-motivated extensions to have decoder models dynamically evolving. One could then argue that the proposal is not groundbreaking but somehow incremental.
- The authors showed experimentally that the proposed method works well compared with a number of other existing methods.
I also noticed that the authors made a revision (adding a paragraph at the end of Section 2; it can be observed in the August 10 revision), which has improved the readability of this paper. I would thus recommend acceptance of this paper, provided that there is room for it. | train | [
"vB3ejHx2oFC",
"4sb4nXI3JLQ",
"KSIoe7B-aVX",
"zqW-uKXFWC",
"de_1-yUPxUjF",
"tatZ8qkc-tS",
"dhuwR0_6oA",
"TYvMRqlf8B1",
"75ARG_xzOVV",
"_tvNdMSkImN",
"-ZeT6ahuRGl",
"vTpdmUkSkAo",
"s60qjuQwi8",
"fRHqPz0e6eD",
"44NDgWriY1Z"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your advice! We have put these clarifications into the original manuscript as much as possible. Due to space limitations, we put the remaining parts in the appendix. For details, please refer to the Appendix D. ",
" I appreciate the author's responses. If these clarifications are included into the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
4,
2
] | [
"4sb4nXI3JLQ",
"zqW-uKXFWC",
"de_1-yUPxUjF",
"de_1-yUPxUjF",
"dhuwR0_6oA",
"dhuwR0_6oA",
"44NDgWriY1Z",
"75ARG_xzOVV",
"fRHqPz0e6eD",
"s60qjuQwi8",
"vTpdmUkSkAo",
"nips_2022_7fU8UPo875w",
"nips_2022_7fU8UPo875w",
"nips_2022_7fU8UPo875w",
"nips_2022_7fU8UPo875w"
] |
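The fitness bookkeeping in the record above is Bayesian: each candidate decoder's weight is multiplied by the likelihood it assigns to the newest data slot and then renormalized, so models matching the current neural-to-intention mapping dominate. A scalar-model numpy sketch; the Gaussian noise level and the two candidate mappings are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(14)

def gauss_lik(y, y_hat, sigma=0.3):
    return np.exp(-0.5 * ((y - y_hat) / sigma) ** 2)

# Two candidate decoders f(x) = a * x; the true gain a switches mid-stream.
models = np.array([1.0, 2.0])
weights = np.ones(2) / 2
true_a = lambda t: 1.0 if t < 50 else 2.0

for t in range(100):
    x = rng.uniform(-1.0, 1.0)
    y = true_a(t) * x + rng.normal(0.0, 0.3)
    weights *= gauss_lik(y, models * x)     # sequential Bayesian fitness update
    weights /= weights.sum()
    if t in (49, 59, 99):
        print(f"t={t}: posterior over models = {np.round(weights, 3)}")
```

The printed run shows how slowly a collapsed posterior recovers after the switch at t = 50, which is exactly the failure mode the paper's evolve-at-changes and history-model-archive strategies are designed to address.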
nips_2022_4rm6tzBjChe | Simultaneous Missing Value Imputation and Structure Learning with Groups | Learning structures between groups of variables from data with missing values is an important task in the real world, yet difficult to solve. One typical scenario is discovering the structure among topics in the education domain to identify learning pathways. Here, the observations are student performances for questions under each topic which contain missing values. However, most existing methods focus on learning structures between a few individual variables from the complete data. In this work, we propose VISL, a novel scalable structure learning approach that can simultaneously infer structures between groups of variables under missing data and perform missing value imputations with deep learning. Particularly, we propose a generative model with a structured latent space and a graph neural network-based architecture, scaling to a large number of variables. Empirically, we conduct extensive experiments on synthetic, semi-synthetic, and real-world education data sets. We show improved performances on both imputation and structure learning accuracy compared to popular and recent approaches. | Accept | In this paper, the authors propose VISL, a structure learning method that simultaneously infers structures between groups of variables under missing data and performs missing value imputations. The authors conduct extensive experiments on synthetic, semi-synthetic, and real-world education data sets and they show improved performances on both imputation and structure learning.
The reviewers overall agree that this is a strong and acceptable contribution. 7oh7 has some remaining concerns about the novelty of the proposed approach as the individual components used in the approach "are sort of well-known technics and not hard to address and implement". I nevertheless think that the proposed approach is a worthwhile and powerful combination for Missing Value Imputation. Two reviewers bhzA and 9NtP are fairly confident to accept the manuscript.
Overall, this work is an important step towards better missing value imputation in more challenging settings and I support the acceptance of the manuscript. | test | [
"TAZTTZnZ6oT",
"ppFn_Fe2jS",
"YLKvqmuyJD",
"cCc5eSXfVMR",
"iSQ59gxKMUb",
"Wkjjp7A5NTL",
"_scZ3E-KkS5",
"RLTBaPg5MXL",
"IYK7zc6cvFq",
"-ad9Cn20mFz"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors addressed some issues. \n\nHowever, the technical novelty and the contributions of this paper are still insignificant. Message passing with GNNs, and missing value imputation with structure learning are sort of well-known technics and not hard to address and implement. \n\nI will add 1 to the score. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"ppFn_Fe2jS",
"iSQ59gxKMUb",
"nips_2022_4rm6tzBjChe",
"-ad9Cn20mFz",
"IYK7zc6cvFq",
"_scZ3E-KkS5",
"RLTBaPg5MXL",
"nips_2022_4rm6tzBjChe",
"nips_2022_4rm6tzBjChe",
"nips_2022_4rm6tzBjChe"
] |
nips_2022_ofRmFwBvvXh | Approximation with CNNs in Sobolev Space: with Applications to Classification | We derive a novel approximation error bound with explicit prefactor for Sobolev-regular functions using deep convolutional neural networks (CNNs). The bound is non-asymptotic in terms of the network depth and filter lengths, in a rather flexible way. For Sobolev-regular functions which can be embedded into the H\"older space, the prefactor of our error bound depends on the ambient dimension polynomially instead of exponentially as in most existing results, which is of independent interest. We also establish a new approximation result when the target function is supported on an approximate lower-dimensional manifold. We apply our results to establish non-asymptotic excess risk bounds for classification using CNNs with convex surrogate losses, including the cross-entropy loss, the hinge loss (SVM), the logistic loss, the exponential loss and the least squares loss. We show that the classification methods with CNNs can circumvent the curse of dimensionality if input data is supported on a neighborhood of a low-dimensional manifold. | Accept | This paper provides an approximation error for Sobolev type functions by using deep CNNs. The approximation error achieves the optimal rate and has adaptivity to the low dimensionality of the support of the input data distribution. They also derive a classification error bound using the approximation error result.
This paper gives a novel and important theoretical result. Due to the CNN structure, it requires different techniques from those used for FNNs, and thus the analysis is not trivial. This is an important contribution to the literature. Thus, I recommend acceptance of this paper. | test | [
"MGn-X52Y8Q1",
"3e7rvSrPa07",
"vcXH31FXwqL",
"6qm1cKqhWba",
"CvjwJA0J5sT",
"jD1oS5V67_9c",
"_o-nR4YMsgM",
"q8jfo3BPqR8",
"-vIM6P8_Jpf",
"4kClg-Kd31d",
"Mhm3Yau5do0",
"4ZaG-l_HZzC",
"8X9CrlPCwD"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your positive feedback on our response and additional numerical experiment.\n\nWe are very grateful to you for your work reviewing our paper and really appreciate your \nhelpful comments and constructive suggestions that helped us improve and strengthen our paper.",
" I'd like to thank the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
9
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
2,
5
] | [
"3e7rvSrPa07",
"-vIM6P8_Jpf",
"_o-nR4YMsgM",
"CvjwJA0J5sT",
"q8jfo3BPqR8",
"_o-nR4YMsgM",
"4kClg-Kd31d",
"8X9CrlPCwD",
"4ZaG-l_HZzC",
"Mhm3Yau5do0",
"nips_2022_ofRmFwBvvXh",
"nips_2022_ofRmFwBvvXh",
"nips_2022_ofRmFwBvvXh"
] |
nips_2022_nSe94hrIWhb | Reduction Algorithms for Persistence Diagrams of Networks: CoralTDA and PrunIT | Topological data analysis (TDA) delivers invaluable and complementary information on the intrinsic properties of data inaccessible to conventional methods. However, high computational costs remain the primary roadblock hindering the successful application of TDA in real-world studies, particularly with machine learning on large complex networks.
Indeed, most modern networks such as citation, blockchain, and online social networks often have hundreds of thousands of vertices, making the application of existing TDA methods infeasible. We develop two new, remarkably simple but effective algorithms to compute the exact persistence diagrams of large graphs to address this major TDA limitation. First, we prove that the $(k+1)$-core of a graph $G$ suffices to compute its $k^{th}$ persistence diagram, $PD_k(G)$. Second, we introduce a pruning algorithm for graphs to compute their persistence diagrams by removing the dominated vertices. Our experiments on large networks show that our novel approach can achieve computational gains of up to 95%.
The developed framework provides the first bridge between graph theory and TDA, with applications in machine learning on large complex networks. Our implementation is available at https://github.com/cakcora/PersistentHomologyWithCoralPrunit. | Accept | This paper proposes a method of reducing the size of graphs that are commonly used as inputs for persistent homology algorithms. This addresses a fundamental scalability problem in the case of graphs, and is likely to enable further work in the area.
On the negative side, the paper is similar to the prior work on strong collapses. Also, the experimental results do not make a strong case that the proposed algorithms are actually effective in reducing the running time for computing persistent homology in dimensions higher than 0. There is only one plot on the time reduction improvement (Fig 4b), which provides results for only two datasets in homology dimension 0; this is not satisfactory given that the 0-dimensional case is already known to be very efficient.
Given the above, my recommendation is a weak accept, and I urge the authors to address these issues in their final version: better clarify the connection with the prior work, and provide experimental evidence that the algorithm improves the efficiency of persistent homology computation in dimensions higher than 0. | train | [
"jMv4FOPboO8",
"cjXmafKbbLr",
"S5hpIBO3m3",
"C-NtcZ5RNas",
"QeNMRcFPHCH",
"9RLuQFRn-MY",
"NxcNmEDRdl9",
"Z7P_hq0nFtl1",
"vaiTd4lij_",
"HKiVR3-o44R",
"3wzutVtEgup",
"3rDWvYyPP6l",
"tomzfzIFyLPv",
"GHK0FerFW8i",
"epkWT8vahTf",
"xTEE4jS1PFz"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much again for your valuable time and feedback. If you have further questions, we’d be happy to answer.\n\n",
" Thank you very much again for your valuable time and feedback. If you have further questions, we’d be happy to answer.",
" Thank you very much for your suggestion. We run the experime... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"GHK0FerFW8i",
"epkWT8vahTf",
"9RLuQFRn-MY",
"QeNMRcFPHCH",
"NxcNmEDRdl9",
"3wzutVtEgup",
"Z7P_hq0nFtl1",
"vaiTd4lij_",
"xTEE4jS1PFz",
"epkWT8vahTf",
"3rDWvYyPP6l",
"tomzfzIFyLPv",
"GHK0FerFW8i",
"nips_2022_nSe94hrIWhb",
"nips_2022_nSe94hrIWhb",
"nips_2022_nSe94hrIWhb"
] |
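The $(k+1)$-core reduction described in the CoralTDA abstract above is easy to illustrate. Below is a minimal sketch using networkx; the graph and the choice of $k$ are illustrative, and the persistence-diagram computation on the reduced graph (which the paper delegates to standard TDA tooling) is not reproduced here.

```python
# Minimal sketch of the (k+1)-core reduction from the CoralTDA record above.
# Assumes networkx is installed; computing PD_k on the reduced graph would be
# handed to a TDA library (e.g., GUDHI or giotto-tda) and is omitted.
import networkx as nx

def coral_reduce(G: nx.Graph, k: int) -> nx.Graph:
    """Return the (k+1)-core of G; by the paper's first result this
    subgraph suffices to compute the k-th persistence diagram PD_k(G)."""
    H = G.copy()
    H.remove_edges_from(nx.selfloop_edges(H))  # k_core disallows self-loops
    return nx.k_core(H, k=k + 1)

if __name__ == "__main__":
    # illustrative sparse random graph; real inputs are large networks
    G = nx.gnp_random_graph(50_000, 1e-4, seed=0)
    reduced = coral_reduce(G, k=1)  # the 2-core is enough for PD_1
    print(f"vertices: {G.number_of_nodes()} -> {reduced.number_of_nodes()}")
```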
nips_2022_G3fswMh9P8y | FedAvg with Fine Tuning: Local Updates Lead to Representation Learning | The Federated Averaging (FedAvg) algorithm, which consists of alternating between a few local stochastic gradient updates at client nodes, followed by a model averaging update at the server, is perhaps the most commonly used method in Federated Learning. Notwithstanding its simplicity, several empirical studies have illustrated that the model output by FedAvg generalizes well to new unseen tasks after a few fine-tuning steps. This surprising performance of such a simple method, however, is not fully understood from a theoretical point of view. In this paper, we formally investigate this phenomenon in the multi-task linear regression setting. We show that the reason behind the generalizability of the FedAvg output is FedAvg’s power in learning the common data representation among the clients’ tasks, by leveraging the diversity among client data distributions via multiple local updates between communication rounds. We formally establish the iteration complexity required by the clients for proving such a result in the setting where the underlying shared representation is a linear map. To the best of our knowledge, this is the first result showing that FedAvg learns an expressive representation in any setting. Moreover, we show that multiple local updates between communication rounds are necessary for representation learning, as distributed gradient methods that make only one local update between rounds provably cannot recover the ground-truth representation in the linear setting, and empirically yield neural network representations that generalize drastically worse to new clients than those learned by FedAvg trained on heterogeneous image classification datasets. | Accept | This work provides an analysis explaining why FedAvg can produce more generalizable representations than distributed SGD. Theoretical guarantees are presented for a multi-task linear regression setting, and further empirical results demonstrate the effectiveness of learning representations with image classification tasks. The theoretical analysis presented can be an important building block for the study of more complex settings in federated optimization. All reviewers recommend acceptance.
Please take the (few) suggestions by the reviewer into account, and also incorporate the explanations and clarifications provided during the rebuttal in the camera-ready version. | test | [
"UbU5UBe255U",
"6pfJc8QOn1v",
"1Y6q9YM4N13",
"EsIL6FbtSi2",
"8E1ham6TqC",
"G-_7VFsMQP3",
"QzL7bgaBdTj",
"Yc4TRwxd5WA",
"gGsHugzF0Bk"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response to mine and other reviewers' comments and for the changes and added discussion on the paper. They address my comments adequately and add to the quality of the paper. I would recommend to accept this paper and I've increased my score to reflect that.",
" Thank you very much... | [
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
3
] | [
"8E1ham6TqC",
"Yc4TRwxd5WA",
"gGsHugzF0Bk",
"QzL7bgaBdTj",
"G-_7VFsMQP3",
"nips_2022_G3fswMh9P8y",
"nips_2022_G3fswMh9P8y",
"nips_2022_G3fswMh9P8y",
"nips_2022_G3fswMh9P8y"
] |
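The FedAvg loop analyzed in the record above is simple to write down in the paper's multi-task linear regression setting. A minimal numpy sketch follows; client data, step size, and the number of local updates are illustrative, and the fine-tuning of client-specific heads on new tasks is omitted.

```python
# Minimal numpy sketch of FedAvg with multiple local SGD steps per round,
# in a multi-task linear regression setting (as studied above).
# All constants are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, n_samples = 10, 20, 50
W_true = rng.normal(size=(n_clients, d))  # one ground-truth regressor per client

def make_client(i):
    X = rng.normal(size=(n_samples, d))
    y = X @ W_true[i] + 0.1 * rng.normal(size=n_samples)
    return X, y

clients = [make_client(i) for i in range(n_clients)]
w = np.zeros(d)  # global model

for round_ in range(100):
    local_models = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):  # tau = 5 local gradient steps (the key ingredient)
            grad = X.T @ (X @ w_local - y) / n_samples
            w_local -= 0.05 * grad
        local_models.append(w_local)
    w = np.mean(local_models, axis=0)  # server averaging step
```

Setting the inner loop to a single step recovers the distributed gradient method that, per the paper, provably fails to learn the shared representation.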
nips_2022_U_YPSEyN2ls | Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs | In online learning problems, exploiting low variance plays an important role in obtaining tight performance guarantees yet is challenging because variances are often not known a priori.
Recently, considerable progress has been made by Zhang et al. (2021) where they obtain a variance-adaptive regret bound for linear bandits without knowledge of the variances and a horizon-free regret bound for linear mixture Markov decision processes (MDPs).
In this paper, we present novel analyses that improve their regret bounds significantly.
For linear bandits, we achieve $\tilde O(\min\{d\sqrt{K}, d^{1.5}\sqrt{\sum_{k=1}^K \sigma_k^2}\} + d^2)$ where $d$ is the dimension of the features, $K$ is the time horizon, and $\sigma_k^2$ is the noise variance at time step $k$, and $\tilde O$ ignores polylogarithmic dependence, which is a factor of $d^3$ improvement.
For linear mixture MDPs with the assumption of maximum cumulative reward in an episode being in $[0,1]$, we achieve a horizon-free regret bound of $\tilde O(d \sqrt{K} + d^2)$ where $d$ is the number of base models and $K$ is the number of episodes.
This is a factor of $d^{3.5}$ improvement in the leading term and $d^7$ in the lower order term.
Our analysis critically relies on a novel peeling-based regret analysis that leverages the elliptical potential “count” lemma. | Accept | This paper gives the first minimax-optimal (up to log factors) horizon-free regret bound for linear mixture MDPs and an improved variance-dependent bound for linear bandits. Furthermore, the paper develops a new peeling-based analysis that can be useful for other problems. These contributions make this a strong paper in the theoretical RL community. The AC thus recommends acceptance. | train | [
"FRun5YyJm7K",
"4HOsGljeT7a",
"Gv_mBZ3k-zd",
"QXr4uhsT7h5",
"FgFhd87p62M",
"vn1xohzT2RD",
"a_VqTtlMfPf",
"kBinYK4TXDm",
"MItoiZLpzUU"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nWe wonder if our response has addressed your concern. Particularly we believe that the weaknesses you mentioned are extremely important and are happy to have a further discussion if you have more questions on those issues.",
" Dear reviewer,\n\nWe wonder if our response has addressed your conc... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"Gv_mBZ3k-zd",
"QXr4uhsT7h5",
"kBinYK4TXDm",
"MItoiZLpzUU",
"a_VqTtlMfPf",
"nips_2022_U_YPSEyN2ls",
"nips_2022_U_YPSEyN2ls",
"nips_2022_U_YPSEyN2ls",
"nips_2022_U_YPSEyN2ls"
] |
nips_2022_wfKbtSjHA6F | Sparse Winning Tickets are Data-Efficient Image Recognizers | Improving the performance of deep networks in data-limited regimes has warranted much attention. In this work, we empirically show that “winning tickets” (small sub-networks) obtained via magnitude pruning based on the lottery ticket hypothesis, apart from being sparse, are also effective recognizers in data-limited regimes. Based on extensive experiments, we find that in low data regimes (datasets of 50-100 examples per class), sparse winning tickets substantially outperform the original dense networks. This approach, when combined with augmentations or fine-tuning from a self-supervised backbone network, shows further improvements in performance by as much as 16% (absolute) on low-sample datasets and long-tailed classification. Further, sparse winning tickets are more robust to synthetic noise and distribution shifts compared to their dense counterparts. Our analysis of winning tickets on small datasets indicates that, though sparse, the networks retain density in the initial layers and their representations are more generalizable. Code is available at https://github.com/VITA-Group/DataEfficientLTH. | Accept | The paper has received borderline and positive reviews. Overall, the reviewers find the empirical contribution of the paper to be interesting and solid enough (even though one reviewer finds the explanations given in the paper to be a bit shallow). The rebuttal was nevertheless convincing. The area chair agrees with the reviewers' assessment and follows their recommendation. | train | [
"3nXPi-Dy9Xm",
"sH413NL5cQP",
"Bcecb67qpj",
"iwFC5EnPuUL",
"Ko8xByJWZYl",
"stTSPJFvbz7",
"Z3nqMkIqGaS",
"8BrcGNDyCl",
"HMGQ1Ksuumz",
"KrPrJ-pxfB",
"cc7dDYoKkgd",
"om3VTfruO_",
"LwUL9RpB7b"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their reply.",
" We have updated our draft to clarify and include many of the changes requested by the reviewers (indicated in blue). We apologize for the delay and once again thank the reviewers for their time and feedback. We would be happy to provide any further clarific... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"HMGQ1Ksuumz",
"iwFC5EnPuUL",
"stTSPJFvbz7",
"nips_2022_wfKbtSjHA6F",
"cc7dDYoKkgd",
"cc7dDYoKkgd",
"LwUL9RpB7b",
"om3VTfruO_",
"KrPrJ-pxfB",
"nips_2022_wfKbtSjHA6F",
"nips_2022_wfKbtSjHA6F",
"nips_2022_wfKbtSjHA6F",
"nips_2022_wfKbtSjHA6F"
] |
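The winning tickets studied in the record above come from lottery-ticket style iterative magnitude pruning: train, prune the smallest-magnitude weights, rewind to the initialization, and retrain. A minimal PyTorch sketch of that mechanic follows; the model, toy data, and pruning schedule are illustrative placeholders, not the paper's exact recipe.

```python
# Minimal PyTorch sketch of iterative magnitude pruning (IMP):
# train -> prune smallest weights -> rewind to init -> retrain.
# Model, data, and schedules are illustrative placeholders.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()  # toy binary labels

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = copy.deepcopy(model.state_dict())  # weights to rewind to
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

def train(steps=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
        with torch.no_grad():  # keep pruned weights at zero
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])

for round_ in range(3):  # each round removes 20% of the remaining weights
    train()
    for n, p in model.named_parameters():
        if n in masks:
            alive = p[masks[n].bool()].abs()
            thresh = alive.quantile(0.2)
            masks[n] = (p.abs() > thresh).float() * masks[n]
    model.load_state_dict(init_state)  # rewind to the original initialization
train()  # final training of the sparse winning ticket
```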
nips_2022_ldxUm0mmhl8 | Counterfactual Temporal Point Processes | Machine learning models based on temporal point processes are the state of the art in a wide variety of applications involving discrete events in continuous time. However, these models lack the ability to answer counterfactual questions, which are increasingly relevant as these models are being used to inform targeted interventions. In this work, our goal is to fill this gap. To this end, we first develop a causal model of thinning for temporal point processes that builds upon the Gumbel-Max structural causal model. This model satisfies a desirable counterfactual monotonicity condition, which is sufficient to identify counterfactual dynamics in the process of thinning. Then, given an observed realization of a temporal point process with a given intensity function, we develop a sampling algorithm that uses the above causal model of thinning and the superposition theorem to simulate counterfactual realizations of the temporal point process under a given alternative intensity function. Simulation experiments using synthetic and real epidemiological data show that the counterfactual realizations provided by our algorithm may give valuable insights to enhance targeted interventions. | Accept | This paper proposed a simple yet sensible method to answer counterfactual questions for temporal point processes. Specifically, the authors focus on the counterfactual question of whether a historical event would have happened if the corresponding intensity had been changed. Reviewers agree that the idea is clearly presented and theoretically plausible, with interesting and important epidemiological applications. Please also pay attention to the reviewers' concerns about the paper title and the presentation of some specific parts of the paper. | val | [
"bSFrYPkPKaR",
"HWVjEY7YLN",
"xMj4rFptWfU",
"F8nhLnkkeWB",
"8AyNjgHDsL",
"WMTNsL9mnrc",
"2looUjA6_A",
"fN8xwTbP6RT",
"n5EU67awxAu",
"OYwAAPNKTZ",
"euZCsIgReT",
"DlIsCKjFEO0",
"2Ze42mD4Z5x"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for taking the time and effort to engage with us. \n\nIn our response, we did not mean to neglect the importance of unmeasured confounding in the field of causal inference and we do appreciate that you point us to recent advances in counterfactual reasoning in the presence of unmeasured confounding in... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"HWVjEY7YLN",
"n5EU67awxAu",
"nips_2022_ldxUm0mmhl8",
"WMTNsL9mnrc",
"2Ze42mD4Z5x",
"2looUjA6_A",
"DlIsCKjFEO0",
"euZCsIgReT",
"OYwAAPNKTZ",
"nips_2022_ldxUm0mmhl8",
"nips_2022_ldxUm0mmhl8",
"nips_2022_ldxUm0mmhl8",
"nips_2022_ldxUm0mmhl8"
] |
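The counterfactual sampling described in the record above can be illustrated for an unmarked thinned Poisson process using the uniform (monotone) noise coupling; this is a simplification of the paper's construction, which is built on a Gumbel-Max structural causal model together with the superposition theorem. Intensities and the horizon below are illustrative.

```python
# Illustrative numpy sketch of counterfactual sampling for a thinned Poisson
# process, in the spirit of the record above. It uses the monotone uniform
# coupling for binary thinning; the paper's full method uses a Gumbel-Max SCM.
import numpy as np

rng = np.random.default_rng(0)
T, lam_max = 10.0, 2.5
lam    = lambda t: 1.0 + np.sin(t)            # factual intensity (< lam_max)
lam_cf = lambda t: 0.5 * (1.0 + np.sin(t))    # counterfactual intensity

# Factual simulation by Lewis thinning: candidates ~ Poisson(lam_max) on [0,T];
# a candidate at time t is accepted with probability lam(t)/lam_max.
n = rng.poisson(lam_max * T)
cand = np.sort(rng.uniform(0.0, T, size=n))
u = rng.uniform(size=n)
p = lam(cand) / lam_max
accepted = u <= p
observed = cand[accepted]

# Counterfactual: resample the thinning noise from its posterior given the
# factual outcome, then replay the SAME noise under the alternative intensity.
# Note: the rejected candidates are known here only because we simulated the
# factual process; given real data, they would be drawn from a Poisson process
# with rate lam_max - lam(t), which is what the superposition argument enables.
u_post = np.empty(n)
u_post[accepted] = rng.uniform(0.0, p[accepted])      # u | accepted ~ U(0, p)
u_post[~accepted] = rng.uniform(p[~accepted], 1.0)    # u | rejected ~ U(p, 1)
counterfactual = cand[u_post <= lam_cf(cand) / lam_max]
print(len(observed), "factual events,", len(counterfactual), "counterfactual")
```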
nips_2022_tq_J_MqB3UB | Diverse Weight Averaging for Out-of-Distribution Generalization | Standard neural networks struggle to generalize under distribution shifts in computer vision. Fortunately, combining multiple networks can consistently improve out-of-distribution generalization. In particular, weight averaging (WA) strategies were shown to perform best on the competitive DomainBed benchmark; they directly average the weights of multiple networks despite their nonlinearities. In this paper, we propose Diverse Weight Averaging (DiWA), a new WA strategy whose main motivation is to increase the functional diversity across averaged models. To this end, DiWA averages weights obtained from several independent training runs: indeed, models obtained from different runs are more diverse than those collected along a single run thanks to differences in hyperparameters and training procedures. We motivate the need for diversity by a new bias-variance-covariance-locality decomposition of the expected error, exploiting similarities between WA and standard functional ensembling. Moreover, this decomposition highlights that WA succeeds when the variance term dominates, which we show occurs when the marginal distribution changes at test time. Experimentally, DiWA consistently improves the state of the art on DomainBed without inference overhead. | Accept | The reviewers unanimously recommend accepting the paper - congratulations!
My only concern is in the related work: The submission mentions
> The recent “Model soups” by Wortsman et al. [28] developed a WA algorithm similar to Algorithm 1. However the task, the theoretical analysis and most importantly the goals of these two works are different.
This is not an accurate characterization because Wortsman et al. [28] were also interested in out-of-distribution generalization - their paper mentions "robustness" and "distribution shift" several times and contains results on multiple OOD test sets. The results in this submission and in Wortsman et al. [28] reinforce each other since the two papers evaluate on different OOD benchmarks and find that weight averaging helps in both. I encourage the authors to clarify this in their related work section so that the reader can correctly put the results in context. | train | [
"rhyFsijBweb",
"o-rWOh3qZsZ",
"XuKEruHw3jQ",
"MpyqO1Ao11",
"4wVgPD93ovf",
"zbcz5WUlxm",
"Gj2nUJ4AQG6L",
"as9B6_O62Ir",
"3NjOsV_5lAgk",
"EZlXO2N-mjB",
"0YhzpXQ1-eu",
"elf1erCJeni",
"C3TzFNAgT1",
"IlOzSzvzMU",
"TJIvTfNGLdV"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After reading the rebuttal and the revised version of the paper, I think my concerns are all addressed properly (especially in the revised abstract and Appendix F). I will raise my score to reflect this. Thank the authors very much for the additional empirical results and the clear explanations for my questions.\... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
4
] | [
"as9B6_O62Ir",
"XuKEruHw3jQ",
"zbcz5WUlxm",
"as9B6_O62Ir",
"zbcz5WUlxm",
"TJIvTfNGLdV",
"IlOzSzvzMU",
"3NjOsV_5lAgk",
"C3TzFNAgT1",
"elf1erCJeni",
"nips_2022_tq_J_MqB3UB",
"nips_2022_tq_J_MqB3UB",
"nips_2022_tq_J_MqB3UB",
"nips_2022_tq_J_MqB3UB",
"nips_2022_tq_J_MqB3UB"
] |
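The averaging step at the heart of DiWA (record above) is a uniform average of parameters from independently fine-tuned runs. A minimal sketch follows; `train_run` is a hypothetical placeholder for a fine-tuning routine that starts from a shared pre-trained initialization and returns a state dict.

```python
# Minimal sketch of diverse weight averaging: average the weights of M models
# fine-tuned independently (different hyperparameters / seeds) from a shared
# pre-trained initialization, as described in the DiWA record above.
import copy
import torch

def diverse_weight_average(state_dicts):
    """Uniformly average a list of compatible model state dicts.
    Integer buffers (e.g., BatchNorm step counters) may need special-casing;
    they are averaged as floats here for simplicity."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
    return avg

# Usage sketch: `train_run` is a hypothetical fine-tuning function; the varied
# hyperparameters across runs are what induce the functional diversity.
# runs = [train_run(seed=s, lr=lr) for s, lr in [(0, 1e-4), (1, 3e-4), (2, 1e-3)]]
# model.load_state_dict(diverse_weight_average(runs))
```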
nips_2022_RdJY39KRUCX | Vector Quantized Diffusion Model with CodeUnet for Text-to-Sign Pose Sequences Generation | Sign Language Production (SLP) aims to translate spoken languages into sign sequences automatically. The core process of SLP is to transform sign gloss sequences into their corresponding sign pose sequences (G2P). Most existing G2P models usually perform this conditional long-range generation in an autoregressive manner, which inevitably leads to an accumulation of errors. To address this issue, we propose a vector quantized diffusion method for conditional pose sequence generation, called PoseVQ-Diffusion, which is an iterative non-autoregressive method. Specifically, we first introduce a vector quantized variational autoencoder (Pose-VQVAE) model to represent a pose sequence as a sequence of latent codes. Then we model the latent discrete space by an extension of the recently developed diffusion architecture. To better leverage the spatial-temporal information, we introduce a novel architecture, namely CodeUnet, to generate higher-quality pose sequences in the discrete space. Moreover, taking advantage of the learned codes, we develop a novel sequential k-nearest-neighbours method to predict the variable lengths of pose sequences for corresponding gloss sequences. Consequently, compared with the autoregressive G2P models, our model has a faster sampling speed and produces significantly better results.
Compared with previous non-autoregressive G2P methods, PoseVQ-Diffusion improves the predicted results with iterative refinements, thus achieving state-of-the-art results on the SLP evaluation benchmark. | Reject | The paper addresses the sign language production (SLP) problem. A vector quantized conditional diffusion model is proposed for pose generation. The proposed method achieves state-of-the-art results on the SLP evaluation benchmark (PHOENIX dataset).
Reviewers all agree that the key contribution is very interesting: making VQ-diffusion work on SLP. However, they also agree that, given VQ-diffusion has already been shown (in past work) to perform quite well on text-to-image generation, the technical novelty here is too limited for NeurIPS.
Several reviewers also felt the experimental section is a bit weak, mostly because PHOENIX is the only available benchmark for SLP.
Most other concerns were addressed during the rebuttal phase. Following the reviewers, the rejection is based on the limited technical novelty for NeurIPS. | train | [
"uXyd7irQYAP",
"Zenq5OiGyl7",
"hLCBU9oScYD",
"Pd8lbK-uN9V",
"nMEFrIAnqpw",
"xGtQyqnHSA8",
"KRgGcKExyZ",
"JmwYq49RVt4",
"naqm7MCeKED",
"cDyM7jT5tCK"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer vAR2\n\nMany thanks for your constructive comments. You are welcome to provide us feedback if any before the open discussion phase ends. We are glad to answer any follow-up questions. \n\nMany thanks,\n\nAuthors",
" Dear reviewer Ljyz\n\nMany thanks for your constructive comments. You are welcome ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4
] | [
"nMEFrIAnqpw",
"xGtQyqnHSA8",
"Pd8lbK-uN9V",
"KRgGcKExyZ",
"JmwYq49RVt4",
"naqm7MCeKED",
"cDyM7jT5tCK",
"nips_2022_RdJY39KRUCX",
"nips_2022_RdJY39KRUCX",
"nips_2022_RdJY39KRUCX"
] |
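The Pose-VQVAE in the record above rests on a standard vector-quantization step: each continuous latent is replaced by its nearest codebook entry, yielding the discrete token sequence that the diffusion model operates on. A minimal numpy sketch follows; dimensions are illustrative, and the encoder, decoder, and discrete diffusion model are omitted.

```python
# Minimal numpy sketch of the vector-quantization step at the core of a
# VQ-VAE, as used by the Pose-VQVAE above. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
K, D = 512, 64                      # codebook size, latent dimension
codebook = rng.normal(size=(K, D))
z_e = rng.normal(size=(16, D))      # encoder outputs for 16 pose frames

# squared distances between each latent and each code: shape (16, K)
d2 = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = d2.argmin(axis=1)           # discrete token sequence fed to diffusion
z_q = codebook[codes]               # quantized latents fed to the decoder
```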
nips_2022_x7S1NsUdKZ | Adaptive Sampling for Discovery | In this paper, we study a sequential decision-making problem, called Adaptive Sampling for Discovery (ASD). Starting with a large unlabeled dataset, algorithms for ASD adaptively label the points with the goal to maximize the sum of responses.
This problem has wide applications to real-world discovery problems, for example drug discovery with the help of machine learning models. ASD algorithms face the well-known exploration-exploitation dilemma. The algorithm needs to choose points that yield information to improve model estimates, but it also needs to exploit the model. We rigorously formulate the problem and propose a general information-directed sampling (IDS) algorithm. We provide theoretical guarantees for the performance of IDS in linear, graph and low-rank models. The benefits of IDS are shown in both simulation experiments and real-data experiments for discovering chemical reaction conditions. | Accept | This was a borderline paper.
However, the reviewers like it, and it seems that the authors answered all the concerns of Reviewer LiPB and myself.
Please add the comparison of IDS and ENS to the final version. | train | [
"ZbWVdq_1wEJ",
"o03RaB7EO_f",
"LgNVxZPwctvL",
"bo8IJxS1H1c",
"TapVZDwUui",
"PpUCGMnP-Z",
"135ZTfnaikU",
"Lc95sE0Q8Hx",
"xRmuwcIaF9z"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Okay, thank you for clarifying.",
" Thank you for the responses, especially the detailed discussion of the differences between IDS and ENS.",
" > I am a bit fuzzy on the real-world application which could be better explained in the supplementary material if not the main text. Per my understanding, the dataset... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"bo8IJxS1H1c",
"PpUCGMnP-Z",
"xRmuwcIaF9z",
"Lc95sE0Q8Hx",
"PpUCGMnP-Z",
"135ZTfnaikU",
"nips_2022_x7S1NsUdKZ",
"nips_2022_x7S1NsUdKZ",
"nips_2022_x7S1NsUdKZ"
] |
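The paper's IDS algorithm is more involved than fits a short sketch, so here is a Thompson-sampling stand-in for the same adaptive labeling loop (Bayesian linear model, pool of unlabeled points, cumulative sum of responses as the objective). All constants are illustrative; this is orientation only, not the paper's method.

```python
# Illustrative numpy sketch of the ASD interaction loop above, using
# Thompson sampling on a Bayesian linear model as a stand-in; the paper's
# actual algorithm is information-directed sampling (IDS), not shown here.
import numpy as np

rng = np.random.default_rng(0)
d, n_pool, T, noise = 5, 1000, 100, 0.1
theta_true = rng.normal(size=d)
X = rng.normal(size=(n_pool, d))          # the unlabeled pool
labeled = np.zeros(n_pool, dtype=bool)

A = np.eye(d)          # posterior precision (prior N(0, I))
b = np.zeros(d)
total_reward = 0.0
for _ in range(T):
    cov = np.linalg.inv(A)
    theta_s = rng.multivariate_normal(cov @ b, cov)  # posterior draw
    scores = X @ theta_s
    scores[labeled] = -np.inf            # each point can be labeled once
    i = int(np.argmax(scores))
    y = X[i] @ theta_true + noise * rng.normal()  # acquire the label
    labeled[i] = True
    total_reward += y
    A += np.outer(X[i], X[i]) / noise**2          # Bayesian linear update
    b += y * X[i] / noise**2
print(f"sum of responses after {T} queries: {total_reward:.2f}")
```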
nips_2022_VRvMQq3d1l0 | Learned Index with Dynamic $\epsilon$ | The index structure is a fundamental component of databases and facilitates broad data retrieval applications. Recent learned index methods show superior performance by learning hidden yet useful data distributions with the help of machine learning, and provide a guarantee that the prediction error is no more than a pre-defined $\epsilon$. However, existing learned index methods adopt a fixed $\epsilon$ for all the learned segments, neglecting the diverse characteristics of different data localities. In this paper, we propose a mathematically-grounded learned index framework with dynamic $\epsilon$, which is efficient and pluggable into existing learned index methods. We theoretically analyze prediction error bounds that link $\epsilon$ with data characteristics for an illustrative learned index method. Under the guidance of the derived bounds, we learn how to vary $\epsilon$ and improve the index performance with a better space-time trade-off. Experiments with real-world datasets and several state-of-the-art methods demonstrate the efficiency, effectiveness and usability of the proposed framework. | Reject | All of the reviewers recommended acceptance, but the support was lukewarm, with the maximum score being “Weak Accept”. There were concerns about the limited applicability of the proposed method and a lack of clarity in some of the arguments in the paper. Although the reviewers appreciated the mathematical foundations and experimental results, the negatives outweighed the positives. | train | [
"fHHAUXcHsug",
"ENXJl4kk7Dw",
"0chpIz0_H3J",
"SmJQu6AP0r3",
"IQT0HZYIM0",
"LLJumCg4wDC",
"lc71qQESD4Z",
"Ato_frqt9j",
"Ee3B8MNsNB8",
"zP7icpSaNow",
"bxKCF8CdckG",
"9P81MOP0r5A",
"IEr3YLevKrL",
"rwDrLdCafUX",
"SBgDRYMO8Mh",
"HDnOapxtKjC"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the prompt responses. And I should apologize for my late comments before.\n\n1. For the motivation part, the worst case guarantee is useful. I focused mainly on the efficiency-oriented learned index rather than the analysis part. Thanks for noticing this point.\n2. For the index size problem, hitting t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"0chpIz0_H3J",
"rwDrLdCafUX",
"rwDrLdCafUX",
"rwDrLdCafUX",
"rwDrLdCafUX",
"rwDrLdCafUX",
"Ato_frqt9j",
"Ee3B8MNsNB8",
"HDnOapxtKjC",
"SBgDRYMO8Mh",
"SBgDRYMO8Mh",
"rwDrLdCafUX",
"rwDrLdCafUX",
"nips_2022_VRvMQq3d1l0",
"nips_2022_VRvMQq3d1l0",
"nips_2022_VRvMQq3d1l0"
] |
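The $\epsilon$-bounded lookup that underlies the learned-index record above is worth seeing concretely: a model predicts a key's position with error at most $\epsilon$, so only a $2\epsilon$ window needs to be searched. The sketch below uses a single ordinary-least-squares segment for illustration; the paper's point is that $\epsilon$ itself can be chosen per segment (dynamically) instead of being fixed globally.

```python
# Minimal sketch of an epsilon-bounded learned-index lookup: a linear model
# predicts a key's position; the true position is then found by searching
# only a 2*eps window. The single OLS segment here is illustrative.
import bisect
import numpy as np

rng = np.random.default_rng(0)
keys = np.sort(rng.uniform(0, 1_000_000, size=100_000))

# fit position ~ a * key + c and measure the worst-case prediction error
pos = np.arange(len(keys))
a, c = np.polyfit(keys, pos, deg=1)
eps = int(np.ceil(np.max(np.abs(a * keys + c - pos))))

def lookup(q: float) -> int:
    guess = int(a * q + c)
    lo = max(0, guess - eps)
    hi = min(len(keys), guess + eps + 1)
    return lo + bisect.bisect_left(keys[lo:hi], q)  # search only the window

assert lookup(keys[12345]) == 12345
print(f"worst-case error eps = {eps} over {len(keys)} keys")
```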
nips_2022_PtbGae6Eauy | Smoothed Online Convex Optimization Based on Discounted-Normal-Predictor | In this paper, we investigate an online prediction strategy named Discounted-Normal-Predictor [Kapralov and Panigrahy, 2010] for smoothed online convex optimization (SOCO), in which the learner needs to minimize not only the hitting cost but also the switching cost. In the setting of learning with expert advice, Daniely and Mansour [2019] demonstrate that Discounted-Normal-Predictor can be utilized to yield nearly optimal regret bounds over any interval, even in the presence of switching costs. Inspired by their results, we develop a simple algorithm for SOCO: Combining online gradient descent (OGD) with different step sizes sequentially by Discounted-Normal-Predictor. Despite its simplicity, we prove that it is able to minimize the adaptive regret with switching cost, i.e., attaining nearly optimal regret with switching cost on every interval. By exploiting the theoretical guarantee of OGD for dynamic regret, we further show that the proposed algorithm can minimize the dynamic regret with switching cost in every interval. | Accept | All the reviewers were happy with this paper. There were some comments about experiments and additional results (e.g. a lower bound), but the reviewers generally thought the work in the paper is solid enough to merit acceptance. I encourage the authors to incorporate the discussions that clarify various points in the final manuscript. It would also be nice to have some (at least toy) experiments to corroborate the theory. | train | [
"dPsLULTAQaQ9",
"uhVfbg70W5U",
"WZtfSS7CxjZ",
"Sjs34OYlRg",
"bV1UCYWqN1U",
"zsNhPn4YY2i",
"CFx8m28YsCc",
"QsfBA6LavQk",
"AgQgkZ_6mWP",
"ojoyrC9PYb7"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer m3LC,\n\n\nThanks for your kind reply! All the reviews are very helpful, and we will improve our paper accordingly.\n\n\nBest\n\nAuthors\n",
" I'd like to thank the authors for their thorough replies (not only to me but to the other reviewers, it does help with understanding the paper in spite of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"uhVfbg70W5U",
"WZtfSS7CxjZ",
"ojoyrC9PYb7",
"AgQgkZ_6mWP",
"QsfBA6LavQk",
"CFx8m28YsCc",
"nips_2022_PtbGae6Eauy",
"nips_2022_PtbGae6Eauy",
"nips_2022_PtbGae6Eauy",
"nips_2022_PtbGae6Eauy"
] |
nips_2022_ju38DG3sbg6 | Learning Expressive Meta-Representations with Mixture of Expert Neural Processes | Neural processes (NPs) formulate exchangeable stochastic processes and are promising models for meta learning that do not require gradient updates during the testing phase.
However, most NP variants place a strong emphasis on a global latent variable.
This weakens the approximation power and restricts the scope of applications using NP variants, especially when data generative processes are complicated.
To resolve these issues, we propose to combine Mixture-of-Experts models with Neural Processes to develop more expressive exchangeable stochastic processes, referred to as Mixture of Expert Neural Processes (MoE-NPs).
Then we apply MoE-NPs to both few-shot supervised learning and meta reinforcement learning tasks.
Empirical results demonstrate MoE-NPs' strong generalization capability to unseen tasks in these benchmarks. | Accept | The paper introduces a mixture-of-experts prior into neural process (NP) models as a way of improving the expressibility of the prior and posterior. The paper is well motivated and well written, and the method is reasonable and sound. The experimental results are also comprehensive and convincing. The authors addressed the reviewers' questions well in the rebuttal. All reviewers agreed on accepting this paper. | train | [
"oLr8BTT2dh",
"GFqbYWxKDLp",
"4fMP0DJ41xT",
"7jg31XCgSP",
"4r_zJiymrcg",
"NLt-ugtqag1",
"kAkBSDNHILA",
"HENRZLOXCuY",
"VgEDV5Uf19b"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear All Reviewers and Area Chairs,\n\nThank you all for your comments and constructive suggestions for our manuscript. Your engagement in the review/rebuttal period helps improve our manuscript a lot. Finally, we express our gratitude for your efforts in helpful reviews and discussions. \n",
" I have read the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"nips_2022_ju38DG3sbg6",
"NLt-ugtqag1",
"7jg31XCgSP",
"VgEDV5Uf19b",
"HENRZLOXCuY",
"kAkBSDNHILA",
"nips_2022_ju38DG3sbg6",
"nips_2022_ju38DG3sbg6",
"nips_2022_ju38DG3sbg6"
] |
nips_2022_kFRCvpubDJo | Causal Discovery in Heterogeneous Environments Under the Sparse Mechanism Shift Hypothesis | Machine learning approaches commonly rely on the assumption of independent and identically distributed (i.i.d.) data. In reality, however, this assumption is almost always violated due to distribution shifts between environments. Although valuable learning signals can be provided by heterogeneous data from changing distributions, it is also known that learning under arbitrary (adversarial) changes is impossible. Causality provides a useful framework for modeling distribution shifts, since causal models encode both observational and interventional distributions. In this work, we explore the sparse mechanism shift hypothesis, which posits that distribution shifts occur due to a small number of changing causal conditionals. Motivated by this idea, we apply it to learning causal structure from heterogeneous environments, where i.i.d. data only allows for learning an equivalence class of graphs without restrictive assumptions. We propose the Mechanism Shift Score (MSS), a score-based approach amenable to various empirical estimators, which provably identifies the entire causal structure with high probability if the sparse mechanism shift hypothesis holds. Empirically, we verify behavior predicted by the theory and compare multiple estimators and score functions to identify the best approaches in practice. Compared to other methods, we show how MSS bridges a gap by both being nonparametric as well as explicitly leveraging sparse changes. | Accept | The decision is to accept the paper.
The paper proposes a method for causal structure discovery that leverages an assumption about sparse mechanism shifts across multiple environments. The authors show that access to multi-environment data that satisfy this assumption can provide identification beyond standard equivalence classes. Based on reviewer / author discussions, the method seems novel, and is now well-contextualized within other literature in this area. The authors have thoroughly investigated near-alternatives, and while there are not direct comparisons in the paper, the authors cite good reasons for why this is the case. The paper is a solid contribution. | train | [
"igaoCOhUi6d",
"v801nxzZrW",
"0qk78dELt9K",
"2m9a-vZsboy",
"A-bsuvtnrco",
"-vMGl2djJn",
"9MfX4gQLyU9",
"Y0toTd7y9qQ",
"EUsIUTcRRZ",
"ZoA83-3CNHG",
"kG3QFwCJO47",
"2xLGUcXNfJI",
"IIP81HpTiu",
"ytThIzv3T1O"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. We are glad to hear that \"the result in the paper is now stronger\" and that you have increased your score accordingly.\n\nRegarding the first point from your last comment: you wrote that \n\n> in [1], there exist some identifiability results in Section 4.3. Although it is just for le... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"v801nxzZrW",
"2m9a-vZsboy",
"A-bsuvtnrco",
"EUsIUTcRRZ",
"9MfX4gQLyU9",
"Y0toTd7y9qQ",
"ytThIzv3T1O",
"IIP81HpTiu",
"ZoA83-3CNHG",
"2xLGUcXNfJI",
"nips_2022_kFRCvpubDJo",
"nips_2022_kFRCvpubDJo",
"nips_2022_kFRCvpubDJo",
"nips_2022_kFRCvpubDJo"
] |
nips_2022_5wI7gNopMHW | Neural Stochastic Control | Control problems are always challenging since they arise from real-world systems where stochasticity and randomness are ubiquitous. This naturally and urgently calls for developing efficient neural control policies for stabilizing not only the deterministic equations but the stochastic systems as well. Here, in order to meet this paramount call, we propose two types of controllers, viz., the exponential stabilizer (ES) based on the stochastic Lyapunov theory and the asymptotic stabilizer (AS) based on the stochastic asymptotic stability theory. The ES can render the controlled systems exponentially convergent but it requires a long computational time; conversely, the AS makes the training much faster but it can only assure the asymptotic (not the exponential) attractiveness of the control targets. These two stochastic controllers are thus complementary in applications. We also investigate rigorously the linear control in both convergence time and energy cost and numerically compare it with the proposed controllers in these terms. More significantly, we use several representative physical systems to illustrate the usefulness of the proposed controllers in stabilization of dynamical systems. | Accept | The paper proposes two new frameworks for stochastic neural control. The methods are supported by both theoretical guarantees and experimental results.
The extensive replies and additional material/experiments managed to address most concerns of the reviewers (I am also satisfied by the replies to 7JKN). As the authors pointed out, ideally some of the new material should be incorporated in the main paper rather than in the appendix.
Feedback linearization (a1wg) is indeed a powerful technique, but as the authors point out cannot be applied (completely) in all situations. Having an additional method available is valuable either way. | train | [
"tgdrxS16CbF",
"f1xF4XPzMND",
"7MxpdoXQh_",
"mBl7f5eSRmPB",
"2mBEvjvy6qPY",
"e6yTrjSbG6",
"e2vn9trvpNS",
"CvCeyoHnr3S",
"K60AyAegwnR",
"AW8n-XqgXJT",
"LEvXGZvgKlK",
"UvQhPjhkhVs",
"41GDe2pYS_r",
"ALfroQiUWB",
"UtzjKvCJJ7R",
"iq68fX1jewc"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your revisions and clarifications. I will take these into account when discussing the paper with the other reviewers.",
" Thanks for your support and constructive comments. \n\nWe put some revised parts in appendix due to the page limit, and we can put them in the main text if the paper is accepte... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"e2vn9trvpNS",
"7MxpdoXQh_",
"K60AyAegwnR",
"2mBEvjvy6qPY",
"e6yTrjSbG6",
"iq68fX1jewc",
"AW8n-XqgXJT",
"K60AyAegwnR",
"UtzjKvCJJ7R",
"ALfroQiUWB",
"41GDe2pYS_r",
"nips_2022_5wI7gNopMHW",
"nips_2022_5wI7gNopMHW",
"nips_2022_5wI7gNopMHW",
"nips_2022_5wI7gNopMHW",
"nips_2022_5wI7gNopMHW"... |
nips_2022_UvQgwhYi7QM | Beyond L1: Faster and Better Sparse Models with skglm | We propose a new fast algorithm to estimate any sparse generalized linear model with convex or non-convex separable penalties. Our algorithm is able to solve problems with millions of samples and features in seconds, by relying on coordinate descent, working sets and Anderson acceleration. It handles previously unaddressed models, and is extensively shown to improve on state-of-the-art algorithms. We provide a flexible, scikit-learn compatible package, which easily handles customized datafits and penalties. | Accept | The paper proposes a fast coordinate descent algorithm for sparse linear models with $\alpha$-semi-convex penalties. The algorithm includes two key steps: to introduce a score to rank the variables for obtaining a working set, and to use Anderson acceleration in the inner loop. The theoretical analysis and the numerical experiments show the effectiveness of the proposed algorithm. The reviewers raised several concerns, which should be addressed in the final version. | train | [
"L8Ti-1VB3Ll",
"A3AQHkAszt3",
"tcopTrUL42",
"mtqC19WRrnW",
"kk0Aey5b0Yg",
"EN_ZN83u3tX",
"63Ir25EljI-",
"VvzaAcbf2Fj",
"PcGy10Mq_Pc",
"QILO_2fYNmC",
"B4vdDshVC09"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThank you for the response for clarifications. The response makes sense to me. I have increased the score to 7. \n\nBest, \nReviewer",
" Dear reviewer,\nWe hope that you had the opportunity to check the revised version we submitted, where we have improved the presentation thanks to your thought... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2
] | [
"tcopTrUL42",
"63Ir25EljI-",
"EN_ZN83u3tX",
"kk0Aey5b0Yg",
"B4vdDshVC09",
"QILO_2fYNmC",
"PcGy10Mq_Pc",
"nips_2022_UvQgwhYi7QM",
"nips_2022_UvQgwhYi7QM",
"nips_2022_UvQgwhYi7QM",
"nips_2022_UvQgwhYi7QM"
] |
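The coordinate-descent inner loop that skglm (record above) builds on is compact enough to show in full for the Lasso (least-squares datafit plus L1 penalty). The sketch below deliberately omits the working-set selection and Anderson acceleration that make skglm fast; sizes and the regularization strength are illustrative.

```python
# Minimal numpy sketch of cyclic coordinate descent for the Lasso objective
# (1/(2n)) * ||y - Xw||^2 + lam * ||w||_1, the kind of inner loop skglm
# accelerates. Working sets and Anderson acceleration are omitted.
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 200, 50, 0.1
w_star = rng.normal(size=p) * (rng.random(p) < 0.2)   # sparse ground truth
X = rng.normal(size=(n, p))
y = X @ w_star + 0.1 * rng.normal(size=n)

w = np.zeros(p)
r = y - X @ w                      # residual, kept up to date
col_sq = (X ** 2).sum(axis=0)      # per-coordinate curvature constants

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

for epoch in range(100):
    for j in range(p):
        old = w[j]
        z = old + X[:, j] @ r / col_sq[j]              # coordinate update
        w[j] = soft_threshold(z, n * lam / col_sq[j])
        if w[j] != old:
            r += X[:, j] * (old - w[j])                # cheap residual update

print("nonzeros recovered:", int((w != 0).sum()), "/", int((w_star != 0).sum()))
```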
nips_2022_DzPWTwfby5d | Promising or Elusive? Unsupervised Object Segmentation from Real-world Single Images | In this paper, we study the problem of unsupervised object segmentation from single images. We do not introduce a new algorithm, but systematically investigate the effectiveness of existing unsupervised models on challenging real-world images. We firstly introduce four complexity factors to quantitatively measure the distributions of object- and scene-level biases in appearance and geometry for datasets with human annotations. With the aid of these factors, we empirically find that, not surprisingly, existing unsupervised models catastrophically fail to segment generic objects in real-world images, although they can easily achieve excellent performance on numerous simple synthetic datasets, due to the vast gap in objectness biases between synthetic and real images. By conducting extensive experiments on multiple groups of ablated real-world datasets, we ultimately find that the key factors underlying the colossal failure of existing unsupervised models on real-world images are the challenging distributions of object- and scene-level biases in appearance and geometry. Because of this, the inductive biases introduced in existing unsupervised models can hardly capture the diverse object distributions. Our research results suggest that future work should exploit more explicit objectness biases in the network design. | Accept | This paper conducts a systematic evaluation of existing unsupervised object segmentation methods on real-world images. Using ablated real-world datasets, the authors identify factors causing the failures of existing methods on real images.
All three reviewers find the study valuable and creative. This type of study should be encouraged.
To address Reviewer 5Tue's concerns, the authors added an additional section (Section 4.5) to analyze the sensitivity of different models to different dataset factors. They also added comparison results. Reviewer Zitt felt that the clarity of the paper has been clearly improved. This reviewer was convinced that this study is valuable for the field. Reviewer 5Tue felt that the changes to the figures and tables improve the readability and that the additional experiments are convincing. This reviewer did not find any unaddressed major concerns. | train | [
"9J0c6F6h7v",
"KOzMx6JBNVO",
"_PlRHnE8JeK",
"RR3apJHVxd",
"qf_1O2lTYuhg",
"pDH235LfTvT",
"RSUoGy55RTV",
"0UJte4Jfzb",
"gmvleETDTJc",
"bZ6WMyf2NLd",
"YrwBrpkasnO",
"NaB5imi9p3e"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed response and the revision.\n\nIn my view the clarity of the paper has been clearly improved. The new Figure 6 and the model-wise discussion make it easier to get a general impression of the influence of different factors on the model performances. Overall I think there is still room fo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"0UJte4Jfzb",
"_PlRHnE8JeK",
"qf_1O2lTYuhg",
"nips_2022_DzPWTwfby5d",
"pDH235LfTvT",
"RSUoGy55RTV",
"NaB5imi9p3e",
"YrwBrpkasnO",
"bZ6WMyf2NLd",
"nips_2022_DzPWTwfby5d",
"nips_2022_DzPWTwfby5d",
"nips_2022_DzPWTwfby5d"
] |
nips_2022_R5pVDJ4FNoc | Mining Multi-Label Samples from Single Positive Labels | Conditional generative adversarial networks (cGANs) have shown superior results in class-conditional generation tasks. To simultaneously control multiple conditions, cGANs require multi-label training datasets, where multiple labels can be assigned to each data instance. Nevertheless, the tremendous annotation cost limits the accessibility of multi-label datasets in real-world scenarios. Therefore, in this study we explore the practical setting called the single positive setting, where each data instance is annotated by only one positive label with no explicit negative labels. To generate multi-label data in the single positive setting, we propose a novel sampling approach called single-to-multi-label (S2M) sampling, based on the Markov chain Monte Carlo method. As a widely applicable “add-on” method, our proposed S2M sampling method enables existing unconditional and conditional GANs to draw high-quality multi-label data with a minimal annotation cost. Extensive experiments on real image datasets verify the effectiveness and correctness of our method, even when compared to a model trained with fully annotated datasets. | Accept | This paper addresses the GAN generation with multi-label condition variables problem by a single-to-multi-label generation method. More specifically, given only one positive label for each image, this paper proposes a MCMC sampling method to generate combinations of labels and reduce the cost for data annotation. This is a challenging problem and the authors have made clear assumptions under which the sampling can be successful. The experimental results demonstrate the effectiveness of the proposed method.
Overall, the paper is novel and interesting. I would recommend acceptance of this paper given the novelty of the idea and the technical soundness. However, I would suggest that the authors could narrow the scope of “multi-label” to multiple attributes. It is unclear whether the proposed method can handle the general multi-label problem in which multiple objects could appear in an image. | train | [
"J_jPipkXvxN",
"Xm6I2J0v0Og",
"YspJ7w-tyLFP",
"ccxaNljUGaa",
"2i2-oCzKger",
"bIKWjlOdC-",
"X1rZgkLI0pe",
"1qT_m2dp2S",
"eO-CxyJ-L31c",
"kyhWIbX7I97",
"7OZeqCjSt_h",
"lMkUCIXflPX",
"wNDOSkjosM",
"g1VgaUqcTLY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewers,\n\n\nWe greatly appreciate your efforts in reviewing our manuscript.\n\nWe hope that our responses and discussions have addressed the reviewers’ concerns.\n\nIn case of any further issues, we will do our best to address them as soon as possible.\n\nThank you very much again for your time and effor... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
3
] | [
"nips_2022_R5pVDJ4FNoc",
"eO-CxyJ-L31c",
"1qT_m2dp2S",
"bIKWjlOdC-",
"g1VgaUqcTLY",
"g1VgaUqcTLY",
"wNDOSkjosM",
"lMkUCIXflPX",
"7OZeqCjSt_h",
"nips_2022_R5pVDJ4FNoc",
"nips_2022_R5pVDJ4FNoc",
"nips_2022_R5pVDJ4FNoc",
"nips_2022_R5pVDJ4FNoc",
"nips_2022_R5pVDJ4FNoc"
] |
nips_2022_SLA4t66xln9 | Unsupervised Domain Adaptation for Semantic Segmentation using Depth Distribution | Recent years have witnessed significant advancements in the field of unsupervised domain adaptation for semantic segmentation. Depth information has proved effective in building a bridge between synthetic datasets and real-world datasets. However, the existing methods may not pay enough attention to depth distribution in different categories, which makes it possible to use them for further improvement. Besides the existing methods that only use depth regression as an auxiliary task, we propose to use depth distribution density to support semantic segmentation. Therefore, considering the relationship among depth distribution density, depth and semantic segmentation, we also put forward a branch balance loss for these three subtasks in multi-task learning schemes. In addition, we also propose a spatial aggregation prior of pixels in different categories, which is used to refine the pseudo-labels for self-training, thus further improving the performance of the prediction model. Experiments on SYNTHIA-to-Cityscapes and SYNTHIA-to-Mapillary benchmarks show the effectiveness of our proposed method. | Accept | The paper proposes a depth-aware segmentation framework that leverages unsupervised domain adaptation by segmenting, regressing the depth, and estimating the depth density distribution. The reviewers' doubts about the paper were addressed during the rebuttal, and the reviewers agree that the authors answered most of their concerns, and that the remaining problems are not an impediment to publication. While the paper doesn't achieve SotA, the new ideas and the shown experiments are enough to validate the proposal, and its ideas will be of interest to the community.
I recommend the paper for publication given the contributions on the multi-task setup and the mixture of domains, and that the experiments validate the proposal. | train | [
"icoIA1XvUx",
"7-hWmSW-VaD",
"qgziCmIrjfc",
"0ifZqYz2LVn",
"m11_UB428aq",
"q4gc0f22XUU",
"d1eCeinpX5h",
"_oaZTRUtFP",
"tCcdE7Fnck",
"U8dAk1_mNsy",
"b3OcmUCICUT",
"180p7kTmECn"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your careful review. We will put the grayscale image of density map and the above analysis in the final paper!",
" Thanks the authors for the prompt answers that clear my concerns.",
" I appreciate the author's feedback, which answers many of my questions and concerns. Overall, I'm still leaning... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"7-hWmSW-VaD",
"0ifZqYz2LVn",
"_oaZTRUtFP",
"m11_UB428aq",
"q4gc0f22XUU",
"180p7kTmECn",
"tCcdE7Fnck",
"U8dAk1_mNsy",
"nips_2022_SLA4t66xln9",
"nips_2022_SLA4t66xln9",
"nips_2022_SLA4t66xln9",
"nips_2022_SLA4t66xln9"
] |
nips_2022_rjDziEPQLQs | A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate | In this paper, we present the first stepsize schedule for the Newton method resulting in fast global and local convergence guarantees. In particular, we a) prove an $\mathcal O \left( 1/{k^2} \right)$ global rate, which matches the state-of-the-art global rate of the cubically regularized Newton method of Polyak and Nesterov (2006) and of the regularized Newton method of Mishchenko (2021), and the later variant of Doikov and Nesterov (2021), b) prove a local quadratic rate, which matches the best-known local rate of second-order methods, and c) our stepsize formula is simple, explicit, and does not require solving any subproblem. Our convergence proofs hold under affine-invariant assumptions closely related to the notion of self-concordance. Finally, our method has competitive performance when compared to existing baselines which share the same fast global convergence guarantees. | Accept | There is general agreement that this paper should be accepted.
"VinrrQJ9s_J",
"deHfkS8GoD",
"U_wvZV7dwajl",
"v_Wzs9oNhZ_",
"RR4Odt3y4i2",
"aYl5PAlDpnn",
"Qa-DKV04H7G",
"HzNi6rCHdOE",
"JECfs3K7f78",
"JSYpeNT0il",
"InzxISnwU7",
"sduZmsE_HiT"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciated the authors for their detailed response and clarification on each comments. I remained my point of view. Thanks.",
" I thank the authors for their detailed response and clarification on contributions. I have raised my score to 6. ",
" Reviewer 42nd, \n\nPlease could you let us know whether we m... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"v_Wzs9oNhZ_",
"U_wvZV7dwajl",
"InzxISnwU7",
"JECfs3K7f78",
"nips_2022_rjDziEPQLQs",
"sduZmsE_HiT",
"InzxISnwU7",
"InzxISnwU7",
"JSYpeNT0il",
"nips_2022_rjDziEPQLQs",
"nips_2022_rjDziEPQLQs",
"nips_2022_rjDziEPQLQs"
] |
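The damped Newton iteration analyzed in the record above is easy to sketch on a smooth objective. The paper's contribution is a specific explicit stepsize formula with global $\mathcal O(1/k^2)$ and local quadratic guarantees; that formula is not reproduced in the abstract, so the sketch below substitutes the classical self-concordant damping $1/(1+\lambda_k)$, where $\lambda_k$ is the Newton decrement, purely as a placeholder.

```python
# Illustrative numpy sketch of a damped Newton method on a (ridge-regularized)
# logistic-regression objective. The damping used here is the classical
# self-concordant schedule 1/(1 + Newton decrement), a placeholder: it is
# NOT the paper's stepsize formula.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(float)

def grad_hess(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    g = X.T @ (p - y) / n + 1e-3 * w
    H = (X.T * (p * (1 - p))) @ X / n + 1e-3 * np.eye(d)
    return g, H

w = np.zeros(d)
for k in range(30):
    g, H = grad_hess(w)
    direction = np.linalg.solve(H, g)
    lam = np.sqrt(g @ direction)      # Newton decrement sqrt(g^T H^{-1} g)
    step = 1.0 / (1.0 + lam)          # placeholder damping schedule
    w -= step * direction
print("final gradient norm:", np.linalg.norm(grad_hess(w)[0]))
```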
nips_2022_Cp9sWmkd1H0 | Improving GANs with A Dynamic Discriminator | The discriminator plays a vital role in training generative adversarial networks (GANs) by distinguishing real and synthesized samples. While the real data distribution remains the same, the synthesis distribution keeps varying because of the evolving generator, and thus effects a corresponding change of the bi-classification task assigned to the discriminator. We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task. A comprehensive empirical study confirms that the proposed training strategy, termed DynamicD, improves the synthesis performance without incurring any additional computation cost or training objectives. Two capacity adjusting schemes are developed for training GANs under different data regimes: i) given a sufficient amount of training data, the discriminator benefits from a progressively increased learning capacity, and ii) when the training data is limited, gradually decreasing the layer width mitigates the over-fitting issue of the discriminator. Experiments on both 2D and 3D-aware image synthesis tasks conducted on a range of datasets substantiate the generalizability of our DynamicD as well as its substantial improvement over the baselines. Furthermore, DynamicD is synergistic to other discriminator-improving approaches (including data augmentation, regularizers, and pre-training), and brings continuous performance gain when combined with them for learning GANs. Code will be made publicly available. | Accept | This method uses a dynamic-capacity discriminator to improve GAN training, improving performance, e.g., in limited-data settings. The method interested the reviewers, though there were common concerns about how the work was presented, both in what it is doing precisely as well as how it presents itself relative to prior works. I share this concern as well after reading the reviews and glancing at the paper, yet the rebuttal seems to have satisfied the reviewers, so I will go with their consensus though lower my confidence:
I therefore recommend (with uncertainty) that this paper is accepted to NeurIPS.
Reviewer h4W2 participated the most in the discussion, but unfortunately discussion was overall light. | train | [
"i8bfHBqLn1b",
"aTORb8RqVOo",
"DQxbIFPoHGE",
"X9NMG2N3wLj",
"QvRAZ-axJBI",
"4LInuhW5E5D",
"fScZGMl7yIu",
"EcBuN6TVB7N",
"dJM70Eb35aw",
"p6RRJkXnHZO",
"Mx78q4WuouZ",
"fzL2ePVAXFd"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you again for your review and discussion. Here, we would like to clarify some misunderstandings.\n\n- You are correct that “it takes effort to tune the hyper-parameters to get the best possible model for a new dataset”. However, those hyper-parameters (like gradient penalty and model capacity in StyleGAN2) ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"aTORb8RqVOo",
"QvRAZ-axJBI",
"nips_2022_Cp9sWmkd1H0",
"QvRAZ-axJBI",
"fzL2ePVAXFd",
"Mx78q4WuouZ",
"p6RRJkXnHZO",
"dJM70Eb35aw",
"nips_2022_Cp9sWmkd1H0",
"nips_2022_Cp9sWmkd1H0",
"nips_2022_Cp9sWmkd1H0",
"nips_2022_Cp9sWmkd1H0"
] |
nips_2022_ldRyJb_cjXa | Star Temporal Classification: Sequence Modeling with Partially Labeled Data | We develop an algorithm which can learn from partially labeled and unsegmented sequential data. Most sequential loss functions, such as Connectionist Temporal Classification (CTC), break down when many labels are missing. We address this problem with Star Temporal Classification (STC) which uses a special star token to allow alignments which include all possible tokens whenever a token could be missing. We express STC as the composition of weighted finite-state transducers (WFSTs) and use GTN (a framework for automatic differentiation with WFSTs) to compute gradients. We perform extensive experiments on automatic speech recognition. These experiments show that STC can close the performance gap with the supervised baseline to about 1% WER when up to 70% of the labels are missing. We also perform experiments in handwriting recognition to show that our method easily applies to other temporal classification tasks. | Accept | This paper is right on the border. I'm going to mark this as accept, as the only reviewer marking reject did so due to limited evaluation, but I believe the evaluation is ok (as do the other two reviewers). The idea is interesting and novel, and the paper is well written. | train | [
"TpoLATM0J-H",
"4dX6EbPQsDy",
"EMRrJaD2xaz",
"jpjtbYzEC",
"YoS_EJDm5t3",
"YQVbAVzOjgX",
"m9MsyZDBPYB",
"I14Og9e0BdK",
"k8M7qpXHkay",
"p-be0TxULR"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n \nThank you again for all the valuable comments and suggestions. We answered the questions and provided additional materials as above. We hope these address your concerns and please take them into consideration for the final scores. We are happy to answer additional questions.\n",
" Dear rev... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"jpjtbYzEC",
"YoS_EJDm5t3",
"YQVbAVzOjgX",
"p-be0TxULR",
"k8M7qpXHkay",
"I14Og9e0BdK",
"nips_2022_ldRyJb_cjXa",
"nips_2022_ldRyJb_cjXa",
"nips_2022_ldRyJb_cjXa",
"nips_2022_ldRyJb_cjXa"
] |
nips_2022_Q9dj3MzY1o7 | PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient | Knowledge distillation (KD) is a widely used technique to train compact models in object detection. However, there is still a lack of study on how to distill between heterogeneous detectors. In this paper, we empirically find that better FPN features from a heterogeneous teacher detector can help the student although their detection heads and label assignments are different. However, directly aligning the feature maps to distill detectors suffers from two problems. First, the difference in feature magnitude between the teacher and the student could enforce overly strict constraints on the student. Second, the FPN stages and channels with large feature magnitude from the teacher model could dominate the gradient of the distillation loss, which will overwhelm the effects of other features in KD and introduce much noise. To address the above issues, we propose to imitate features with Pearson Correlation Coefficient to focus on the relational information from the teacher and relax constraints on the magnitude of the features. Our method consistently outperforms the existing detection KD methods and works for both homogeneous and heterogeneous student-teacher pairs. Furthermore, it converges faster. With a powerful MaskRCNN-Swin detector as the teacher, ResNet-50 based RetinaNet and FCOS achieve 41.5% and 43.9% $mAP$ on COCO2017, which are 4.1% and 4.8% higher than the baseline, respectively. | Accept | This paper proposes a novel knowledge distillation method for object detection. As the feature value magnitude of the teacher and student is different, a new loss function for feature imitation is introduced by conducting feature standardization and calculating the MSE loss, which is equivalent to calculating the Pearson Correlation Coefficient between the two features. Experimental results demonstrate the effectiveness of the proposed method. After an in-depth discussion between the authors and reviewers, the concerns have been well addressed. All the reviewers recommend acceptance. Considering that the overall quality is clearly above the bar, the paper should be accepted for publication. The AC strongly urges the authors to consider all the comments in preparing the final version.
"XvQ0BIm94Co",
"ePa7-gGlkly",
"ZpwR0sDa8Yd",
"1_GCneFEWvU",
"b0ol02LmHW8",
"uv9ROh6Z6uZ",
"mqDcoTiBGN",
"F4WgvlrJrTD",
"sbItqGWWgXB",
"h8mviSs_XEW",
"_aL4jP7CWo4",
"CXMlfD3w7Zg",
"Ctu8it1LRYH",
"aaiE6x90aY_",
"Zgb_p6rheF8",
"FF39nVLh2iS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. My concerns are well addressed in this response. Thus, I increase my score to accept this paper.",
" Thank you for the reply and additional experiments. The paper is well-motivated, thus I change the score.",
" Dear Reviewers aChz and 96XY,\n\nWe were wondering if our response and re... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"_aL4jP7CWo4",
"CXMlfD3w7Zg",
"nips_2022_Q9dj3MzY1o7",
"b0ol02LmHW8",
"FF39nVLh2iS",
"mqDcoTiBGN",
"h8mviSs_XEW",
"sbItqGWWgXB",
"Ctu8it1LRYH",
"FF39nVLh2iS",
"Zgb_p6rheF8",
"aaiE6x90aY_",
"nips_2022_Q9dj3MzY1o7",
"nips_2022_Q9dj3MzY1o7",
"nips_2022_Q9dj3MzY1o7",
"nips_2022_Q9dj3MzY1o7... |
nips_2022_K2QGzyLwpYG | Data-Efficient Structured Pruning via Submodular Optimization | Structured pruning is an effective approach for compressing large pre-trained neural networks without significantly affecting their performance. However, most current structured pruning methods do not provide any performance guarantees, and often require fine-tuning, which makes them inapplicable in the limited-data regime. We propose a principled data-efficient structured pruning method based on submodular optimization. In particular, for a given layer, we select neurons/channels to prune and corresponding new weights for the next layer, which minimize the change in the next layer's input induced by pruning. We show that this selection problem is a weakly submodular maximization problem; thus, it can be provably approximated using an efficient greedy algorithm. Our method is guaranteed to have an exponentially decreasing error between the original model and the pruned model outputs w.r.t. the pruned size, under reasonable assumptions. It is also one of the few methods in the literature that use only a limited number of training samples and no labels. Our experimental results demonstrate that our method outperforms state-of-the-art methods in the limited-data regime. | Accept | The paper proposes a data-efficient structured pruning method that, for a given layer, finds neurons/channels to prune with corresponding new weights for the next layer, which minimize the change in the next layer's input induced by pruning. This selection problem is formulated as a weakly submodular maximization problem; thus, it can be provably approximated using the greedy algorithm. The proposed solution is interesting and practical as it requires only a limited number of training samples and no labels. The reviewers found the authors' response convincing; however, the authors are strongly encouraged to incorporate the clarifications provided in the rebuttal into the final version.
"6OVfhkaReUM",
"ZIewf1LoOlY",
"W2PVGeVaKXr",
"fpibE9PdyHX",
"2sypUxpc0P",
"yRKEHthYw_AF",
"ckZp1200otu",
"t5b0oXINWAK",
"aZ3zsyc3zZz",
"pFfjvqArG2k",
"03JakGViR7",
"hNvFdOJVlM",
"hSqN-Vyj3E6",
"hI64rsShb4j"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I agree that the focus of the paper isn't on speed, but it's always nice to get a sense of the time scales involved.",
" I read the author response and find it properly addresses my concerns. I recommend the author to include some important part in to the next version. I raise my score by 1 and recommend an acc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"pFfjvqArG2k",
"t5b0oXINWAK",
"fpibE9PdyHX",
"2sypUxpc0P",
"nips_2022_K2QGzyLwpYG",
"hI64rsShb4j",
"hSqN-Vyj3E6",
"aZ3zsyc3zZz",
"hNvFdOJVlM",
"03JakGViR7",
"nips_2022_K2QGzyLwpYG",
"nips_2022_K2QGzyLwpYG",
"nips_2022_K2QGzyLwpYG",
"nips_2022_K2QGzyLwpYG"
] |
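The pruning method in the record above selects, per layer, the channels to keep together with re-fitted next-layer weights, so as to minimize the change in the next layer's input, and it does so greedily thanks to weak submodularity. Below is a compact, self-contained sketch written from that description alone; the greedy objective, the least-squares re-fit, and all names are my own simplifications, not the authors' code.

```python
import numpy as np

def greedy_prune(A: np.ndarray, W: np.ndarray, k: int):
    """Greedily keep k channels and re-fit the next layer's weights.

    A : (n_samples, d) activations of the layer being pruned.
    W : (d, m) next-layer weights, so Z = A @ W is the next layer's input.
    Each step adds the channel whose least-squares re-fit best preserves Z,
    i.e. minimizes ||A[:, S] @ W_new - A @ W||_F.
    """
    Z = A @ W
    selected, best_W = [], None
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(A.shape[1]):
            if j in selected:
                continue
            S = selected + [j]
            W_fit, *_ = np.linalg.lstsq(A[:, S], Z, rcond=None)
            err = np.linalg.norm(A[:, S] @ W_fit - Z)
            if err < best_err:
                best_j, best_err, best_W = j, err, W_fit
        selected.append(best_j)
    return selected, best_W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(200, 16))          # unlabeled activations suffice
    W = rng.normal(size=(16, 8))
    idx, W_new = greedy_prune(A, W, k=6)
    print(sorted(idx), np.linalg.norm(A[:, idx] @ W_new - A @ W))
```

Note that only activations on a handful of unlabeled samples are needed, which matches the paper's limited-data, label-free selling point.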
nips_2022_YODI3TcLX | Blessing of Depth in Linear Regression: Deeper Models Have Flatter Landscape Around the True Solution | This work characterizes the effect of depth on the optimization landscape of linear regression, showing that, despite their nonconvexity, deeper models have a more desirable optimization landscape. We consider a robust and over-parameterized setting, where a subset of measurements are grossly corrupted with noise, and the true linear model is captured via an $N$-layer diagonal linear neural network. On the negative side, we show that this problem does not have a benign landscape: given any $N\geq 1$, with constant probability, there exists a solution corresponding to the ground truth that is neither a local nor a global minimum. However, on the positive side, we prove that, for any $N$-layer model with $N\geq 2$, a simple sub-gradient method becomes oblivious to such “problematic” solutions; instead, it converges to a balanced solution that is not only close to the ground truth but also enjoys a flat local landscape, thereby eschewing the need for “early stopping”. Lastly, we empirically verify that the desirable optimization landscape of deeper models extends to other robust learning tasks, including deep matrix recovery and deep ReLU networks with $\ell_1$-loss. | Accept | This paper studies a linear network for a regression problem. The main objective is to provide more understanding of the optimization landscape and to characterize the effect of the structure (depth) of the neural network in the over-parameterized setting. The paper is balanced: it presents encouraging results while also addressing weaknesses of the optimization landscape. The paper is concluded with interesting numerical experiments that support the claims in the paper. Let me also highlight that the theoretical analysis includes many novel parts (in the appendix).
| train | [
"82AdDK1wCvZ",
"ArMlyMDC9y7",
"pFWbK_RL9p",
"sY73qZ_P1DX",
"BgrYxDXnHg",
"-s9NMq_OQLC",
"pF7GAbAYcYoJ",
"RVqYcbLfJh2",
"mllJ0vhE9aO",
"z2aej8XC1tyO",
"T7S2evoKn0H",
"OjyF9Sxa2OB",
"qol0TUVnHW",
"9eKOmK3htW_",
"5F2xrc8FjAg",
"otI8DLHeM7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the detailed response, which has addressed my concerns well. ",
" The rebuttal answers most of my questions and I decide to change my score to 5. But I think the author should claim that the focus on the diagonal linear model in the main boy to avoid confusion.",
" I tota... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5,
9,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
5,
3
] | [
"BgrYxDXnHg",
"RVqYcbLfJh2",
"mllJ0vhE9aO",
"T7S2evoKn0H",
"otI8DLHeM7",
"5F2xrc8FjAg",
"9eKOmK3htW_",
"9eKOmK3htW_",
"qol0TUVnHW",
"OjyF9Sxa2OB",
"OjyF9Sxa2OB",
"nips_2022_YODI3TcLX",
"nips_2022_YODI3TcLX",
"nips_2022_YODI3TcLX",
"nips_2022_YODI3TcLX",
"nips_2022_YODI3TcLX"
] |
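The model class in the record above is concrete enough to simulate: an N-layer diagonal linear network whose effective weight vector is the elementwise product of N factor vectors, trained by a sub-gradient method on an l1 loss while a fraction of the responses is grossly corrupted. The toy experiment below is a sketch from the abstract; the initialization scale, step size, and corruption model are arbitrary illustrative choices.

```python
import numpy as np

def fit_deep_diagonal(X, y, depth=3, steps=4000, lr=0.02, seed=0):
    """Sub-gradient method on the l1 loss with an N-layer diagonal
    parameterization: prediction = X @ (u_1 * u_2 * ... * u_depth)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    U = 0.5 * np.ones((depth, d)) + 0.01 * rng.normal(size=(depth, d))
    for _ in range(steps):
        w = U.prod(axis=0)                        # effective weight vector
        g_w = X.T @ np.sign(X @ w - y) / len(y)   # l1-loss sub-gradient in w
        U_new = U.copy()
        for i in range(depth):                    # chain rule through the product
            others = np.delete(U, i, axis=0).prod(axis=0)
            U_new[i] = U[i] - lr * g_w * others
        U = U_new
    return U.prod(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, d = 300, 20
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true
    bad = rng.choice(n, size=n // 5, replace=False)   # 20% gross corruption
    y[bad] += 50.0 * rng.normal(size=len(bad))
    w_hat = fit_deep_diagonal(X, y, depth=3)
    print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```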
nips_2022_vy7B8z0-4D | Aligning individual brains with fused unbalanced Gromov Wasserstein | Individual brains vary in both anatomy and functional organization, even within a given species. Inter-individual variability is a major impediment when trying to draw generalizable conclusions from neuroimaging data collected on groups of subjects. Current co-registration procedures rely on limited data, and thus lead to very coarse inter-subject alignments.
In this work, we present a novel method for inter-subject alignment based on Optimal Transport, denoted as Fused Unbalanced Gromov Wasserstein (FUGW). The method aligns two cortical surfaces based on the similarity of their functional signatures in response to a variety of stimuli, while penalizing large deformations of individual topographic organization.
We demonstrate that FUGW is suited for whole-brain landmark-free alignment. The unbalanced formulation makes it possible to deal with the fact that functional areas vary in size across subjects. Results show that FUGW alignment significantly increases between-subject correlation of activity during new independent fMRI tasks and runs, and leads to more precise maps of fMRI results at the group level. | Accept | This paper uses optimal transport for aligning cortical surfaces based on the similarity of their functional signatures under different stimulations. The paper is well written, and the experimental setup is sound. The authors added experiments and clarifications to address reviewers' comments and concerns. The reviewers provided a consensus accept rating for this paper.
| test | [
"-rSffR03XY",
"_idYzjKRZ3",
"-1VkZoOUyO2s",
"eIQ8V93RCne",
"rkuCAG3jw0S",
"4GeEmhC5Sbg"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the time your spent reviewing this work and for your comments on the manuscript.\n\n> I would also be interested in computational comparisons; how much more difficult is it to do this vs MSM?\n\n- Timings for MSM were added line 219. Generally speaking, depending on the configuration we use, timings... | [
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
5,
4,
3
] | [
"4GeEmhC5Sbg",
"rkuCAG3jw0S",
"eIQ8V93RCne",
"nips_2022_vy7B8z0-4D",
"nips_2022_vy7B8z0-4D",
"nips_2022_vy7B8z0-4D"
] |
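Reading the FUGW abstract above, the objective couples three ingredients: a feature-matching (Wasserstein) term on functional signatures, a geometry-preserving (Gromov-Wasserstein) term on intra-subject distances, and KL-relaxed marginals (the unbalanced part). The numpy sketch below only evaluates such an objective for a given coupling, using the standard marginal trick to avoid the quartic sum; the exact divergences and weightings in the paper may differ, so treat this as an assumed, schematic form.

```python
import numpy as np

def fugw_objective(P, F_s, F_t, D_s, D_t, w_s, w_t, alpha=0.5, rho=1.0):
    """Evaluate a fused unbalanced Gromov-Wasserstein style objective.

    P   : (n, m) coupling between source and target vertices.
    F_* : (n, k) / (m, k) functional feature profiles per vertex.
    D_* : (n, n) / (m, m) intra-subject (geodesic) distance matrices.
    w_* : reference vertex weights, e.g. uniform.
    """
    eps = 1e-12
    p, q = P.sum(axis=1), P.sum(axis=0)        # coupling marginals
    # Wasserstein term: cost of matching functional signatures.
    C = ((F_s[:, None, :] - F_t[None, :, :]) ** 2).sum(axis=-1)
    w_term = (C * P).sum()
    # Gromov-Wasserstein term, sum_{ijkl} (D_s[i,k] - D_t[j,l])^2 P_ij P_kl,
    # expanded via the marginal trick to avoid the explicit quartic sum.
    gw = p @ (D_s ** 2) @ p + q @ (D_t ** 2) @ q \
        - 2.0 * ((D_s @ P @ D_t.T) * P).sum()

    def kl(a, b):                              # generalized KL divergence
        return (a * np.log((a + eps) / (b + eps)) - a + b).sum()

    return (1 - alpha) * w_term + alpha * gw + rho * (kl(p, w_s) + kl(q, w_t))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 30, 40, 5
    F_s, F_t = rng.normal(size=(n, k)), rng.normal(size=(m, k))
    X_s, X_t = rng.normal(size=(n, 3)), rng.normal(size=(m, 3))
    D_s = np.linalg.norm(X_s[:, None] - X_s[None], axis=-1)
    D_t = np.linalg.norm(X_t[:, None] - X_t[None], axis=-1)
    w_s, w_t = np.full(n, 1 / n), np.full(m, 1 / m)
    P = np.outer(w_s, w_t)                     # independence coupling
    print(fugw_objective(P, F_s, F_t, D_s, D_t, w_s, w_t))
```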
nips_2022_EgMbj9yWrMI | Minimax Regret for Cascading Bandits | Cascading bandits is a natural and popular model that frames the task of learning to rank from Bernoulli click feedback in a bandit setting. For the case of unstructured rewards, we prove matching upper and lower bounds for the problem-independent (i.e., gap-free) regret, both of which strictly improve the best known. A key observation is that the hard instances of this problem are those with small mean rewards, i.e., the small click-through rates that are most relevant in practice. Based on this, and the fact that small mean implies small variance for Bernoullis, our key technical result shows that variance-aware confidence sets derived from the Bernstein and Chernoff bounds lead to optimal algorithms (up to log terms), whereas Hoeffding-based algorithms suffer order-wise suboptimal regret. This sharply contrasts with the standard (non-cascading) bandit setting, where the variance-aware algorithms only improve constants. In light of this and as an additional contribution, we propose a variance-aware algorithm for the structured case of linear rewards and show its regret strictly improves the state-of-the-art. | Accept | All the reviewers were generally happy with this paper. There were some comments suggesting a stronger experimental section and a fuller discussion of results and extensions (e.g., gap-dependent bounds, what happens in the K->L regime), but everyone felt that the manuscript as written was solid enough to merit acceptance. I encourage the authors to incorporate the discussions on these points in the final manuscript.
| train | [
"scMOnre6u6Q",
"qlUCI8OiFxv",
"Rx6He-uZHzp",
"UUyqM_2OZu",
"OAqy8f_zBm",
"3whdN9JPIn_",
"qJEzhn-ZKf-",
"oK2aNLp57eT",
"Kw89jcJRGX",
"C2EEQjC1Pcc",
"sM4MPI5aFh",
"TLSxcoim_aV"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors for the clarification.",
" Thanks for your comment; the intuition is very clear. We will add a remark discussing this.",
" The authors' responses are satisfactory, so I would keep the strong acceptance recommendation. \n\nHere are some intuitions on why the integer constraint can be rem... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"OAqy8f_zBm",
"Rx6He-uZHzp",
"3whdN9JPIn_",
"qJEzhn-ZKf-",
"TLSxcoim_aV",
"sM4MPI5aFh",
"C2EEQjC1Pcc",
"Kw89jcJRGX",
"nips_2022_EgMbj9yWrMI",
"nips_2022_EgMbj9yWrMI",
"nips_2022_EgMbj9yWrMI",
"nips_2022_EgMbj9yWrMI"
] |
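The key technical point in the cascading-bandits record above is that Bernstein-style (variance-aware) confidence bounds are order-wise better than Hoeffding's on the hard instances with small click probabilities, because small Bernoulli means imply small variances. The simulation below is a sketch of that idea rather than the authors' exact algorithm: a learner ranks the K items with the highest empirical-Bernstein UCBs and observes cascading click feedback on the prefix up to the first click.

```python
import numpy as np

def bernstein_ucb(mean, var, n, t):
    """Empirical-Bernstein upper confidence bound (standard form)."""
    log_t = np.log(max(t, 2))
    return mean + np.sqrt(2.0 * var * log_t / n) + 3.0 * log_t / n

def run_cascade(w, K=4, T=20000, seed=0):
    rng = np.random.default_rng(seed)
    L = len(w)
    n, s, ss = np.ones(L), np.zeros(L), np.zeros(L)   # counts, sums, sq. sums
    best = 1 - np.prod(1 - np.sort(w)[-K:])           # best list's click prob.
    regret = 0.0
    for t in range(1, T + 1):
        mean = s / n
        var = np.maximum(ss / n - mean ** 2, 0.0)
        arms = np.argsort(bernstein_ucb(mean, var, n, t))[-K:][::-1]
        clicks = rng.random(K) < w[arms]              # Bernoulli click feedback
        first = int(np.argmax(clicks)) if clicks.any() else K
        for i in range(min(first + 1, K)):            # only the prefix is observed
            x = float(clicks[i])
            n[arms[i]] += 1; s[arms[i]] += x; ss[arms[i]] += x * x
        regret += best - (1 - np.prod(1 - w[arms]))
    return regret

if __name__ == "__main__":
    # A "hard" instance: small click-through rates, as the paper highlights.
    w = np.array([0.05, 0.04, 0.04, 0.03, 0.01, 0.01, 0.005, 0.005])
    print("cumulative regret:", run_cascade(w))
```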
nips_2022_xxgp42Qz6dL | EGSDE: Unpaired Image-to-Image Translation via Energy-Guided Stochastic Differential Equations | Score-based diffusion models (SBDMs) have achieved the SOTA FID results in unpaired image-to-image translation (I2I). However, we notice that existing methods entirely ignore the training data in the source domain, leading to sub-optimal solutions for unpaired I2I. To this end, we propose energy-guided stochastic differential equations (EGSDE), which employ an energy function pretrained on both the source and target domains to guide the inference process of a pretrained SDE for realistic and faithful unpaired I2I. Building upon two feature extractors, we carefully design the energy function such that it encourages the transferred image to preserve the domain-independent features and discard domain-specific ones. Further, we provide an alternative explanation of the EGSDE as a product of experts, where each of the three experts (corresponding to the SDE and two feature extractors) solely contributes to faithfulness or realism. Empirically, we compare EGSDE to a large family of baselines on three widely adopted unpaired I2I tasks under four metrics. EGSDE not only consistently outperforms existing SBDMs-based methods in almost all settings but also achieves the SOTA realism results without harming faithfulness. Furthermore, EGSDE allows for flexible trade-offs between realism and faithfulness, and we improve the realism results further (e.g., FID of 51.04 in Cat $\to$ Dog and FID of 50.43 in Wild $\to$ Dog on AFHQ) by tuning hyper-parameters. The code is available at https://github.com/ML-GSAI/EGSDE. | Accept | The paper proposes an unpaired image-to-image translation method based on score-based diffusion models. Compared to prior works [7, 29], the paper adds two energy functions pretrained on both the source and target domains in a product-of-experts framework. The paper has received positive reviews. Reviewers found the paper well-written, the idea intuitive, and the experimental results comprehensive. The rebuttal further addressed the concerns regarding the user study, running time, and missing comparisons. The AC agreed with the reviewers’ consensus and recommended accepting the paper.
| train | [
"WtjwFWXgJ65",
"KB4X4B25soQ",
"uUE51cw1QAG",
"fQnutdU5IJg",
"Sa6Tv3cosL",
"SoW9ivLT-48",
"ToU6pXFYO-e",
"kkH6s-VD5B-",
"CMN2MmqprRE",
"06ykWlTBiEI",
"YonqV_d0_s",
"YhQv9ziVApD",
"PElGRS3G9eE",
"N2v_cHpZ0Y4",
"gtTpTfB9UHm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. The rebuttal addresses my concerns. I lean towards weak acceptance.",
" Dear reviewers, \n\nThank you all for providing valuable comments. The authors have provided detailed responses to your comments. Has the response addressed your major concerns?\n\nI would appreciate it a lot if... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
3
] | [
"SoW9ivLT-48",
"nips_2022_xxgp42Qz6dL",
"fQnutdU5IJg",
"CMN2MmqprRE",
"nips_2022_xxgp42Qz6dL",
"gtTpTfB9UHm",
"N2v_cHpZ0Y4",
"PElGRS3G9eE",
"YhQv9ziVApD",
"nips_2022_xxgp42Qz6dL",
"nips_2022_xxgp42Qz6dL",
"nips_2022_xxgp42Qz6dL",
"nips_2022_xxgp42Qz6dL",
"nips_2022_xxgp42Qz6dL",
"nips_20... |
nips_2022_9wCQVgEWO2J | Fast Bayesian Inference with Batch Bayesian Quadrature via Kernel Recombination | Calculation of Bayesian posteriors and model evidences typically requires numerical integration.
Bayesian quadrature (BQ), a surrogate-model-based approach to numerical integration, is capable of superb sample efficiency, but its lack of parallelisation has hindered its practical applications.
In this work, we propose a parallelised (batch) BQ method, employing techniques from kernel quadrature, that possesses an empirically exponential convergence rate.
Additionally, just as with Nested Sampling, our method permits simultaneous inference of both posteriors and model evidence.
Samples from our BQ surrogate model are re-selected to give a sparse set of samples, via a kernel recombination algorithm, requiring negligible additional time to increase the batch size.
Empirically, we find that our approach significantly outperforms both state-of-the-art BQ techniques and Nested Sampling in sampling efficiency on various real-world datasets, including lithium-ion battery analytics.
I want to commend the authors for their enlightening contributions to that discussion, which assuaged most of the reviewers' initial complaints.
_However, I would also like to stress that it is critical that the fruits of this discussion be incorporated into a revised version of this manuscript._ The reviewers are unanimous in this opinion.
In particular, I direct the authors to the conversation with reviewer CnRy and the points raised about:
- the manner in which the theoretical results were initially presented in the discussion/abstract, and
- more clarity regarding the assumptions made in Theorem 1 and the notation used to communicate these assumptions and the theorem.
| train | [
"uRj6fDjsTYN",
"3MmhQ9scINY",
"goI2gI9Ikpx",
"u6x4C8S3Ub8",
"pLjnMeaIfe",
"MxbNTn6FOzA",
"W2Ig2lVqs9I",
"e_TR1nmhzK",
"Ye3qeYqk8Cf",
"17qexHNVaBwX",
"2kxJXWPKb4V",
"cxH4BhlBt-X",
"ErnV55yI0SW",
"U6xXDbXe6eF",
"MXFSYjTO5ys",
"eZlSovWCTlZ"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for increasing your score. We appreciate your help to improve readability. We believe our manuscript becomes much better than before. ",
" I have updated my score to 5.",
" Thank you for being constructive and supportive to improve our paper. Yes, you are correct on all points. We have updated the t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"3MmhQ9scINY",
"goI2gI9Ikpx",
"u6x4C8S3Ub8",
"pLjnMeaIfe",
"W2Ig2lVqs9I",
"W2Ig2lVqs9I",
"2kxJXWPKb4V",
"Ye3qeYqk8Cf",
"cxH4BhlBt-X",
"nips_2022_9wCQVgEWO2J",
"eZlSovWCTlZ",
"MXFSYjTO5ys",
"U6xXDbXe6eF",
"nips_2022_9wCQVgEWO2J",
"nips_2022_9wCQVgEWO2J",
"nips_2022_9wCQVgEWO2J"
] |
nips_2022_y-E1htoQl-n | Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning | Recent studies reveal that a well-trained deep reinforcement learning (RL) policy can be particularly vulnerable to adversarial perturbations on input observations. Therefore, it is crucial to train RL agents that are robust against any attack with a bounded budget. Existing robust training methods in deep RL either treat correlated steps separately, ignoring the robustness of long-term rewards, or train the agents and an RL-based attacker together, doubling the computational burden and sample complexity of the training process. In this work, we propose a strong and efficient robust training framework for RL, named Worst-case-aware Robust RL (WocaR-RL), which directly estimates and optimizes the worst-case reward of a policy under bounded l_p attacks without requiring extra samples for learning an attacker. Experiments on multiple environments show that WocaR-RL achieves state-of-the-art performance under various strong attacks, and obtains significantly higher training efficiency than prior state-of-the-art robust training methods. The code of this work is available at https://github.com/umd-huang-lab/WocaR-RL. | Accept | This paper introduces a novel adversarial training method that directly computes the worst-case performance under budget-bounded attacks during the training process. As a result, the method is more sample-efficient and achieves state-of-the-art performance across a number of test cases.
The reviewers agree that the contributions are novel and well validated, making this paper a clear acceptance.
| train | [
"WsqxmyP6J_l",
"M5wARVBa4k0",
"MSHVU5CXzRP",
"9T5IZK6ZgnM",
"2-xQL6eUtlG",
"IofneG06S7",
"R9lcjPUc3jh",
"0Wl_tvme4pZ",
"H6xokf2JHQF",
"5VCqsZaMU-h",
"X48UOEcyXy",
"LMnV-_4RDd",
"kJ54YZ8LvE5",
"O64FOhpb_H8",
"bcZQcNn22RT",
"XKxvpm_hLHH"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for the detailed responses and my questions are addressed. I will increase my score.",
" $\\newcommand{hau}{\\textcolor{red}{\\mathrm{9hAu}}}$\n\nDear Reviewer $\\hau$,\n\nThank you again for reviewing our paper! We would like to politely remind you that we have addressed all your concerns and... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"LMnV-_4RDd",
"kJ54YZ8LvE5",
"kJ54YZ8LvE5",
"nips_2022_y-E1htoQl-n",
"XKxvpm_hLHH",
"XKxvpm_hLHH",
"O64FOhpb_H8",
"O64FOhpb_H8",
"bcZQcNn22RT",
"bcZQcNn22RT",
"kJ54YZ8LvE5",
"kJ54YZ8LvE5",
"nips_2022_y-E1htoQl-n",
"nips_2022_y-E1htoQl-n",
"nips_2022_y-E1htoQl-n",
"nips_2022_y-E1htoQl-n... |
nips_2022_mjVZw5ADSbX | CoNT: Contrastive Neural Text Generation | Recently, contrastive learning has attracted increasing interest in neural text generation as a new solution to alleviate the exposure bias problem. It introduces a sequence-level training signal which is crucial to generation tasks that always rely on auto-regressive decoding. However, previous methods using contrastive learning in neural text generation usually lead to inferior performance. In this paper, we analyse the underlying reasons and propose a new Contrastive Neural Text generation framework, CoNT. CoNT addresses bottlenecks that prevent contrastive learning from being widely adopted in generation tasks from three aspects -- the construction of contrastive examples, the choice of the contrastive loss, and the strategy in decoding. We validate CoNT on five generation tasks with ten benchmarks, including machine translation, summarization, code comment generation, data-to-text generation and commonsense generation. Experimental results show that CoNT clearly outperforms its baseline on all ten benchmarks by a convincing margin. In particular, CoNT surpasses the previous most competitive contrastive learning method for text generation by 1.50 BLEU on machine translation and 1.77 ROUGE-1 on summarization, respectively. It achieves a new state of the art on summarization, code comment generation (without external data) and data-to-text generation. | Accept | This paper received mostly positive ratings, but the reviewers also had some overall reservations about the novelty of this work, which makes this paper relatively borderline.
Pros:
- The paper makes three targeted contributions to contrastive learning in text generation to help mitigate exposure bias problem, and the approach of the paper outperforms SoTA contrastive learning methods.
- The work is supported by extensive experimentation on a wide variety of tasks (MT, summarization, code comment generation, data-to-text generation, and commonsense generation).
- Experimentation appears to be solid for the most part, but reviewers expressed some minor concerns (e.g., about the summarization evaluation)
Cons:
- The reviewers’ main concern is with the relatively limited novelty of the work, as many of its ideas (e.g., self-generated negatives and pairwise loss) are well-known. That said, the application of these methods to text generation appears to be novel.
- The work contains no human evaluation, but automated evaluation is defensible for some of the tasks (MT and summarization at least).
In sum, the work is quite solid and suffers from no major flaws, but the ideas underlying the methods of the paper are not particularly surprising. | train | [
"YgH_2l6d4pD",
"FLpqSQqXp54",
"ZoLtzGPrSOm",
"lhQa_q45B6",
"YHuoeJ1yBo",
"KdP0JWvbchN1",
"BvtQXz-TdOF",
"MFim5UZGNls",
"7GDgQUWHBZ5",
"Oyh3ol10BGx"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your feedback and thanks again for the time and efforts you provided! Based on your comments, we will add a more thorough human evaluation and a new section named **Advanced Evaluation Metrics** in our experiments part where we will further validate CoNT with metrics like BLEURT, BERTScore.",
" ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"FLpqSQqXp54",
"ZoLtzGPrSOm",
"Oyh3ol10BGx",
"7GDgQUWHBZ5",
"MFim5UZGNls",
"BvtQXz-TdOF",
"nips_2022_mjVZw5ADSbX",
"nips_2022_mjVZw5ADSbX",
"nips_2022_mjVZw5ADSbX",
"nips_2022_mjVZw5ADSbX"
] |
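CoNT's contrastive examples come from the model's own beam search, and its contrastive loss is pairwise: candidates with better sequence-level quality (e.g. BLEU against the reference) should score higher similarity to the source, with margins that grow with rank distance. The snippet below implements such an N-pair margin loss over precomputed embeddings; cosine similarity and the rank-scaled margin are common design choices assumed here, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pairwise_margin_loss(src_emb, cand_embs, quality, margin=0.01):
    """Pairwise ranking loss over self-generated candidates.

    src_emb   : (d,) embedding of the source sequence.
    cand_embs : (n, d) embeddings of n candidates from beam search.
    quality   : (n,) sequence-level scores (e.g. BLEU vs. the reference).
    For each pair ranked i-th and j-th (i < j) by quality, require
    sim(src, cand_i) >= sim(src, cand_j) + margin * (j - i).
    """
    sims = F.cosine_similarity(src_emb.unsqueeze(0), cand_embs, dim=-1)
    sims = sims[torch.argsort(quality, descending=True)]   # best-first order
    n = sims.numel()
    i, j = torch.triu_indices(n, n, offset=1)              # all pairs i < j
    return F.relu(sims[j] - sims[i] + margin * (j - i).float()).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    src = torch.randn(128)
    cands = torch.randn(6, 128, requires_grad=True)
    bleu = torch.tensor([0.9, 0.7, 0.6, 0.4, 0.2, 0.1])
    loss = pairwise_margin_loss(src, cands, bleu)
    loss.backward()
    print(float(loss))
```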
nips_2022_oiztwzmM9l | Sample-Then-Optimize Batch Neural Thompson Sampling | Bayesian optimization (BO), which uses a Gaussian process (GP) as a surrogate to model its objective function, is popular for black-box optimization. However, due to the limitations of GPs, BO underperforms in some problems such as those with categorical, high-dimensional or image inputs. To this end, recent works have used the highly expressive neural networks (NNs) as the surrogate model and derived theoretical guarantees using the theory of neural tangent kernel (NTK). However, these works suffer from the limitations of the requirement to invert an extremely large parameter matrix and the restriction to the sequential (rather than batch) setting. To overcome these limitations, we introduce two algorithms based on the Thompson sampling (TS) policy named Sample-Then-Optimize Batch Neural TS (STO-BNTS) and STO-BNTS-Linear. To choose an input query, we only need to train an NN (resp. a linear model) and then choose the query by maximizing the trained NN (resp. linear model), which is equivalently sampled from the GP posterior with the NTK as the kernel function. As a result, our algorithms sidestep the need to invert the large parameter matrix yet still preserve the validity of the TS policy. Next, we derive regret upper bounds for our algorithms with batch evaluations, and use insights from batch BO and NTK to show that they are asymptotically no-regret under certain conditions. Finally, we verify their empirical effectiveness using practical AutoML and reinforcement learning experiments. | Accept | The authors introduced two asymptotically no-regret neural Thompson sampling algorithms. They derived regret upper bounds and showed that they are asymptotically no-regret under certain conditions. They verified their empirical effectiveness with AutoML and reinforcement learning experiments.
All reviewers liked this paper. Please note, however, that it is somewhat surprising that in some cases the standard GP-UCB and GP-TS competitors performed very badly. One of the reviewers reproduced the Lunar-Lander experiments in BoTorch and achieved much better performance for these competitor methods.
| val | [
"YcK-cloBrZJ",
"dbHYCJrz714",
"glyYyzefsWDY",
"DBBB0YCVv3We",
"AgyZHNmUVAH",
"Yv1PXlqI9I7",
"cffJjIg4KkO",
"vWYQ7wN3ry8",
"pam-URklTpq",
"mF1WCAgL9yj",
"ssmne1oT7f",
"eVL9jY_T-5D",
"UHql54ykP3E",
"EbMREVMMjle"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > 1. theorem 2 (a), when $T$ is fixed, I don't think \"asymptotic\" would be an accurate word to describe the result since now there is no term that will go to infinity.\n\nThank you for pointing this out. We agree with you that \"asymptotic\" is inaccurate here, and we will revise the claim of \"STO-BNTS-Linear ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
4
] | [
"glyYyzefsWDY",
"DBBB0YCVv3We",
"EbMREVMMjle",
"mF1WCAgL9yj",
"Yv1PXlqI9I7",
"cffJjIg4KkO",
"EbMREVMMjle",
"UHql54ykP3E",
"eVL9jY_T-5D",
"ssmne1oT7f",
"nips_2022_oiztwzmM9l",
"nips_2022_oiztwzmM9l",
"nips_2022_oiztwzmM9l",
"nips_2022_oiztwzmM9l"
] |
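The sample-then-optimize recipe summarized above replaces explicit GP/NTK posterior sampling (and the large parameter-matrix inversion it requires) with two cheap steps: train a freshly initialized network on the observations with randomly perturbed targets, then query the maximizer of the trained network. A hypothetical finite-pool version with a small MLP is sketched below; the architecture, noise scale, and optimizer are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

def sto_ts_query(X_obs, y_obs, X_pool, noise_sd=0.1, epochs=200):
    """One sample-then-optimize Thompson step over a finite candidate pool.

    A freshly initialized network trained on noise-perturbed targets acts as
    an (approximate) posterior sample; its maximizer is the next query.
    """
    y_pert = y_obs + noise_sd * torch.randn_like(y_obs)
    net = nn.Sequential(nn.Linear(X_obs.shape[1], 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(epochs):                     # 1) train the network
        opt.zero_grad()
        loss = ((net(X_obs).squeeze(-1) - y_pert) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():                       # 2) maximize the network
        return int(net(X_pool).squeeze(-1).argmax())

if __name__ == "__main__":
    torch.manual_seed(0)
    f = lambda x: torch.sin(3.0 * x).squeeze(-1)        # black-box objective
    X_pool = torch.linspace(-2, 2, 200).unsqueeze(-1)
    idx = [0, 50, 120, 199]                             # initial design
    for _ in range(10):
        X_obs = X_pool[idx]
        y_obs = f(X_obs) + 0.05 * torch.randn(len(idx))
        idx.append(sto_ts_query(X_obs, y_obs, X_pool))
    best = max(idx, key=lambda i: float(f(X_pool[i:i + 1])))
    print("best queried x:", float(X_pool[best]))       # sin(3x) peaks near 0.52
```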
nips_2022_Cgmk9CicWFl | RSA: Reducing Semantic Shift from Aggressive Augmentations for Self-supervised Learning | Most recent self-supervised learning methods learn visual representation by contrasting different augmented views of images. Compared with supervised learning, more aggressive augmentations have been introduced to further improve the diversity of training pairs. However, aggressive augmentations may distort images' structures, leading to a severe semantic shift problem in which augmented views of the same image may not share the same semantics, thus degrading transfer performance. To address this problem, we propose a new SSL paradigm, which counteracts the impact of semantic shift by balancing the role of weak and aggressively augmented pairs. Specifically, semantically inconsistent pairs are in the minority, and we treat them as noisy pairs. Note that deep neural networks (DNNs) exhibit a crucial memorization effect: they tend to first memorize clean (majority) examples before overfitting to noisy (minority) examples. Therefore, we set a relatively large weight for aggressively augmented data pairs at the early learning stage. As training goes on, the model begins to overfit noisy pairs. Accordingly, we gradually reduce the weights of aggressively augmented pairs. In doing so, our method can better embrace aggressive augmentations and neutralize the semantic shift problem. Experiments show that our model achieves 73.1% top-1 accuracy on ImageNet-1K with ResNet-50 for 200 epochs, which is a 2.5% improvement over BYOL. Moreover, experiments also demonstrate that the learned representations can transfer well for various downstream tasks. Code is released at: https://github.com/tmllab/RSA.
| Accept | This paper aims to improve SSL pretraining by adjusting the strength of augmentations applied at different points in training, providing a large number of aggressive augmentations early in training with this rate decreasing over time to prevent the model from overfitting to noisy examples. Using this approach, the authors demonstrate substantial improvements over prior methods. All reviewers recognized the soundness of the motivation and were generally convinced by the experiments, though there were some concerns about whether the approach is too incremental since it is relatively simple. I strongly agree with the authors that simplicity is not a downside of an approach, but rather a benefit, and the fact that the approach works with such a small modification makes it more likely that this result is not caused by an obscure mix of hyperparameters. I also note that the authors engaged extensively with the reviewers, providing a number of additional experiments comparing with other approaches and providing further tests of the impact of the hyperparameter they introduce. I think this is a worthwhile paper which will have impact going forward, and I recommend acceptance. | train | [
"dWbE6jcMBWz",
"ubsN1lrBie6",
"xoXjSKyOcE",
"Z77aL1IDrvX4",
"UxJQX3h7KFwk",
"Zg_hgTYmnrv",
"iub4fvpfvCT",
"pwsM-koNCU0",
"Hgso0wSbYGQ",
"LcccBrWBtOo",
"860oYDt9Kcm",
"5d7K005DTb2",
"TSIvmzpnavs",
"XF1o9VSigxf",
"dqTEbRhvd9d",
"dZ-Cefk_5A3",
"NZ59SUDmgvy",
"OQCrzhMnja",
"34a-AEicL... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Combining with the authors' rebuttal and other reviewers' comment,\nI have raised my score.",
" Dear Reviewer Ckh1,\n\nThe rolling discussion period will be closed soon. Can you inform us if you have any remaining concerns? Thanks very much.\n\nBest wishes,",
" Thank you so much for your prompt reply.\n\nAs y... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"Ev95-efM20L",
"Ev95-efM20L",
"Z77aL1IDrvX4",
"UxJQX3h7KFwk",
"Zg_hgTYmnrv",
"pwsM-koNCU0",
"Ev95-efM20L",
"LcccBrWBtOo",
"TSIvmzpnavs",
"860oYDt9Kcm",
"5d7K005DTb2",
"NZ59SUDmgvy",
"34a-AEicLXc",
"OQCrzhMnja",
"34a-AEicLXc",
"Ev95-efM20L",
"jSVZsS_Gjzg",
"nips_2022_Cgmk9CicWFl",
... |
nips_2022_unb1wyXf-aC | Concurrent 3D super resolution on intensity and segmentation maps improves detection of structural effects in neurodegenerative disease | We propose a new perceptual super resolution (PSR) method for 3D neuroimaging and evaluate its performance in detecting brain changes due to neurodegenerative disease. The method, concurrent super resolution and segmentation (CSRS), is trained on volumetric brain data to consistently upsample both an image intensity channel and associated segmentation labels. The simultaneous nature of the method improves not only the resolution of the images but also the resolution of associated segmentations, thereby making the approach directly applicable to existing labeled datasets. One challenge to real-world evaluation of SR methods such as CSRS is the lack of high-resolution ground truth in the target application data: clinical neuroimages. We therefore evaluate CSRS effectiveness in an adjacent, clinically relevant signal detection problem: quantifying cross-sectional and longitudinal change across a set of phenotypically heterogeneous but related disorders that exhibit known and differentiable patterns of brain atrophy. We contrast several 3D PSR loss functions in this paradigm and show that CSRS consistently increases the ability to detect regional atrophy both longitudinally and cross-sectionally in each of five related diseases.
| Reject | This paper has mixed evaluations, with two reviewers recommending accept and three recommending reject. After carefully reading the paper and the discussion, I agree with reviewers hYgi, Ho1b, aKcw, Uzm9 in their main criticisms. The paper still requires major revisions before it can be accepted, including, but not limited to, an improvement in the clarity of the presentation and more experimental comparisons against other, perhaps even simpler, approaches.
"0zR8jn9zVHQ",
"sljjoRv8zUR",
"oChazHPoaN",
"JpcSjlL7Wef",
"g-t4eUiukAE",
"DToPx3hidrF",
"PYevvIcRqTl",
"MoOPbOa_WgS",
"wSxi1MrjEy",
"fuwrgKvtUu",
"mtNr7thIqrf",
"UP-MV6Mth_",
"s1s4idxrD-",
"VlNE9fo5iz4",
"d0QHJnW5xx4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" we revised our contribution bullet points to read: \n\ndemonstration that loss choice impacts detection power in \\textcolor{blue}{natural history studies of neurodegenerative disease. Standard intensity similarity and segmentation overlap metrics, on the other hand, do not discriminate performance between the c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
3,
3,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4,
4,
3
] | [
"nips_2022_unb1wyXf-aC",
"PYevvIcRqTl",
"DToPx3hidrF",
"MoOPbOa_WgS",
"d0QHJnW5xx4",
"UP-MV6Mth_",
"mtNr7thIqrf",
"fuwrgKvtUu",
"nips_2022_unb1wyXf-aC",
"nips_2022_unb1wyXf-aC",
"nips_2022_unb1wyXf-aC",
"nips_2022_unb1wyXf-aC",
"nips_2022_unb1wyXf-aC",
"nips_2022_unb1wyXf-aC",
"nips_2022... |
nips_2022_Xwz9B6LDM5c | Communication Efficient Federated Learning for Generalized Linear Bandits | Contextual bandit algorithms have been recently studied under the federated learning setting to satisfy the demand of keeping data decentralized and pushing the learning of bandit models to the client side. But limited by the required communication efficiency, existing solutions are restricted to linear models to exploit their closed-form solutions for parameter estimation. Such a restricted model choice greatly hampers these algorithms' practical utility.
In this paper, we take the first step toward addressing this challenge by studying generalized linear bandit models under the federated learning setting. We propose a communication-efficient solution framework that employs online regression for local updates and offline regression for global updates. We rigorously prove that, although the setting is more general and challenging, our algorithm attains a sub-linear rate in both regret and communication cost, which is also validated by our extensive empirical evaluations. | Accept | Federated bandits are a current area of interest within the community and the paper provides valuable contributions. In particular, the authors deal with the rather general GLM setting, provide algorithms, and study the regret. It would be useful if the authors would use the discussions with the reviewers and the reviewers' comments to improve and polish the paper.
"X8Gphzab6B",
"44ca6V6hdxD",
"gMTlOQV3lb",
"uhgvD9EWr8e6",
"p8q-cubBdmWH",
"130D6V3LdyB",
"Qzny1nJ53G",
"dPW6Txu2gN",
"m61WN3jptKs",
"kmTeKZVriV0",
"nGVi9ij1uJM",
"pPZWTtMfsPb"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the reviewer's suggestion on our presentation in Section 4.2, and we have added more discussions to highlight our solution’s technical novelty compared with prior works in the revised version. We hope this makes the insights clearer.\n\nIf the reviewer was suggesting to have some informal notion of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"pPZWTtMfsPb",
"nips_2022_Xwz9B6LDM5c",
"130D6V3LdyB",
"pPZWTtMfsPb",
"pPZWTtMfsPb",
"nGVi9ij1uJM",
"kmTeKZVriV0",
"kmTeKZVriV0",
"nips_2022_Xwz9B6LDM5c",
"nips_2022_Xwz9B6LDM5c",
"nips_2022_Xwz9B6LDM5c",
"nips_2022_Xwz9B6LDM5c"
] |
nips_2022_1cJ1cbA6NLN | Brain Network Transformer | Human brains are commonly modeled as networks of Regions of Interest (ROIs) and their connections for the understanding of brain functions and mental disorders. Recently, Transformer-based models have been studied on different types of data, including graphs, and have been shown to bring broad performance gains. In this work, we study Transformer-based models for brain network analysis. Driven by the unique properties of the data, we model brain networks as graphs with nodes of fixed size and order, which allows us to (1) use connection profiles as node features to provide natural and low-cost positional information and (2) learn pair-wise connection strengths among ROIs with efficient attention weights across individuals that are predictive for downstream analysis tasks. Moreover, we propose an Orthonormal Clustering Readout operation based on self-supervised soft clustering and orthonormal projection. This design accounts for the underlying functional modules that determine similar behaviors among groups of ROIs, leading to distinguishable cluster-aware node embeddings and informative graph embeddings. Finally, we re-standardize the evaluation pipeline on ABIDE, the only publicly available large-scale brain network dataset, to enable meaningful comparison of different models. Experiment results show clear improvements of our proposed Brain Network Transformer on both the public ABIDE and our restricted ABCD datasets. The implementation is available at https://github.com/Wayfear/BrainNetworkTransformer. | Accept | This paper was reviewed by four reviewers. Three reviewers participated in the discussions; they were all ultimately convinced and provided positive recommendations. The fourth reviewer was negative about this paper and raised some concerns. The authors provided very detailed rebuttals, but the reviewer did not respond despite repeated reminders. I checked the comments and rebuttals and tend to believe most of the concerns of this reviewer have been addressed, at least to a large extent. Thus I recommend that this paper be accepted.
"8RIuSg1zGkr",
"DQuWNwZtvI",
"1nl5V1O1zPU",
"61rTzRQ0xKZ",
"OSdh9d5u52-",
"UGYsPfEV2ER",
"lAAjsv_WKah",
"moH-25wZ8w",
"WPeh9ZamEsG",
"uMtEiDo-1cl",
"sCnNj4jGnE7",
"NmBzhnkkUHV",
"2P6cLvREMcO",
"PiUZvbgNXqg",
"zvf_UY26_or",
"pDFWlsEfVA",
"C4qcCKLVXTT",
"LNRQKG0qYV",
"KoO4i0gSgwo",... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",... | [
" Dear reviewer baE1,\n\nMany thanks for your positive feedback.\n\nBest,\nAuthors",
" I appreciate the authors' detailed response as well as the discussion with the other reviewers. I think the authors have addressed my concerns and I've updated my score. \n",
" Dear reviewer wuUg,\n\nMany thanks for your init... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
4,
4
] | [
"DQuWNwZtvI",
"OSdh9d5u52-",
"cCtSN5Y0Qoc",
"uMl_mWVOQT9",
"b3wq_xuveLf",
"lAAjsv_WKah",
"FWwuPcaQYj",
"nips_2022_1cJ1cbA6NLN",
"2P6cLvREMcO",
"2P6cLvREMcO",
"PiUZvbgNXqg",
"zvf_UY26_or",
"FWwuPcaQYj",
"Zf1Kf9Di4lm",
"Cu4UIZgDve3",
"FWwuPcaQYj",
"FWwuPcaQYj",
"cCtSN5Y0Qoc",
"FWwu... |
nips_2022_SFeKNSxect | AZ-whiteness test: a test for signal uncorrelation on spatio-temporal graphs | We present the first whiteness hypothesis test for graphs, i.e., a whiteness test for multivariate time series associated with the nodes of a dynamic graph; as such, the test represents an important model assessment tool for graph deep learning, e.g., in forecasting setups. The statistical test aims at detecting existing serial dependencies among close-in-time observations, as well as spatial dependencies among neighboring observations given the underlying graph. The proposed AZ-test can be seen as a spatio-temporal extension of traditional tests designed for system identification to graph signals. The AZ-test is versatile, allowing the underlying graph to be dynamic, changing in topology and set of nodes over time, and weighted, thus accounting for connections of different strength, as is the case in many application scenarios such as sensor and transportation networks. The asymptotic distribution of the designed test can be derived under the null hypothesis without assuming identically distributed data. We show the effectiveness of the test on both synthetic and real-world problems, and illustrate how it can be employed to assess the quality of spatio-temporal forecasting models by analyzing the prediction residuals appended to the graph stream. | Accept | The paper studies the whiteness hypothesis test for spatio-temporal graphs, which is a fundamental problem and can be relevant to many machine learning tasks. The authors have done a great job in the rebuttal phase in addressing reviewers’ comments. I believe it is a worthwhile paper to be published in NeurIPS.
"SwZRweWqDjG",
"21mJB3mrdmx",
"DImRzeQigc",
"JXvf2YjHG25",
"P_IyukYYKir",
"qExTPPXO16P",
"P6ebkG7aOcU",
"YnQmX9M9f3S",
"CMF3q0Ezvh",
"1pfCiCxI5kJ",
"oZgwRcCSHNf",
"lFYOcjGrr1J",
"Pb5328epd9m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We followed your suggestion and added the above clarifications in the revised version of the paper. We are happy to further improve the paper presentation if there is anything else you would like us to add in.\n\nWe appreciate your willingness to increase the score of our paper, and hope you will update it at the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"DImRzeQigc",
"JXvf2YjHG25",
"CMF3q0Ezvh",
"Pb5328epd9m",
"Pb5328epd9m",
"Pb5328epd9m",
"lFYOcjGrr1J",
"oZgwRcCSHNf",
"1pfCiCxI5kJ",
"nips_2022_SFeKNSxect",
"nips_2022_SFeKNSxect",
"nips_2022_SFeKNSxect",
"nips_2022_SFeKNSxect"
] |
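My reading of the AZ-test description above is that the statistic aggregates signs of products of the (residual) signal across graph edges and across consecutive time steps, and is asymptotically standard normal under the null. The sketch below computes such a sign statistic for a scalar signal on a static, unweighted graph; the published test additionally covers weighted, dynamic graphs and multivariate signals, so this is schematic only.

```python
import numpy as np

def az_style_statistic(x: np.ndarray, edges) -> float:
    """Sign-based spatio-temporal whiteness statistic (schematic).

    x     : (T, N) array, one scalar observation per node per time step.
    edges : iterable of undirected (u, v) node pairs.
    Under a sign-symmetric white null, each sign(x_a * x_b) term is +1 or -1
    with equal probability, so the normalized sum is asymptotically N(0, 1).
    """
    T, N = x.shape
    edges = list(edges)
    spatial = sum(np.sign(x[t, u] * x[t, v])
                  for t in range(T) for u, v in edges)
    temporal = sum(np.sign(x[t, v] * x[t + 1, v])
                   for t in range(T - 1) for v in range(N))
    count = T * len(edges) + (T - 1) * N
    return float((spatial + temporal) / np.sqrt(count))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    edges = [(i, (i + 1) % 20) for i in range(20)]       # ring graph
    white = rng.normal(size=(500, 20))
    corr = white.copy()
    corr[1:] += 0.5 * corr[:-1]                          # inject serial correlation
    for name, sig in (("white", white), ("correlated", corr)):
        print(name, round(az_style_statistic(sig, edges), 2))
```

Applied to a forecaster's prediction residuals, a statistic far from zero flags structure the model failed to capture, which is the model-assessment use case the abstract emphasizes.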
nips_2022_X0m9q0IcsmX | ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial Viewpoints | Recent studies have demonstrated that visual recognition models lack robustness to distribution shift. However, current work mainly considers model robustness to 2D image transformations, leaving viewpoint changes in the 3D world less explored. In general, viewpoint changes are prevalent in various real-world applications (e.g., autonomous driving), making it imperative to evaluate viewpoint robustness. In this paper, we propose a novel method called ViewFool to find adversarial viewpoints that mislead visual recognition models. By encoding real-world objects as neural radiance fields (NeRF), ViewFool characterizes a distribution of diverse adversarial viewpoints under an entropic regularizer, which helps to handle the fluctuations of the real camera pose and mitigate the reality gap between the real objects and their neural representations. Experiments validate that the common image classifiers are extremely vulnerable to the generated adversarial viewpoints, which also exhibit high cross-model transferability. Based on ViewFool, we introduce ImageNet-V, a new out-of-distribution dataset for benchmarking viewpoint robustness of image classifiers. Evaluation results on 40 classifiers with diverse architectures, objective functions, and data augmentations reveal a significant drop in model performance when tested on ImageNet-V, which also suggests the possibility of leveraging ViewFool as an effective data augmentation strategy to improve viewpoint robustness. | Accept | In this paper, the authors study the problem of robustness in image classifiers – in particular the problem of adversarial robustness. Previous work in the field of adversarial robustness focused on identifying minimal non-realistic perturbations in pixel-space that maliciously alter the classification performance of a model. In this work, the authors constrain adversarial perturbations to the space of object and camera poses that lead to poor visual recognition performance. Importantly, the space considered is constrained to be physically plausible. They leverage recent advances in Neural Rendering (NeRF) to generate realistic 3D models of objects, and optimize for non-canonical poses that lead to poor predictive performance. Finally, the authors propose a new benchmark for evaluating viewpoint robustness (ImageNet-V) which may be used to assess the general quality of any image recognition system.
The reviewers identified several notable strengths including (1) the first work to identify adversarial viewpoint as a method for assessing robustness, (2) reasonable methodology for search and optimization, (3) solid and complete experiments. The reviewers did find some weaknesses in the presentation of the material and questioning details of the experimental setup but those points were largely addressed in the responses by the authors.
Given that robustness is a problem much larger than image recognition alone, I find the problem the authors have identified to be quite important to the larger community. My only suggestion is that it would be nice if the authors showed some analysis demonstrating a positive correlation between ImageNet-V accuracy and performance on other robustness measurements. That said, given the importance of the research topic and novelty of the approach, this work will be accepted for publication at this conference.
| train | [
"TP6d211Ite7",
"4C1KqAkygl3",
"yw1H8mdw_i",
"OLzYzn7xHW-",
"7-8fWfFKRT1",
"fCGWLeiEOA_",
"dQ73wyc1jk",
"DtL-wdZSt2H",
"mkxnxr1zB-",
"62nZus3dWi5",
"cl_mN93wZC-",
"2TZzd1EwH5e",
"SGZEQq_9YDO",
"kADM5xUUxKf",
"Zef4ZbJh_Jm",
"oCAPjagtbO"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the increase on score and valuable feedback. We will provide further clarification on the unbounded optimization and improve the paper in the final. ",
" This paper presents a new method using NeRF to generate adversarial viewpoints of real objects to evaluate viewpoint robustness of ima... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
5
] | [
"yw1H8mdw_i",
"nips_2022_X0m9q0IcsmX",
"OLzYzn7xHW-",
"mkxnxr1zB-",
"fCGWLeiEOA_",
"cl_mN93wZC-",
"nips_2022_X0m9q0IcsmX",
"oCAPjagtbO",
"Zef4ZbJh_Jm",
"kADM5xUUxKf",
"kADM5xUUxKf",
"SGZEQq_9YDO",
"nips_2022_X0m9q0IcsmX",
"nips_2022_X0m9q0IcsmX",
"nips_2022_X0m9q0IcsmX",
"nips_2022_X0m... |
nips_2022_Owz3dDKM32p | Discovery of Single Independent Latent Variable | Latent variable discovery is a central problem in data analysis with a broad range of applications in applied science.
In this work, we consider data given as an invertible mixture of two statistically independent components, and assume that one of the components is observed while the other is hidden. Our goal is to recover the hidden component.
For this purpose, we propose an autoencoder equipped with a discriminator.
Unlike the standard nonlinear ICA problem, which was shown to be non-identifiable, in the special case of ICA we consider here, we show that our approach can recover the component of interest up to an entropy-preserving transformation.
We demonstrate the performance of the proposed approach in several tasks, including image synthesis, voice cloning, and fetal ECG extraction. | Accept | Thanks to the authors for this submission, which tackles an interesting and widely appearing problem with a novel approach. The reviewers agreed that the submission is very well written, motivating the problem and detailing their approach quite clearly.
We also thank the authors for their thorough responses to reviewer questions and comments. One concern described is the extent to which the fECG experiment realistically and meaningfully evaluates the authors' approach. Reviewer gAAc expressed some concerns about the quantitative metric used by the authors in the fECG experiment. Their back and forth revealed the depth of the authors' knowledge of this application area, but did leave some questions unaddressed, namely experiments using simulated ECGs and stress tests on rare observations like premature ventricular contractions, twin pregnancies, or contractions. Additionally, reviewer gAAc asks a pointed question: how do the heart rate estimates from the extracted fECG and the Doppler signal directly compare? If I’m not mistaken, this comparison is feasible with the dataset analyzed, and could even be prepared in a subsequent draft.
Despite these open questions, this work does look at a breadth of applications in their experiments, strengthening the submission greatly. While I recommend accept, I strongly urge the authors to address these last points brought up by reviewer gAAc. | train | [
"Bwh4-iIz7h5",
"zQrCz7GHNUh",
"5MiEzpMaL-N3",
"CZK7vYP0xlv",
"kQoW5Q6U9RF",
"29FSMIZB4yE",
"TJf2EtgJcL2",
"YJ-yQNGCyx",
"xvP0xSmcidW",
"w0s9g8sEMDq",
"RNlBHbTF5wU",
"I6bxeB_45lE",
"ChGRk3dgg8U"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for authors' response.\n",
" Thanks for your response. I am satisfied and updated my grade.",
" I thank the authors for their response.",
" We thank the reviewers for the time and effort that they invested into the review of our paper, and for their helpful comments and suggestions. Please find our r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"xvP0xSmcidW",
"TJf2EtgJcL2",
"YJ-yQNGCyx",
"nips_2022_Owz3dDKM32p",
"RNlBHbTF5wU",
"RNlBHbTF5wU",
"I6bxeB_45lE",
"w0s9g8sEMDq",
"ChGRk3dgg8U",
"nips_2022_Owz3dDKM32p",
"nips_2022_Owz3dDKM32p",
"nips_2022_Owz3dDKM32p",
"nips_2022_Owz3dDKM32p"
] |
nips_2022__gA20SUfd4a | Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness | Stochastic and adversarial data are two widely studied settings in online learning. But many optimization tasks are neither i.i.d. nor fully adversarial, which makes it of fundamental interest to get a better theoretical understanding of the world between these extremes. In this work we establish novel regret bounds for online convex optimization in a setting that interpolates between stochastic i.i.d. and fully adversarial losses. By exploiting smoothness of the expected losses, these bounds replace a dependence on the maximum gradient length by the variance of the gradients, which was previously known only for linear losses. In addition, they weaken the i.i.d. assumption by allowing, for example, adversarially poisoned rounds, which were previously considered in the expert and bandit setting. Our results extend this to the online convex optimization framework. In the fully i.i.d. case, our bounds match the rates one would expect from results in stochastic acceleration, and in the fully adversarial case they gracefully deteriorate to match the minimax regret. We further provide lower bounds showing that our regret upper bounds are tight for all intermediate regimes in terms of the stochastic variance and the adversarial variation of the loss gradients. | Accept | While the reviewers raised some concerns in their reviews, overall they seem to be leaning towards acceptance. The setting/motivation is a nice one and the result is clean. I agree with the reviewers that the paper should be accepted for publication. | train | [
"Lg5QB8hnup4",
"rhDzjVFHwCH",
"F6Da9njmBeD",
"vLywJ6eqJyP",
"zejBawIULK",
"zz0TLDSGnR",
"_YE0ufOgFe46",
"bdY6NCF-GyX",
"rX1lT5tcosH",
"ahaN_Qy69Az"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" You are right, the statement 'In the i.i.d. case, using the last gradient would be a better estimate of $g_t$ than the average of the observed gradients.’ is a bit vague. Please let us clarify this:\nYou are right that averaging more past gradients would reduce the variance of the estimator, potentially gaining ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"rhDzjVFHwCH",
"F6Da9njmBeD",
"ahaN_Qy69Az",
"rX1lT5tcosH",
"bdY6NCF-GyX",
"bdY6NCF-GyX",
"nips_2022__gA20SUfd4a",
"nips_2022__gA20SUfd4a",
"nips_2022__gA20SUfd4a",
"nips_2022__gA20SUfd4a"
] |
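As a rough, schematic illustration of the interpolation this row describes (the notation $\sigma^2_{1:T}$ for cumulative stochastic variance and $\Sigma^2_{1:T}$ for cumulative adversarial variation is ours, and constants and logarithmic factors are omitted), the regret bounds have the shape

$$R_T = \tilde{\mathcal{O}}\!\left(\sqrt{\sigma^2_{1:T}} + \sqrt{\Sigma^2_{1:T}}\right),$$

so the variation term vanishes in the fully i.i.d. case, while in the fully adversarial case both quantities can grow linearly in $T$, recovering the usual $\mathcal{O}(\sqrt{T})$ minimax rate.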
nips_2022_KeIuNChob1H | Pseudo-Riemannian Graph Convolutional Networks | Graph Convolutional Networks (GCNs) are powerful frameworks for learning embeddings of graph-structured data. GCNs are traditionally studied through the lens of Euclidean geometry. Recent works find that non-Euclidean Riemannian manifolds provide specific inductive biases for embedding hierarchical or spherical data. However, they cannot align well with data of mixed graph topologies. We consider a larger class of pseudo-Riemannian manifolds that generalize hyperboloid and sphere. We develop new geodesic tools that allow for extending neural network operations into geodesically disconnected pseudo-Riemannian manifolds. As a consequence, we derive a pseudo-Riemannian GCN that models data in pseudo-Riemannian manifolds of constant nonzero curvature in the context of graph neural networks. Our method provides a geometric inductive bias that is sufficiently flexible to model mixed heterogeneous topologies like hierarchical graphs with cycles. We demonstrate the representational capabilities of this method by applying it to the tasks of graph reconstruction, node classification, and link prediction on a series of standard graphs with mixed topologies. Empirical results demonstrate that our method outperforms Riemannian counterparts when embedding graphs of complex topologies. | Accept | This paper is about using certain types of non-Euclidean spaces for representations. The background context for the paper is roughly the following: hyperbolic spaces have been recently popularized for embedding tree-like data. These are one of the three types of so-called constant curvature Riemannian manifolds. More recently, to handle more types of data, products of manifolds were proposed. This is more flexible, as such spaces can represent many more flavors of data. The authors of the present work further generalize these embedding spaces by using pseudo-Riemannian manifolds, which, unlike Riemannian manifolds, do not require positive semidefinite metrics.
The hope for these types of spaces is that they are even more flexible, capable of representing potentially any type of structure. The cost is that they have weird behavior, and that the operations developed for more specific spaces do not easily lift. The authors derive these various operations and build equivalents of GNNs in these spaces, performing comparisons that are favorable against some of the existing literature.
This paper's strengths are that it does heavy technical work to get these fairly complicated spaces to work. The idea has potential, and there's plenty of additional interesting questions that result from examining these spaces and models built over them.
Most of the reviewers were in agreement about the paper's contributions. One reviewer disagreed and asked some reasonable questions, but most of these are answerable, and I encourage the authors to carefully revise some of their writing to produce additional intuition that provides these answers. Ultimately, I believe the paper clears the bar.
I will note a few downsides that I think the authors should address in their next revision:
- The authors should more carefully compare to products of Riemannian manifolds. The current motivation isn't sufficient (the authors say "the simple combination of spaces still does not accommodate topologically heterogeneous graphs very well"). Are there canonical graphs where pseudo-Riemannian manifolds provide low-distortion embeddings and no product manifold does? Or some other type of evidence?
- Similarly, in the experiments, the comparison against k-GCN, which is probably the closest competitor since it effectively generalizes hGCN, should be attempted with more combinations of spaces (i.e., instead of just H^5 x S^5, you could do H^2 x S^8, H^4 x S^6, and so on). This would make the comparison more fair, as currently the authors provide many more q-GCN implementations. More generally, it is unclear why there isn't a hyperparameter search over the signature rather than fixing it a priori.
| train | [
"4HxKYgE5jPV",
"JXylmQOxBtO",
"7QhuzSZ5ML",
"yTLIrbXTJ6Q",
"0jnXcJTcW6",
"NiD2eW02kk-",
"SNiaykyro7g",
"iNV8mEarxqH",
"V-hcYSIes1f",
"Okk7GWHuvvA",
"lKBh4F5I1aK",
"dBOvgbuzEz"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have adequately addressed my concerns and questions.\nI wish to thank them for the refresher / clarification on the use of broken geodesics for approximation and for the insightful comment regarding Frechet means to another reviewer.\nMy rating remains unchanged after considering the discussion betwee... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
1,
4,
4
] | [
"NiD2eW02kk-",
"SNiaykyro7g",
"nips_2022_KeIuNChob1H",
"Okk7GWHuvvA",
"Okk7GWHuvvA",
"V-hcYSIes1f",
"dBOvgbuzEz",
"lKBh4F5I1aK",
"nips_2022_KeIuNChob1H",
"nips_2022_KeIuNChob1H",
"nips_2022_KeIuNChob1H",
"nips_2022_KeIuNChob1H"
] |
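As background for the geometry discussed in this row's meta-review, a standard construction (sign and signature conventions vary across papers, so this is illustrative rather than the paper's exact definition) equips $\mathbb{R}^{p+q}$ with the indefinite scalar product of signature $(p, q)$,

$$\langle x, y \rangle_{p,q} = \sum_{i=1}^{p} x_i y_i - \sum_{j=p+1}^{p+q} x_j y_j,$$

which is not positive definite when $p, q \geq 1$; the constant nonzero curvature pseudo-Riemannian manifolds are then level sets of the form $\{x : \langle x, x \rangle_{p,q} = 1/K\}$, with the sphere and the hyperboloid recovered at the extreme signatures.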
nips_2022_0Uejkm1GB1U | Conditional Meta-Learning of Linear Representations | Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks. The effectiveness of these methods is often limited when the nuances of the tasks’ distribution cannot be captured by a single representation. In this work we overcome this issue by inferring a conditioning function, mapping the tasks’ side information (such as the tasks’ training dataset itself) into a representation tailored to the task at hand. We study environments in which our conditional strategy outperforms standard meta-learning, such as those in which tasks can be organized in separate clusters according to the representation they share. We then propose a meta-algorithm capable of leveraging this advantage in practice. In the unconditional setting, our method yields a new estimator enjoying faster learning rates and requiring fewer hyper-parameters to tune than current state-of-the-art methods. Our results are supported by preliminary experiments. | Accept | This work presents a new meta-learning algorithm that infers a linear representation from task side information. It introduces modelling improvements over existing conditional meta-learning works with shared solution vectors, conducts rigorous theoretical analysis, and shows improved performance in preliminary experiments. Its unconditional meta-learning variant has a faster learning rate and requires less hyper-parameter tuning than SOTA methods.
The reviewers' concerns in the original reviews included the similarity to existing work [14], weak empirical evaluation, computational complexity, and the limitation to linear representations. The authors' feedback addressed, or should have addressed, most concerns, and multiple reviewers increased their rating. I would encourage the authors to incorporate their feedback into the revision.
"Zad6qfc_Uj",
"bK4gixvA4rF",
"6BzeTL6YWYM",
"sWOGD75vdTV",
"ZgQelTvX4YV",
"oDbabD2gPTp",
"WnMRE-JcVi6",
"IIS5uaKzA5G",
"OaRP01M6riY",
"mpfiqeyyTEX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your reply and time.",
" I appreciate the clarifications and thorough response, and now better understand the contrast with ref. [14]. I shall go over the paper once again in light of this new understanding, and reconsider my rating.",
" 1) __R.__ *The algorithm for updating the positive matrice... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
2
] | [
"bK4gixvA4rF",
"oDbabD2gPTp",
"mpfiqeyyTEX",
"OaRP01M6riY",
"IIS5uaKzA5G",
"WnMRE-JcVi6",
"nips_2022_0Uejkm1GB1U",
"nips_2022_0Uejkm1GB1U",
"nips_2022_0Uejkm1GB1U",
"nips_2022_0Uejkm1GB1U"
] |
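One schematic way to read the conditional approach in this row, with all symbols illustrative rather than the paper's: given side information $s$ for a task (e.g., its training set), a conditioning function $\tau$ outputs a task-tailored linear representation $B = \tau(s) \in \mathbb{R}^{d \times k}$, and the within-task predictor is linear in the represented inputs,

$$y \approx \langle w, B^\top x \rangle, \qquad B = \tau(s),$$

so tasks in different clusters can receive different representations, while unconditional meta-learning corresponds to the special case of a constant map $\tau(s) \equiv B$.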
nips_2022_AbLj0l8YbYt | Grounding Aleatoric Uncertainty for Unsupervised Environment Design | Adaptive curricula in reinforcement learning (RL) have proven effective for producing policies robust to discrepancies between the train and test environment. Recently, the Unsupervised Environment Design (UED) framework generalized RL curricula to generating sequences of entire environments, leading to new methods with robust minimax regret properties. Problematically, in partially-observable or stochastic settings, optimal policies may depend on the ground-truth distribution over aleatoric parameters of the environment in the intended deployment setting, while curriculum learning necessarily shifts the training distribution. We formalize this phenomenon as curriculum-induced covariate shift (CICS), and describe how its occurrence in aleatoric parameters can lead to suboptimal policies. Directly sampling these parameters from the ground-truth distribution avoids the issue, but thwarts curriculum learning. We propose SAMPLR, a minimax regret UED method that optimizes the ground-truth utility function, even when the underlying training data is biased due to CICS. We prove, and validate on challenging domains, that our approach preserves optimality under the ground-truth distribution, while promoting robustness across the full range of environment settings. | Accept | This work proposes a prioritized level replay and Bayesian inference based algorithm for better generation of curricula via unsupervised environment design. It tries to address the problem of covariate shift induced by the curriculum itself with respect to the test distribution. Overall this has been well-received by reviewers. There was rich discussion about whether the assumption of a resettable controller is overly restrictive. The authors have convincingly responded that not only is it necessary, but it should be taken advantage of wherever available, and that many popular RL environments provide reset capability to desired states. The gist of this discussion would do well to find an explicit place in the discussion section of the camera-ready version of the paper. | train | [
"bRqBkPjsg9d",
"83G-GMfs91K",
"mFfKsJ1dN8q",
"xCeiOtMaa7T",
"FTdY88zwe6",
"MLiWIeWdFBo",
"cNE7NbGtQ7",
"mnEyp32gzN",
"l9AIju6aLVz",
"AupzN_-65D",
"xecGBfH2t1_",
"1atOIxmvXPd",
"aRK8PuIN43Z",
"eJbOxsbKnznK",
"roouTy_qBmbO",
"nz1oleXw3ab",
"nGufL8XCd_M",
"jV5JI-zW2yq",
"BsA6L9odeb8... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for the clarification. Most of my concerns are addressed. Thus, I will increase my score to 6. Below are some comments. \n\n\nI am sorry for the misleading claim that resets remove the exploration problem. My point is that the resettable simulator can largely help overcome the exploration issue. In particu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"83G-GMfs91K",
"mFfKsJ1dN8q",
"xCeiOtMaa7T",
"aRK8PuIN43Z",
"MLiWIeWdFBo",
"AupzN_-65D",
"nips_2022_AbLj0l8YbYt",
"l9AIju6aLVz",
"jV5JI-zW2yq",
"xecGBfH2t1_",
"eJbOxsbKnznK",
"1zZPeGXDxmz",
"1zZPeGXDxmz",
"sh2RSvU0XI",
"sh2RSvU0XI",
"sh2RSvU0XI",
"BsA6L9odeb8",
"BsA6L9odeb8",
"ni... |
nips_2022_LKPtAaJcuLx | Alleviating ``Posterior Collapse'' in Deep Topic Models via Policy Gradient | Deep topic models have proven to be a promising way to extract hierarchical latent representations from documents represented as high-dimensional bag-of-words vectors.
However, the representation capability of existing deep topic models is still limited by the phenomenon of "posterior collapse", which has been widely criticized in deep generative models, resulting in the higher-level latent representations exhibiting similar or meaningless patterns.
To this end, in this paper, we first develop a novel deep-coupling generative process for existing deep topic models, which incorporates skip connections into the generation of documents, enforcing strong links between the document and its multi-layer latent representations.
After that, utilizing data augmentation techniques, we reformulate the deep-coupling generative process as a Markov decision process and develop a corresponding Policy Gradient (PG) based training algorithm, which can further alleviate the information reduction at higher layers.
Extensive experiments demonstrate that our developed methods can effectively alleviate "posterior collapse" in deep topic models, contributing to providing higher-quality latent document representations. | Accept | This paper proposes a new hierarchical neural topic model that alleviates the posterior collapse problem of previous models. The key idea is to incorporate skip connections into the generation to alleviate the issue that higher layer representations exhibit similar patterns. The sequence-like generation procedure is formulated as a Markov decision process and learned with a policy gradient method.
Overall, most reviewers feel positively about this paper. Even though the specific techniques used are not novel (policy gradient, skip connections), their use to alleviate posterior collapse in the higher layers of neural topic models seems novel. The experimental results are convincing, and the qualitative evaluation supports the claim of overcoming "posterior collapse." The proposed method is potentially useful and is applicable to different types of neural topic models. Although there were some concerns about the original version (e.g., insufficient explanation of posterior collapse and why the proposed method addresses it; evaluation limited to perplexity), the authors' response addressed most of the concerns and added details in the updated version. Therefore, I recommend acceptance.
"mFivSJnLXO",
"wpUo7FlPmkP",
"ZaCWCBCuYr",
"AVb7Ghqd-7",
"dVdNYYlhGiG",
"w5yvOvcMJ8",
"XztXqf10MGH",
"u5NxcceUSJA",
"OQwKu_HRw90e",
"cqvLapfk61o",
"g4ZvSkc2k2_",
"83--Ddt-U4",
"nU9yI8WNLyu",
"l6bADOFQHMA"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your carefully reviewing our paper again.\n\nIt is correct and we think you have captured the main thought of applying PG-based method for training our models. As explained in our response to W5, similar to the recent popular idea of applying RL for sequence generation, the core reason why PG-based tr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"wpUo7FlPmkP",
"u5NxcceUSJA",
"AVb7Ghqd-7",
"XztXqf10MGH",
"nU9yI8WNLyu",
"83--Ddt-U4",
"nU9yI8WNLyu",
"cqvLapfk61o",
"l6bADOFQHMA",
"nU9yI8WNLyu",
"83--Ddt-U4",
"nips_2022_LKPtAaJcuLx",
"nips_2022_LKPtAaJcuLx",
"nips_2022_LKPtAaJcuLx"
] |
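The policy-gradient training discussed throughout this row rests, in its generic form, on the standard score-function (REINFORCE) identity; the paper's specific reward and policy parameterization are not spelled out in this row, so the following is only the textbook estimator it builds on:

$$\nabla_\theta\, \mathbb{E}_{\tau \sim p_\theta}\big[R(\tau)\big] = \mathbb{E}_{\tau \sim p_\theta}\big[(R(\tau) - b)\, \nabla_\theta \log p_\theta(\tau)\big],$$

where $\tau$ is a trajectory of the Markov decision process, $R$ its reward, and $b$ a baseline used to reduce the variance of the gradient estimate.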
nips_2022_Mn_HoKBcWK | Fast Algorithms for Packing Proportional Fairness and its Dual | The proportional fair resource allocation problem is a major problem studied in flow control of networks, operations research, and economic theory, where it has found numerous applications. This problem, defined as the constrained maximization of $\sum_i \log x_i$, is known as the packing proportional fairness problem when the feasible set is defined by positive linear constraints and $x \in \mathbb{R}_{\geq 0}^n$. In this work, we present a distributed accelerated first-order method for this problem, which improves upon previous approaches. We also design an algorithm for the optimization of its dual problem. Both algorithms are width-independent. | Accept | The paper proposes fast algorithms for computing proportional fair allocations, which is one of the most widely used definitions of fairness. Although reviews were mixed, we believe that the importance of the problem makes this a worthy paper to include in NeurIPS. However, we encourage the authors to incorporate the comments of the reviewers to make it more interesting for the ML audience. | train | [
"Txx9-Ln5UKk",
"MVUTFvRPF4",
"K2N-l3Wi-6M",
"xcpVgV2sfX5",
"jVqWiyLufLh",
"G3VmQ_0jZeS",
"AIXepM2WBc",
"4xLQLwEu6AY",
"icarjHMR0Aw"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I am grateful to the authors for clarifying all their points. I will be happy to raise my score to 8 if the authors can show me evidence of their work having \"...excellent impact on at least one area of AI or high-to-excellent impact on multiple areas of AI...\" that I might have missed (this seems to be the onl... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
4
] | [
"xcpVgV2sfX5",
"icarjHMR0Aw",
"4xLQLwEu6AY",
"AIXepM2WBc",
"G3VmQ_0jZeS",
"nips_2022_Mn_HoKBcWK",
"nips_2022_Mn_HoKBcWK",
"nips_2022_Mn_HoKBcWK",
"nips_2022_Mn_HoKBcWK"
] |
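Written out, the packing proportional fairness problem from this row's abstract is the convex program

$$\max_{x \in \mathbb{R}^n_{\geq 0}} \; \sum_{i=1}^{n} \log x_i \quad \text{subject to} \quad Ax \leq b,$$

with entrywise nonnegative constraint matrix $A$ and positive right-hand side $b$ (often normalized to $b = \mathbf{1}$); informally, "width-independent" means the running-time guarantees do not degrade with the magnitude of the entries of $A$.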
nips_2022_8U5J6zK_MtV | LobsDICE: Offline Learning from Observation via Stationary Distribution Correction Estimation | We consider the problem of learning from observation (LfO), in which the agent aims to mimic the expert's behavior from state-only demonstrations provided by experts. We additionally assume that the agent cannot interact with the environment but has access to action-labeled transition data collected by agents of unknown quality. This offline setting for LfO is appealing in many real-world scenarios where the ground-truth expert actions are inaccessible and arbitrary environment interactions are costly or risky. In this paper, we present LobsDICE, an offline LfO algorithm that learns to imitate the expert policy via optimization in the space of stationary distributions. Our algorithm solves a single convex minimization problem, which minimizes the divergence between the two state-transition distributions induced by the expert and the agent policy. Through an extensive set of offline LfO tasks, we show that LobsDICE outperforms strong baseline methods.
| Accept | Reviewers all agree that the paper has solid contributions on both theory and experiments. | train | [
"ObYFZoUjcaZ",
"K_LZtNGvLtn",
"w7v4k09DfE",
"TbdNYILVWqQ",
"shigH7DNHKx",
"FG1UKLyyvKG",
"6pZB0of8VIp",
"nO5SmBxzVJc",
"nLerHrGMyPU",
"-Vr0RjgaH_e",
"-kyuFsdgEcE",
"Zob7bR-B5QP"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I really appreciate the authors for additional comparison and the extension to the $\\gamma=1$ case, and I will raise my score.\n\nFor the poor empirical performance of using the new sampling strategy, I think it is expected that the empirical performance may not be better than the original uniform sampling strat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"6pZB0of8VIp",
"FG1UKLyyvKG",
"shigH7DNHKx",
"nips_2022_8U5J6zK_MtV",
"Zob7bR-B5QP",
"-kyuFsdgEcE",
"-Vr0RjgaH_e",
"nLerHrGMyPU",
"nips_2022_8U5J6zK_MtV",
"nips_2022_8U5J6zK_MtV",
"nips_2022_8U5J6zK_MtV",
"nips_2022_8U5J6zK_MtV"
] |
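Schematically, and in illustrative notation, the single convex problem described in this row matches the DICE-style template

$$\min_{\pi} \; D\big(d^{\pi}(s, s') \,\|\, d^{E}(s, s')\big),$$

where $d^{\pi}$ and $d^{E}$ are the stationary state-transition distributions of the agent and the expert, $D$ is a divergence such as the KL, and in the offline setting the optimization is carried out through correction ratios $w(s, s') = d^{\pi}(s, s') / d^{O}(s, s')$ taken relative to the offline data distribution $d^{O}$.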
nips_2022_J3s8i8OfZZX | MoGDE: Boosting Mobile Monocular 3D Object Detection with Ground Depth Estimation | Monocular 3D object detection (Mono3D) in mobile settings (e.g., on a vehicle, a drone, or a robot) is an important yet challenging task. Due to the near-far disparity phenomenon of monocular vision and the ever-changing camera pose, it is hard to achieve high detection accuracy, especially for far objects. Inspired by the insight that the depth of an object can be well determined according to the depth of the ground where it stands, in this paper, we propose a novel Mono3D framework, called MoGDE, which constantly estimates the corresponding ground depth of an image and then utilizes the estimated ground depth information to guide Mono3D. To this end, we utilize a pose detection network to estimate the pose of the camera and then construct a feature map portraying pixel-level ground depth according to the 3D-to-2D perspective geometry. Moreover, to improve Mono3D with the estimated ground depth, we design an RGB-D feature fusion network based on the transformer structure, where the long-range self-attention mechanism is utilized to effectively identify ground-contacting points and pin the corresponding ground depth to the image feature map. We conduct extensive experiments on the real-world KITTI dataset. The results demonstrate that MoGDE can effectively improve the Mono3D accuracy and robustness for both near and far objects. MoGDE outperforms the state-of-the-art methods by a large margin and is ranked number one on the KITTI 3D benchmark. | Accept | The paper received positive-leaning reviews (2x borderline accept, 1x weak accept, 1x accept). The meta-reviewer agrees with the reviewers' assessment of the paper. | train | [
"4CPKRO-axTB",
"FZmHkoAB-DJ",
"_HDRwfhD6ui",
"vaWT7mqBEo",
"DtJNhp4WjdQ",
"8yS_347rvin",
"OeMLEuYsnXG",
"KRnZuSOAZBv"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ## W1\n***a.***\nWhen the camera is mounted, its position is fixed rather than changing at any time, so the 3-Dof translation of the camera is equivalent to none. The change of the camera's yaw angle does not change the vanishing point and the horizon, so it does not need to be considered for the ground depth est... | [
-1,
-1,
-1,
-1,
7,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"KRnZuSOAZBv",
"OeMLEuYsnXG",
"8yS_347rvin",
"DtJNhp4WjdQ",
"nips_2022_J3s8i8OfZZX",
"nips_2022_J3s8i8OfZZX",
"nips_2022_J3s8i8OfZZX",
"nips_2022_J3s8i8OfZZX"
] |
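The ground-depth-from-perspective idea in this row has a standard flat-ground special case, stated here only as an illustration (the paper's construction additionally handles changing camera pose): for a pinhole camera at height $h$ above a flat ground plane, with focal length $f$ and horizon row $v_h$ determined by the camera pitch, the ground point imaged at pixel row $v > v_h$ lies at depth

$$z(v) = \frac{f\, h}{v - v_h},$$

so a camera pose estimate suffices to assign a depth value to every ground pixel.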
nips_2022_Jb-d9fZX14 | Finding Second-Order Stationary Points in Nonconvex-Strongly-Concave Minimax Optimization | We study the smooth minimax optimization problem $\min_{\bf x}\max_{\bf y} f({\bf x},{\bf y})$, where $f$ is $\ell$-smooth, strongly-concave in ${\bf y}$ but possibly nonconvex in ${\bf x}$. Most existing works focus on finding the first-order stationary point of the function $f({\bf x},{\bf y})$ or of its primal function $P({\bf x})\triangleq \max_{\bf y} f({\bf x},{\bf y})$, but few of them focus on achieving a second-order stationary point, which is essential for nonconvex problems. In this paper, we propose a novel approach for minimax optimization, called Minimax Cubic Newton (MCN), which can find an ${\mathcal O}\left(\varepsilon,\kappa^{1.5}\sqrt{\rho\varepsilon}\right)$-second-order stationary point of $P({\bf x})$ using ${\mathcal O}\left(\kappa^{1.5}\sqrt{\rho}\varepsilon^{-1.5}\right)$ calls to second-order oracles and $\tilde{\mathcal O}\left(\kappa^{2}\sqrt{\rho}\varepsilon^{-1.5}\right)$ calls to first-order oracles, where $\kappa$ is the condition number and $\rho$ is the Lipschitz constant of the Hessian of $f({\bf x},{\bf y})$. In addition, we propose an inexact variant of MCN for high-dimensional problems that avoids calling the expensive second-order oracles. Instead, our method solves the cubic sub-problem inexactly via gradient descent and matrix Chebyshev expansion. This strategy still obtains the desired approximate second-order stationary point with high probability but only requires $\tilde{\mathcal O}\left(\kappa^{1.5}\ell\varepsilon^{-2}\right)$ Hessian-vector oracle calls and $\tilde{\mathcal O}\left(\kappa^{2}\sqrt{\rho}\varepsilon^{-1.5}\right)$ first-order oracle calls. To the best of our knowledge, this is the first work that considers the non-asymptotic convergence behavior of finding second-order stationary points for minimax problems without convex-concave assumptions. | Accept | This paper studies the minimax optimization problem with a smooth objective function, where the objective function $f(x,y)$ is assumed to be strongly concave in $y$ but in general nonconvex in $x$. In comparison to prior non-asymptotic results that mostly focused on finding first-order stationary points, this paper takes an important step further by showing how to find a second-order stationary point with non-asymptotic convergence guarantees. Although the algorithm design herein is somewhat straightforward (i.e., it is accomplished via a simple combination of accelerated gradient methods and cubic-regularized Newton methods), the analysis for inexact MCN contains sufficient novelty. As a result, I recommend acceptance of this paper. | train | [
"VLSrEs98H9",
"K-5gLiZDyhN",
"wr9jy3npEH",
"qyX_rB8tyNr",
"BvZlrjbGdeL",
"08q5gKuA8U",
"JoPmg0C3qE_",
"iX_bnhv7lZE",
"w7nRSxv0qkz",
"ItPwftqUm2n",
"KoZPf27JpZ",
"GBDxfSPlZW",
"dvudCAtSlYK",
"ZVBFgeuz-c",
"Dn60wHdjulW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your feedback.\n\n> there is a known relation between the spectrum of the Hessian and generalization performance\n\nThank you for your pointing out the relation between the generalization performance and the Hessian spectrum. Due to the time limitation, we do not have enough time to run all the doma... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"wr9jy3npEH",
"KoZPf27JpZ",
"w7nRSxv0qkz",
"ItPwftqUm2n",
"08q5gKuA8U",
"JoPmg0C3qE_",
"iX_bnhv7lZE",
"Dn60wHdjulW",
"ZVBFgeuz-c",
"dvudCAtSlYK",
"GBDxfSPlZW",
"nips_2022_Jb-d9fZX14",
"nips_2022_Jb-d9fZX14",
"nips_2022_Jb-d9fZX14",
"nips_2022_Jb-d9fZX14"
] |
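Reading the abstract's notation in the standard way (an assumption on our part, since the precise definition lives in the paper), an $\left(\varepsilon, \kappa^{1.5}\sqrt{\rho\varepsilon}\right)$-second-order stationary point of the primal function $P(\mathbf{x}) = \max_{\mathbf{y}} f(\mathbf{x}, \mathbf{y})$ is a point satisfying

$$\|\nabla P(\mathbf{x})\| \leq \varepsilon \qquad \text{and} \qquad \lambda_{\min}\big(\nabla^2 P(\mathbf{x})\big) \geq -\mathcal{O}\big(\kappa^{1.5}\sqrt{\rho\varepsilon}\big),$$

i.e., a small gradient together with an almost positive semidefinite Hessian, which rules out the strict saddle points that first-order stationarity alone cannot exclude.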