paper_id: string (lengths 19-21)
paper_title: string (lengths 8-170)
paper_abstract: string (lengths 8-5.01k)
paper_acceptance: string (18 classes)
meta_review: string (lengths 29-10k)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
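For working with these records programmatically, the sketch below loads the dataset with the Hugging Face `datasets` library and inspects one record. The dataset identifier and split name are hypothetical placeholders, since this preview does not state the published path; the column accesses follow the schema above.

```python
from datasets import load_dataset

# "example/neurips-peer-reviews" is a placeholder -- substitute the
# dataset's actual identifier; the "train" split name is also assumed.
ds = load_dataset("example/neurips-peer-reviews")["train"]

record = ds[0]
print(record["paper_id"])          # e.g. "nips_2022_XY5g3mkVge"
print(record["paper_acceptance"])  # e.g. "Accept"

# The six review_* columns are parallel lists: index i of each column
# describes the same post (an official review, an author reply, etc.).
for rid, writer, rating in zip(record["review_ids"],
                               record["review_writers"],
                               record["review_ratings"]):
    print(rid, writer, rating)

# The `label` column marks each row as part of the train/val/test subset.
train_rows = ds.filter(lambda r: r["label"] == "train")
```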
nips_2022_XY5g3mkVge
Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning
We study {\it model reusability evaluation} (MRE) for source pre-trained models: evaluating their transfer learning performance to new target tasks. In particular, we focus on the setting in which the target training datasets are small, making it difficult to produce reliable MRE scores from them. In this situation, we propose {\it synergistic learning} for building the task-model metric, which can be realized by collecting a set of pre-trained models and asking a group of data providers to participate. We provide theoretical guarantees to show that the learned task-model metric distances can serve as trustworthy MRE scores, and propose synergistic learning algorithms and models for general learning tasks. Experiments show that the MRE models learned by synergistic learning can generate significantly more reliable MRE scores than existing approaches for small-data transfer learning.
Accept
There is a consensus among the expert reviewers that this paper tackles an important problem, is technically sound, and makes a sufficient contribution for publication at NeurIPS 2022. Synergistic learning is still in its infancy and, as a result, requires visibility from the community. The proposed methods for calculating model reusability evaluation (MRE) metrics in this paper will serve as an important baseline for subsequent research in this direction. Personally, I also appreciate this research direction as it will be a crucial part of the democratization of AI. The main reservation concerns the clarity and presentation of the paper. The authors attempted to address this by providing clarifications during the discussion phase and by revising the manuscript accordingly, which I really appreciate. The reviewers also acknowledged and responded positively to the authors' responses, either maintaining their high scores or increasing them. Hence, I recommend that the authors take special care of this issue in the camera-ready version.
train
[ "5v3xDX3GK1z", "COQCe7-pOq", "FPXHAh5_UDA", "SHRHKFpVbcx", "F_ttAPkR1w", "MPihNzfaczf", "WR9JMUhbDg7", "R2trZmSJ4Q5", "c_g4Ht8dND", "IxX0wbW6xGZ", "IljEZtw48TH", "nI21DRVKmk-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the effort of the authors on revising the paper and providing a new version. \n\nI am keeping my score unchanged as I think that this is a technically solid paper with moderate to high impact, and I have no major concerns.", " The rebuttal partially addresses my concerns while the application of un...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2 ]
[ "WR9JMUhbDg7", "R2trZmSJ4Q5", "IljEZtw48TH", "IxX0wbW6xGZ", "MPihNzfaczf", "nI21DRVKmk-", "IljEZtw48TH", "IxX0wbW6xGZ", "nips_2022_XY5g3mkVge", "nips_2022_XY5g3mkVge", "nips_2022_XY5g3mkVge", "nips_2022_XY5g3mkVge" ]
nips_2022_M-seILmeISn
Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization
Successful applications of InfoNCE (Information Noise-Contrastive Estimation) and its variants have popularized the use of contrastive variational mutual information (MI) estimators in machine learning. While featuring superior stability, these estimators crucially depend on costly large-batch training, and they sacrifice bound tightness for variance reduction. To overcome these limitations, we revisit the mathematics of popular variational MI bounds through the lens of unnormalized statistical modeling and convex optimization. Our investigation yields a new unified theoretical framework encompassing popular variational MI bounds, and leads to a novel, simple, and powerful contrastive MI estimator we name FLO. Theoretically, we show that the FLO estimator is tight, and it converges under stochastic gradient descent. Empirically, the proposed FLO estimator overcomes the limitations of its predecessors and learns more efficiently. The utility of FLO is verified using extensive benchmarks, and we further inspire the community with novel applications in meta-learning. Our presentation underscores the foundational importance of variational MI estimation in data-efficient learning.
Accept
All reviewers agree that the paper proposes an interesting and novel bound on mutual information (MI) based on Fenchel-Legendre Optimization. Although some reviewers raised technical concerns in their initial reviews, those have largely been resolved by the authors' responses. Thus, although some points should still be modified from the current form, I expect the authors to revise the paper for the camera-ready version by reflecting the discussion. Based on this, I recommend acceptance of this paper.
train
[ "f-Hz5v3QHb5", "ax488Mrtx3", "OV8OI1Fzqpu", "EVse1moD85W", "03MJXVmYkY1", "-BJN8qLdWvk", "94QpI0h7Bd", "7wVIiCnXNP2", "KqY6fXW91ES", "jPi7LGTsuCn", "WZGH9lysa3s", "vdFGyx85gtE", "jeJtPp164DS", "vV0737mBlrd", "JtU1azLAIhy", "uSoq1qRnVoa", "J2XCnOznmF2" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Reviewer NZCL here...\n\nI have reviewed the discussion in this thread and it seems that the strongest criticism of reviewer gbJf is the lack of a detailed theoretical analysis. In my view, this paper is largely practical and provides a reasonable basis of theoretical justification. I agree with the author resp...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 2, 3, 3 ]
[ "J2XCnOznmF2", "03MJXVmYkY1", "EVse1moD85W", "WZGH9lysa3s", "-BJN8qLdWvk", "94QpI0h7Bd", "KqY6fXW91ES", "J2XCnOznmF2", "J2XCnOznmF2", "uSoq1qRnVoa", "JtU1azLAIhy", "vV0737mBlrd", "nips_2022_M-seILmeISn", "nips_2022_M-seILmeISn", "nips_2022_M-seILmeISn", "nips_2022_M-seILmeISn", "nips...
nips_2022_nyn2ewuF-g9
Trade-off between Payoff and Model Rewards in Shapley-Fair Collaborative Machine Learning
This paper investigates the problem of fairly trading off between payoff and model rewards in collaborative machine learning (ML) where parties aggregate their datasets together to obtain improved ML models over that of each party. Supposing parties can afford the optimal model trained on the aggregated dataset, we propose an allocation scheme that distributes the payoff fairly. Notably, the same scheme can be derived from two different approaches based on (a) desirable properties of the parties' payoffs or (b) that of the underlying payoff flows from one party to another. While the former is conceptually simpler, the latter can be used to handle the practical constraint on the budgets of parties. In particular, we propose desirable properties for achieving a fair adjustment of the payoff flows that can trade off between the model reward's performance and the payoff reward. We empirically demonstrate that our proposed scheme is a sensible solution in several scenarios of collaborative ML with different budget constraints.
Accept
This paper lies on the borderline of acceptance, even with the help of an additional emergency expert reviewer (Reviewer GnPz) after the discussion phase. On the positive side, the reviewers found the collaborative machine learning payoff problem studied in the paper interesting and relevant, the fairness axioms reasonable, and the uniqueness proof of allocation schemes meaningful. On the negative side, the reviewers found the contributions of this paper not groundbreaking. They still have the following concerns:
- The work is not placed well enough within existing work (comparisons with existing works on fairness and the Shapley value raised by the reviewers, and the references Reviewer Bdvn gave).
- The empirical comparison with prior works is limited (Reviewer GnPz).
- The time complexity of computing the (conditional) Shapley value is exponential, so it is not a metric of practical use. Moreover, variants of the Shapley value have already been studied in the literature; see (Xu et al., Gradient driven rewards to guarantee fairness in collaborative machine learning). (Reviewer iwL1)
train
[ "ZaRSOYc_9A", "3t-eSo_66d", "vuGhM_HpqB2", "xCdRebBivwW", "1cVeLQ8Qjh", "AgXEFtmxV5f", "nht85qSFQBu", "f42cmuJqwLS", "Nyjy1pK5R_", "irxUbBePcPo", "oRH-mwPO85T", "P2GQkUTH3Iq", "6JZM2QIEFksc", "_wy6zXlW-5m", "bzlXwT_mm50", "7ajrktZd66", "T9LFx1MNJ3Z", "7ATU6ta97Qx", "SxvVzE-89Ne" ...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The paper studies the problem of fair payoff allocation of compensation/cost among a group of collaborating parties which pool their data to jointly train a machine learning model. Specifically, the authors considered several fairness properties (including a linearity property which helps to uniquely determines a...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 2 ]
[ "nips_2022_nyn2ewuF-g9", "nips_2022_nyn2ewuF-g9", "xCdRebBivwW", "1cVeLQ8Qjh", "oRH-mwPO85T", "nht85qSFQBu", "oRH-mwPO85T", "Nyjy1pK5R_", "SxvVzE-89Ne", "7ATU6ta97Qx", "P2GQkUTH3Iq", "T9LFx1MNJ3Z", "_wy6zXlW-5m", "bzlXwT_mm50", "7ajrktZd66", "nips_2022_nyn2ewuF-g9", "nips_2022_nyn2ew...
nips_2022_W8nyVJruVg
Minimax-Optimal Multi-Agent RL in Markov Games With a Generative Model
This paper studies multi-agent reinforcement learning in Markov games, with the goal of learning Nash equilibria or coarse correlated equilibria (CCE) sample-optimally. All prior results suffer from at least one of the two obstacles: the curse of multiple agents and the barrier of long horizon, regardless of the sampling protocol in use. We take a step towards settling this problem, assuming access to a flexible sampling mechanism: the generative model. Focusing on non-stationary finite-horizon Markov games, we develop a fast learning algorithm called Q-FTRL and an adaptive sampling scheme that leverage the optimism principle in online adversarial learning (particularly the Follow-the-Regularized-Leader (FTRL) method). Our algorithm learns an $\varepsilon$-approximate CCE in a general-sum Markov game using $$ \widetilde{O}\bigg( \frac{H^4 S \sum_{i=1}^m A_i}{\varepsilon^2} \bigg) $$ samples, where $m$ is the number of players, $S$ indicates the number of states, $H$ is the horizon, and $A_i$ denotes the number of actions for the $i$-th player. This is minimax-optimal (up to log factor) when $m$ is fixed. When applied to two-player zero-sum Markov games, our algorithm provably finds an $\varepsilon$-approximate Nash equilibrium with a minimal number of samples. Along the way, we derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities, which might be of independent interest.
Accept
This paper received uniformly positive reviews on a topic of relevance to the theory-ML community where understanding how to achieve tight sample complexity results in Markov Games has been of significant interest in recent years.
train
[ "Ax7WxMnR0oI", "7u1oyswp2Ut", "oScsAXvJ7t0", "6Am__XgKYq8s", "IiKZjdOS78v", "WUYY-QZgR3K", "zX2wog4Q9MX", "0CqTjIZa62m" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I sincerely thank the authors for the responses and much appreciate the modifications during the revisions. It is a great pleasure to participate in the discussions and see that my comments contribute to this work. I would keep and defend my positive opinion of this work.", " Thanks for your detailed response, ...
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "IiKZjdOS78v", "oScsAXvJ7t0", "WUYY-QZgR3K", "0CqTjIZa62m", "zX2wog4Q9MX", "nips_2022_W8nyVJruVg", "nips_2022_W8nyVJruVg", "nips_2022_W8nyVJruVg" ]
nips_2022_QYQH9w9Z8bO
Effects of Data Geometry in Early Deep Learning
Deep neural networks can approximate functions on different types of data, from images to graphs, with varied underlying structure. This underlying structure can be viewed as the geometry of the data manifold. By extending recent advances in the theoretical understanding of neural networks, we study how a randomly initialized neural network with piecewise linear activation splits the data manifold into regions where the neural network behaves as a linear function. We derive bounds on the density of the boundaries of linear regions and the distance to these boundaries on the data manifold. This leads to insights into the expressivity of randomly initialized deep neural networks on non-Euclidean data sets. We empirically corroborate our theoretical results using a toy supervised learning problem. Our experiments demonstrate that the number of linear regions varies across manifolds and that the results hold across changing neural network architectures. We further demonstrate how the complexity of linear regions differs on the low-dimensional manifold of images as compared to Euclidean space, using the MetFaces dataset.
Accept
The paper studies the number of linear regions cut out by a randomly initialized deep network, for data with low-dimensional structure (manifold structured data). The main results pertain to the density of linear regions and the average distance to the boundary of a linear region: these results take the same form as in the Euclidean case (with distance inversely proportional to the number of neurons), but depend on geometric properties of the data manifold — in particular, its dimension and curvature. Reviewers generally appreciated the relevance of the paper’s setting: data arising in applications often have low-dimensional structure, and understanding how deep networks interact with the structure of data is an important research direction. At a technical level, the paper builds on techniques of [Hanin and Ronik 2019], but extends these results to manifold structured data. Questions raised by the reviewers include the role of curvature and input dimension in the results and the interpretation of real data experiments. After interacting with the authors, the reviewers considered their main concerns about the paper to be well-addressed. The AC concurs, and recommends acceptance.
train
[ "d2Z5lO1x0T", "IUMia2hj_F4", "WVjCN28UxsD", "FC-LIg0unVE", "qFr5_s7_sQl", "jLHtlxR0vsi", "4Y3fbwfJRRo", "ayltKffQh5ZI", "5_IPUbmADJq", "NhN2no_GLDp", "7PuGjqH4Ffb", "xUmTv6_L4d", "AHBZih7n-vm", "G8i4yWUh6b4", "8bMpYIWsUck", "nYWwmjYui1S" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you for your continued interactions. We respond to your comments here.\n\nThank you for going over our additions to the Appendix in explaining how a simple polynomial's optima changes and eventually effects the constant $C_{M, \\kappa}$. We have tried our best to provide intuition for various constants w...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 2, 3 ]
[ "IUMia2hj_F4", "WVjCN28UxsD", "FC-LIg0unVE", "7PuGjqH4Ffb", "jLHtlxR0vsi", "5_IPUbmADJq", "8bMpYIWsUck", "G8i4yWUh6b4", "NhN2no_GLDp", "nYWwmjYui1S", "xUmTv6_L4d", "AHBZih7n-vm", "nips_2022_QYQH9w9Z8bO", "nips_2022_QYQH9w9Z8bO", "nips_2022_QYQH9w9Z8bO", "nips_2022_QYQH9w9Z8bO" ]
nips_2022_gt-l9Hu2ndd
Model Preserving Compression for Neural Networks
After training complex deep learning models, a common task is to compress the model to reduce compute and storage demands. When compressing, it is desirable to preserve the original model's per-example decisions (e.g., to go beyond top-1 accuracy or preserve robustness), maintain the network's structure, automatically determine per-layer compression levels, and eliminate the need for fine tuning. No existing compression methods simultaneously satisfy these criteria---we introduce a principled approach that does by leveraging interpolative decompositions. Our approach simultaneously selects and eliminates channels (analogously, neurons), then constructs an interpolation matrix that propagates a correction into the next layer, preserving the network's structure. Consequently, our method achieves good performance even without fine tuning and admits theoretical analysis. Our theoretical generalization bound for a one layer network lends itself naturally to a heuristic that allows our method to automatically choose per-layer sizes for deep networks. We demonstrate the efficacy of our approach with strong empirical performance on a variety of tasks, models, and datasets---from simple one-hidden-layer networks to deep networks on ImageNet.
Accept
The submission proposes an interpolative decomposition scheme for neural network compression that reduces FLOPs and the number of parameters at a small cost in accuracy/faithfulness to the original model. The authors provide theoretical evidence, albeit in the two-layer case, and empirical evidence on a set of architectures and datasets for the soundness of their claims. While the most negative review (12sE) contained several inaccuracies that misrepresented the submission, discussions with reviewers have nonetheless helped clarify several points in the paper, to the point that the majority of reviewers were satisfied with the submission. Therefore, I recommend this paper for acceptance.
test
[ "j3RbX-gaWiv", "fM5v6Af6wd", "Dm_3khxsYVV", "RI-X7eOm2W", "5jva52HD-4", "S0Ws_z80fV", "eMDGLTTHwLA", "chM8GpQxxZ3", "hhH6JwABMrM", "67nnTAG7cU4", "pNPCaS6Qh5s" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " [6] is a dated paper that summarizes the experiments before 2020. Please refer to papers published at NeurIPS. Only results for VGG-16 are totally NOT fair to other papers [R5-R18]. Even for the VGG, your results are still hugely inferior to SOTAs. Why not run it with the same settings of SOTAs, showing your meth...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 2 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 5 ]
[ "fM5v6Af6wd", "RI-X7eOm2W", "eMDGLTTHwLA", "5jva52HD-4", "pNPCaS6Qh5s", "67nnTAG7cU4", "hhH6JwABMrM", "nips_2022_gt-l9Hu2ndd", "nips_2022_gt-l9Hu2ndd", "nips_2022_gt-l9Hu2ndd", "nips_2022_gt-l9Hu2ndd" ]
nips_2022_f-FQE1fjPK
NSNet: A General Neural Probabilistic Framework for Satisfiability Problems
We present the Neural Satisfiability Network (NSNet), a general neural framework that models satisfiability problems as probabilistic inference while exhibiting proper explainability. Inspired by Belief Propagation (BP), NSNet uses a novel graph neural network (GNN) to parameterize BP in the latent space, where its hidden representations maintain the same probabilistic interpretation as BP. NSNet can be flexibly configured to solve both SAT and #SAT problems by applying different learning objectives. For SAT, instead of directly predicting a satisfying assignment, NSNet performs marginal inference among all satisfying solutions, which we empirically find is more feasible for neural networks to learn. With the estimated marginals, a satisfying assignment can be efficiently generated by rounding and executing a stochastic local search. For #SAT, NSNet performs approximate model counting by learning the Bethe approximation of the partition function. Our evaluations show that NSNet achieves competitive results in terms of inference accuracy and time efficiency on multiple SAT and #SAT datasets.
Accept
This is a controversial paper. My thinking, however, is to largely agree with bWLw that the performance warrants publication even if that performance is largely due to details rather than major architectural innovation. I think that the overlap with Savari and Bortolussi is not a problem given the timing of the publication. The much smaller number of parameters compared to NeuroSAT and BPNN (see the response to Tesa) also seems significant.
train
[ "kYZtNevdTXo", "EkIq5bwxP-", "8UjcJejoU8ex", "GYgfSAogOJQ", "UCeXOmG1AcO", "MxCxEr7TQv0", "JN-mzHwQMGK", "Il6ENQDFHSp", "4V_Cf6825Ej", "OHD62_FdCiO", "HSajj3Xwff-", "TjSC86Nxmkz", "h9Z1998LRi", "UNFTmXRHZYd", "mS1reZaHCui" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank to the authors for answering my questions. The scalability of NSNet and the lack of approximation gaurantees of NSNet are the two main limitations of this paper, for which I hope the authors would try to improve/discuss in the final paper version. ", " Thank you for addressing my concerns. I'm revising my...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3 ]
[ "JN-mzHwQMGK", "8UjcJejoU8ex", "GYgfSAogOJQ", "Il6ENQDFHSp", "nips_2022_f-FQE1fjPK", "JN-mzHwQMGK", "mS1reZaHCui", "UNFTmXRHZYd", "h9Z1998LRi", "HSajj3Xwff-", "TjSC86Nxmkz", "nips_2022_f-FQE1fjPK", "nips_2022_f-FQE1fjPK", "nips_2022_f-FQE1fjPK", "nips_2022_f-FQE1fjPK" ]
nips_2022__QzJJGH_KE
RORL: Robust Offline Reinforcement Learning via Conservative Smoothing
Offline reinforcement learning (RL) provides a promising direction for exploiting massive amounts of offline data for complex decision-making tasks. Due to the distribution shift issue, current offline RL algorithms are generally designed to be conservative in value estimation and action selection. However, such conservatism can impair the robustness of learned policies when encountering observation deviations under realistic conditions, such as sensor errors and adversarial attacks. To trade off robustness and conservatism, we propose Robust Offline Reinforcement Learning (RORL) with a novel conservative smoothing technique. In RORL, we explicitly introduce regularization on the policy and the value function for states near the dataset, as well as additional conservative value estimation on these states. Theoretically, we show RORL enjoys a tighter suboptimality bound than recent theoretical results in linear MDPs. We demonstrate that RORL can achieve state-of-the-art performance on the general offline RL benchmark and is considerably robust to adversarial observation perturbations.
Accept
All reviewers agree that the author's response has addressed their primary concerns. Reviewer frMM had two reservations that resulted in a borderline rating 1) concerns about how the adversarial samples were generated and 2) a request for evaluation on AntMaze. The author's followup response and further experiments address 1 and partially 2. It would be great to see RORL results on AntMaze in the final version. Overall, the performance of RORL is competitive with state-of-the-art methods on Mujoco and Adroit tasks with fewer ensemble elements needed. The main benefit is on improved performance against adversarial attack, where RORL significantly improves over existing methods. I think the paper makes a nice contribution that the community will find valuable. I encourage the authors to think carefully about how to integrate the additional experiments into the paper to resolve the questions raised by reviewers.
train
[ "q259hKMvscj", "DjzvPlmtTC", "DRkU1bSqaTs", "xZ8V4UJ-Mb", "7nB1gmLMvXD", "DVttaoLGC4t", "r8W2d94TDGp", "nnwrIOn0tS", "zCdTON51ca6", "zDZl9XNeD-E", "yyexiRiZT_", "wiu8kJ0a662", "OSTegWBfEyG", "811gM1LszHB8", "YbN-bWAPbYxc", "oPCljfqpHVk", "jkjnKreT9D5", "V-iSmDONsz0g", "lxsRBVpKjW...
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author...
[ " We now includes additional comparison between the zeroth-order and first-order perturbation generation in Appendix C.11 and Figure 17.\n**We hope our response could address your concerns. Thank you again for your time and efforts!**\n\nFrom Figure 17, we can conclude that the two types of optimization for perturb...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "xZ8V4UJ-Mb", "xZ8V4UJ-Mb", "7nB1gmLMvXD", "zDZl9XNeD-E", "yyexiRiZT_", "zCdTON51ca6", "nnwrIOn0tS", "OSTegWBfEyG", "5P7Ky626mvWs", "jaAWr8uMWY", "oPCljfqpHVk", "jkjnKreT9D5", "cROTVToXtgw", "V-iSmDONsz0g", "I5y2SE5DXDL", "Y3RKy0EuzBc", "5n2xUZ87Kk2", "CBkz6jsf8vQ", "nips_2022__Q...
nips_2022_RBhIkQRpzFK
GraphQNTK: Quantum Neural Tangent Kernel for Graph Data
Graph Neural Networks (GNNs) and Graph Kernels (GKs) are two fundamental tools used to analyze graph-structured data. Efforts have recently been made to develop a composite graph learning architecture combining the expressive power of GNNs and the transparent trainability of GKs. However, the learning efficiency of these models must be carefully considered given their huge computational overhead. Moreover, their convolution schemes are often simplistic and incur a severe loss of graph-structure information. In this paper, we design a novel quantum graph learning model that characterizes the structural information while using quantum parallelism to improve computing efficiency. Specifically, a quantum algorithm is proposed to approximately estimate the neural tangent kernel of the underlying graph neural network, where a multi-head quantum attention mechanism is introduced to properly incorporate semantic similarity information of nodes into the model. We empirically show that our method achieves competitive performance on several graph classification benchmarks, and theoretical analysis is provided to demonstrate the superiority of our quantum algorithm. Source code is available at \url{https://github.com/abel1231/graphQNTK}.
Accept
The authors propose a Quantum Neural Tangent Kernels (QNTK) approach for GNNs. The paper shows promising results of using quantum computing to speed up computations for graph data. This is a novel idea, and the paper presents solid theoretical and empirical evaluations. Thus the AC recommends acceptance.
train
[ "oYNnw8NYdBc", "aeFrjKK1X-z", "HSjgNdO8Uh-", "AIVe2_SIKu", "IABokYTDgd", "1LN3d0qcMY", "-Q8f_iiGyhB", "1zIrzMpZ2KK", "AdYoZyY7D_", "xG98p5cSbz", "oFQT2lZzuaE", "DyC9gjFC_4D", "sYKeI3zzzVN", "QH-_cSN6iVmK", "Wkx8GNTnC0L", "0jnC1scV_6", "JAe4GJc6zea", "oWpO5iBJstV", "Q6InbPfK2XW", ...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official...
[ " Dear Reviewer DbrA,\n\nAs the discussion period is close to the end and we provide further clarifications for your remaining confusion, we wanted to reach out to see if our rebuttal response has addressed your concerns.\n\nWe are more than happy to discuss further if you have any further concerns and issues, plea...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "IABokYTDgd", "v8yaYFh1V2F", "IABokYTDgd", "IABokYTDgd", "0jnC1scV_6", "2wa6yyqsZ-", "AdYoZyY7D_", "v8yaYFh1V2F", "SpR14dzzMt", "2wa6yyqsZ-", "nips_2022_RBhIkQRpzFK", "SpR14dzzMt", "SpR14dzzMt", "2wa6yyqsZ-", "2wa6yyqsZ-", "2wa6yyqsZ-", "v8yaYFh1V2F", "v8yaYFh1V2F", "v8yaYFh1V2F"...
nips_2022_AYII8AkvD1e
Diffusion Curvature for Estimating Local Curvature in High Dimensional Data
We introduce a new intrinsic measure of local curvature on point-cloud data called diffusion curvature. Our measure uses the framework of diffusion maps, including the data diffusion operator, to structure point-cloud data and define local curvature based on the laziness of a random walk starting at a point or region of the data. We show that this laziness directly relates to volume comparison results from Riemannian geometry. We then extend this scalar curvature notion to an entire quadratic form using neural network estimations based on the diffusion map of point-cloud data. We show applications of both estimates on toy data, on single-cell data, and on estimating local Hessian matrices of neural network loss landscapes.
Accept
This paper uses diffusion maps to measure curvature from point cloud data and includes some theoretical analysis as well as preliminary experiments demonstrating the value of the curvature measure. The paper benefited from detailed discussion among a number of experts that gave it thorough consideration. While the reviewers did not totally converge to a unanimous "accept" decision, the AC views their detailed/thoughtful discussion as a *positive* sign that the work will spark discussion and interest at the NeurIPS conference. Other than limited experimental evaluation, it seems the main negative aspects of the work (mostly raised by reviewer 4SQF) are debatable in terms of whether they truly invalidate the research paper. Overall, the AC recommends accepting this paper, especially since OpenReview will show the thoughtful discussion between the reviewers and authors.
val
[ "azahSrxDvn", "Et8Xut4NgpD", "D2z6sCnnQt_Q", "JaJZnezZTIr", "AaWHKK9rtT", "RInTSpeCtW3", "3Cgd0B0DPK2", "dJ9Fzm6R93q", "q3HtYb4UmIl", "fGpQpE9Hxmf" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the additional remarks and insightful critique.\n\nDiscrepancy in the description of the Hessian estimation method in Sections 3.2 and 4.3: Thanks for bringing this to our attention. The diffusion embedding is only used for the data points. We have corrected Section 3.2 to reflect this.\...
[ -1, -1, -1, -1, -1, -1, 5, 3, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "Et8Xut4NgpD", "AaWHKK9rtT", "fGpQpE9Hxmf", "q3HtYb4UmIl", "dJ9Fzm6R93q", "3Cgd0B0DPK2", "nips_2022_AYII8AkvD1e", "nips_2022_AYII8AkvD1e", "nips_2022_AYII8AkvD1e", "nips_2022_AYII8AkvD1e" ]
nips_2022_JkEz1fqN3hX
Rethinking Value Function Learning for Generalization in Reinforcement Learning
We focus on the problem of training RL agents on multiple training environments to improve observational generalization performance. In prior methods, policy and value networks are separately optimized using a disjoint network architecture to avoid interference and obtain a more accurate value function. We identify that the value network in the multiple-environment setting is more challenging to optimize and more prone to overfitting the training data than in the conventional single-environment setting. In addition, we find that appropriate regularization of the value network is required for better training and test performance. To this end, we propose Delayed-Critic Policy Gradient (DCPG), which implicitly penalizes the value estimates by optimizing the value network less frequently with more training data than the policy network; this can be implemented using a shared network architecture. Furthermore, we introduce a simple self-supervised task that learns the forward and inverse dynamics of environments using a single discriminator, which can be jointly optimized with the value network. Our proposed algorithms significantly improve observational generalization performance and sample efficiency on the Procgen Benchmark.
Accept
All reviewers were in favor of acceptance, and after reading the paper myself I am in agreement. The empirical results were good and the experimental work quite comprehensive. The method is well explained and the writing is clear and easy to read. The only real detraction I saw, distinct from things already mentioned by reviewers, was that there are some statements that feel overly strong given the presented results (e.g. L161, L179). I was curious about sensitivity to hyper-parameters, specifically the settings around how long each phase lasts, etc, along the lines of what was done in the PPG paper. That said, I would generally down-play the importance of this as the proposed method is using the same hyper-parameters as for PPG and appears to have undergone minimal hyper-parameter tuning. In all I think this makes a clear contribution to the field and should be accepted.
val
[ "7DtZJSdMFyn", "5VAa-tjPF6v", "4tn201rFghR", "r7eAowGg2w1", "NpofcsCzpD", "z0tkCCoLWc1", "2HfcvifX4W8", "8DOPmDa3EUC", "lmFoYsJSl2", "zaTKm2bd-o", "xOG-T1mslf" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional experiments with more data. In the plots for min-max normalized scores, it looks like DCPG doesn't outperform PPG on IQM and Mean scores. This is an important point that should be highlighted in the main text. I recommend including the plots in the appendix as well. \n\nAs for the differ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "5VAa-tjPF6v", "r7eAowGg2w1", "2HfcvifX4W8", "z0tkCCoLWc1", "nips_2022_JkEz1fqN3hX", "lmFoYsJSl2", "zaTKm2bd-o", "xOG-T1mslf", "nips_2022_JkEz1fqN3hX", "nips_2022_JkEz1fqN3hX", "nips_2022_JkEz1fqN3hX" ]
nips_2022_XZhipvOUBB
LISA: Learning Interpretable Skill Abstractions from Language
Learning policies that effectively utilize language instructions in complex, multi-task environments is an important problem in imitation learning. While it is possible to condition on the entire language instruction directly, such an approach could suffer from generalization issues. To encode complex instructions into skills that can generalize to unseen instructions, we propose Learning Interpretable Skill Abstractions (LISA), a hierarchical imitation learning framework that can learn diverse, interpretable skills from language-conditioned demonstrations. LISA uses vector quantization to learn discrete skill codes that are highly correlated with language instructions and the behavior of the learned policy. In navigation and robotic manipulation environments, LISA is able to outperform a strong non-hierarchical baseline in the low data regime and compose learned skills to solve tasks containing unseen long-range instructions. Our method demonstrates a more natural way to condition on language in sequential decision-making problems and achieve interpretable and controllable behavior with the learned skills.
Accept
This paper provides a hierarchical approach to imitation learning guided by language, which the reviewers unanimously found empirically compelling, as well as of potential interest to the broader NeurIPS community. I am happy to go with the consensus and recommend acceptance.
train
[ "cOkUgSirjQD", "BR48WaQGPOT", "JdgVVFsTJhI", "aYzG6-pVwRR", "LZLP_K-6_mN", "s_6o4-O4ZJS", "jQ8BRJcSxqp", "PgBgwaYKAsd", "ydem3Lj2H0", "HU1Y-DJ6q9J", "R2K8z805iCp", "Zkzxr2itpnt", "4BRvIbQ-sSW", "cwAYcZfUx0X", "qwGW-RBgyhq", "Py4pWQv3Lm1" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for addressing my review comments. The rebuttal addressed some of my concerns about the experimental setup and the interpretation of results, and I am raising my score from 5 to 6. But at the same time, I encourage the authors to carefully address the concerns about hyperparameters in their re...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "jQ8BRJcSxqp", "LZLP_K-6_mN", "R2K8z805iCp", "HU1Y-DJ6q9J", "Zkzxr2itpnt", "PgBgwaYKAsd", "PgBgwaYKAsd", "qwGW-RBgyhq", "HU1Y-DJ6q9J", "Py4pWQv3Lm1", "cwAYcZfUx0X", "4BRvIbQ-sSW", "nips_2022_XZhipvOUBB", "nips_2022_XZhipvOUBB", "nips_2022_XZhipvOUBB", "nips_2022_XZhipvOUBB" ]
nips_2022_vNrSXIFJ9wz
VF-PS: How to Select Important Participants in Vertical Federated Learning, Efficiently and Securely?
Vertical Federated Learning (VFL), which trains federated models over vertically partitioned data, has emerged as an important learning paradigm. However, existing VFL methods face two challenges: (1) scalability when the number of participants grows to even a modest scale and (2) diminishing returns w.r.t. the number of participants: not all participants are equally important, and many will not introduce quality improvements in a large consortium. Inspired by these two challenges, in this paper we ask: How can we select l out of m participants, where l ≪ m, that are most important? We call this problem Vertically Federated Participant Selection, and model it with a principled mutual information-based view. Our first technical contribution is VF-MINE—a Vertically Federated Mutual INformation Estimator—that uses one of the most celebrated algorithms in database theory—Fagin’s algorithm—as a building block. Our second contribution is to further optimize VF-MINE to enable VF-PS, a group testing-based participant selection framework. We empirically show that vertically federated participant selection can be orders of magnitude faster than training a full-fledged VFL model, while being able to identify the most important subset of participants that often leads to a VFL model of similar quality.
Accept
This paper studies the problem of vertical federated learning, where each client (e.g., hospital) has access to a disjoint subset of features. The reviewers agree that the paper provides an interesting technique for client selection. The authors have adequately addressed the reviewers' questions. The authors should consider dedicating part of their revision to addressing concerns raised by the ethics reviewers. In particular, there are concerns of unfairness due to the selection of a small subset of participants. Even though the paper does not readily provide a solution to this problem, the authors should acknowledge this potential issue and suggest directions for future work. If there is not enough space in the paper, the authors could use part of the appendix for a detailed discussion.
train
[ "ViBLqxaaTF8", "g2BJRFbFoJk", "v5Pc9kRjpFg", "2nQ0E5sir2h", "WReLYZshNw2", "oo6Heg_pOaS", "IDjkOk-9nSw", "9BAXgNNFuid", "-xRhDXi660s" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " In line with reviewer 63Y2 (and now, after the rebuttal, also the authors), I agree that the method could create unfairness as only a subset of participants are selected. The authors addressed these concerns to some degree in the appendix but I would like to see two things:\n1) NeurIPS offered an extra page this ...
[ -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "nips_2022_vNrSXIFJ9wz", "nips_2022_vNrSXIFJ9wz", "nips_2022_vNrSXIFJ9wz", "-xRhDXi660s", "9BAXgNNFuid", "IDjkOk-9nSw", "nips_2022_vNrSXIFJ9wz", "nips_2022_vNrSXIFJ9wz", "nips_2022_vNrSXIFJ9wz" ]
nips_2022_xjXN3wEvCGG
Surprise-Guided Search for Learning Task Specifications From Demonstrations
This paper considers the problem of learning temporal task specifications, e.g. automata and temporal logic, from expert demonstrations. Task specifications are a class of sparse, memory-augmented rewards with explicit support for temporal and Boolean composition. Three features make learning temporal task specifications difficult: (1) the (countably) infinite number of tasks under consideration, (2) a priori ignorance of what memory is needed to encode the task, and (3) the discrete solution space, typically addressed by (brute force) enumeration. To overcome these hurdles, we propose Demonstration Informed Specification Search (DISS): a family of algorithms requiring only black box access to (i) a maximum entropy planner and (ii) a task sampler from labeled examples. DISS works by alternating between (i) conjecturing labeled examples to make the provided demonstrations less surprising and (ii) sampling tasks consistent with the conjectured labeled examples. We provide a concrete implementation of DISS in the context of tasks described by Deterministic Finite Automata, and show that DISS is able to efficiently identify tasks from only one or two expert demonstrations.
Reject
The paper presents a new approach for synthesizing automata-based specifications from sample behaviors. In some ways, this is very related to the problem of generating DFAs from examples, but there are important differences related to this planning context that make it more constrained. I think this is an interesting problem and there is a solid contribution in this work that the evaluation clearly demonstrates. There is significant scope for improvement in the presentation as expressed in many of the comments in the reviews that I think are fixable in a camera ready version of the paper. I think there is a bigger question of fit with the NeurIPS community that is reflected in the low scores that the paper received. The paper reads much more like a CAV paper than a NeurIPS paper, and that might limit its impact in this community.
train
[ "68-FOH9dGP0", "jOjRZF9479z", "a0TSdQjq7XV", "HGOppxsTWfK", "6JP6_H9NcY", "VNwxW98Tiex", "mABT7jl2bgOR", "V8qUfGpJVHv5", "njt6xtYNAkK", "lEeLZ5teDGk", "ckS91lRSPa2t", "yx3XULY8K1y", "GbnJXniL2BU", "9YR_6F8XvVe", "hHg_-bG4bA", "oqWjpo-hZL", "OnHk0dKNPZBi", "bu8tJE5Lh2cd", "LD_kRbU...
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", ...
[ " humans readily generate pragmatic demonstrations, i.e. the #p don't seem to bother us. so simply stating a complexity of a problem (let's be frank here most problems in life are at least np right?) doesn't automatically absolve you from finding a cheap approximate solutions to it. here's something that might be f...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 1 ]
[ "HGOppxsTWfK", "a0TSdQjq7XV", "VNwxW98Tiex", "6JP6_H9NcY", "oqWjpo-hZL", "hHg_-bG4bA", "9YR_6F8XvVe", "GbnJXniL2BU", "UqxXzXLJQMu", "5XM7zZ0Z1lG", "5XM7zZ0Z1lG", "5XM7zZ0Z1lG", "wm-flttTvmM", "wm-flttTvmM", "wm-flttTvmM", "wm-flttTvmM", "wm-flttTvmM", "WQyuFFxMIcO", "WQyuFFxMIcO"...
nips_2022_jWgGtPmi8c
Outlier-Robust Sparse Mean Estimation for Heavy-Tailed Distributions
We study the fundamental task of outlier-robust mean estimation for heavy-tailed distributions in the presence of sparsity. Specifically, given a small number of corrupted samples from a high-dimensional heavy-tailed distribution whose mean $\mu$ is guaranteed to be sparse, the goal is to efficiently compute a hypothesis that accurately approximates $\mu$ with high probability. Prior work had obtained efficient algorithms for robust sparse mean estimation of light-tailed distributions. In this work, we give the first sample-efficient and polynomial-time robust sparse mean estimator for heavy-tailed distributions under mild moment assumptions. Our algorithm achieves the optimal asymptotic error using a number of samples scaling logarithmically with the ambient dimension. Importantly, the sample complexity of our method is optimal as a function of the failure probability $\tau$, having an {\em additive} $\log(1/\tau)$ dependence. Our algorithm leverages the stability-based approach from the algorithmic robust statistics literature, with crucial (and necessary) adaptations required in our setting. Our analysis may be of independent interest, involving the delicate design of a (non-spectral) decomposition for positive semi-definite matrices satisfying certain sparsity properties.
Accept
The paper studies the problem of sparse mean estimation in the presence of heavy-tailed noise. The results of the paper are novel and interesting. The improvement over the work of Prasad et al. is significant, as the paper provides a computationally efficient method for the problem. Claims about the Huber contamination model should be toned down, as it is not clear whether the result is optimal. The paper generated a fair bit of discussion and, as mentioned above, there are concerns about some of its claims; these should be toned down or should explicitly highlight issues like the dependence on epsilon. Nonetheless, the reviewers agree that the paper represents a significant improvement over the state of the art owing to its computationally efficient method.
train
[ "F-COwFQhUSS", "rEEpu5oHtTE", "ewFF4MNSkw1Q", "IMfc8lFliIY", "OwW6POESMaC", "w4uygFGylc", "sMbI6Nt65Ap", "6PwTenGjNdX", "4VHKXqcNT0u", "ei4fRtPer7T", "NVGR9xAgPrd", "wmPUtC3f7o", "xqhwUbSCxTe" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The paper considers the problem of robust sparse mean estimation. In this problem, we observe samples from a heavy-tailed distribution whose mean is a sparse vector. This distribution is assumed to have bounded variance and is assumed to satisfy a weaker form of bounded 4th moment assumption. In addition, it is a...
[ 6, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 7 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, 4, 1, 4, 4 ]
[ "nips_2022_jWgGtPmi8c", "ewFF4MNSkw1Q", "IMfc8lFliIY", "OwW6POESMaC", "sMbI6Nt65Ap", "xqhwUbSCxTe", "wmPUtC3f7o", "ei4fRtPer7T", "nips_2022_jWgGtPmi8c", "nips_2022_jWgGtPmi8c", "nips_2022_jWgGtPmi8c", "nips_2022_jWgGtPmi8c", "nips_2022_jWgGtPmi8c" ]
nips_2022_-76EsjcHnbj
Instance-Dependent Policy Learning for Linear MDPs via Online Experiment Design
While much progress has been made in understanding the minimax sample complexity of reinforcement learning (RL)---the complexity of learning on the ``worst-case'' instance---such measures of complexity often do not capture the true difficulty of learning. In practice, on an ``easy'' instance, we might hope to achieve a complexity far better than that achievable on the worst-case instance. In this work we seek to understand this ``instance-dependent'' complexity of learning in the setting of RL with linear function approximation. We propose an algorithm, PEDEL, which achieves a fine-grained instance-dependent measure of complexity, the first of its kind in the RL with function approximation setting, thereby capturing the difficulty of learning on each particular problem instance. Through an explicit example, we show that PEDEL yields provable gains over low-regret, minimax-optimal algorithms and that such algorithms are unable to hit the instance-optimal rate. Our approach relies on a novel online experiment design-based procedure which focuses the exploration budget on the ``directions'' most relevant to learning a near-optimal policy, and may be of independent interest.
Accept
This paper considers the instance-dependent complexity of learning in linear MDPs. Based on a novel online experiment design procedure for linear MDPs, the authors propose an algorithm whose sample complexity scales with the instance-dependent complexity. This algorithm is worst-case optimal and beats any low-regret algorithm on an explicit example constructed by the authors. All reviewers are convinced by the contribution of this paper, and we recommend acceptance.
train
[ "MfW_giDvO3h", "-p0tFIyhtnb", "OXiVKenRUr8", "H26QW2W9vo6", "XrJHRPjxeHc", "D4dPyu-khkp", "7ZIC6nMLWk", "tc_CaKEd3x9", "BftUMMNIjw", "mUH4KvKuMp3", "5MRRMEP6yyS" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors/reviewers/AC,\n\nI have read the response and other reviews. My major concern about the tightness is addressed by the author. And I would like to keep my positive rating.", " Dear authors/reviewers/AC,\n\nI've read the response and other reviews and feel confident about my positive rating.\n\nI rec...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "D4dPyu-khkp", "7ZIC6nMLWk", "H26QW2W9vo6", "XrJHRPjxeHc", "tc_CaKEd3x9", "5MRRMEP6yyS", "mUH4KvKuMp3", "BftUMMNIjw", "nips_2022_-76EsjcHnbj", "nips_2022_-76EsjcHnbj", "nips_2022_-76EsjcHnbj" ]
nips_2022_fARM4P0gAJV
Explainable Spatio-Temporal Forecasting with Shape Functions
Spatio-temporal modelling and forecasting are challenging due to complicated spatial dependence, temporal dynamics, and varied scenarios. Many statistical models, such as the Spatial Auto-regression Model (SAR) and the Spatial Dynamic Panel Data Model (SDPD), are restricted by a pre-specified spatial weight matrix and are thus limited in flexibility. Graph-based or convolution-based methods can learn more flexible representations, but they fail to show the exact interactions between locations due to their lack of explainability. This paper proposes a spatial regression model with shape functions to address the limitations of existing methods. Our method learns the shape functions by incorporating shape constraints, which are able to capture spatial variability and effects that vary with distance. Therefore, our approach enjoys a learnable spatial weight matrix with a distance-based explanation. We demonstrate our method's efficiency and forecasting performance on synthetic and real data.
Reject
The paper proposes to use shape functions as basis functions to characterize spatial dependencies. The authors incorporate the shape functions into a spatial regression model and demonstrate strong forecasting performance compared to graph convolution-based methods. While the techniques are interesting, the work is not directly relevant to the ICLR (deep learning) community. The argument for explainability is also subjective and less convincing.
test
[ "kEwcAWuP00", "KicBplLd4TM", "K7fqNQ_OgFcT", "HM8apbrVI6D", "UalOlxncQOv", "hasTRtiazlg", "lAlkWPL1DYW", "Kygv28I1C9Q", "_PzuCGM2w2O", "6MdedQfHYr9", "c9s6M63--5-", "zmZsujkqNJZ", "blQO5Bc2JTH" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer tdLH,\n\nWe appreciate for your work again and we have made some responses for what you concern. As the discussion deadline is approaching, could you please tell us any further concerns you have and we can make clarification accordingly.\n\nWe believe discussions can make our work solid and your com...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "blQO5Bc2JTH", "_PzuCGM2w2O", "lAlkWPL1DYW", "blQO5Bc2JTH", "blQO5Bc2JTH", "lAlkWPL1DYW", "zmZsujkqNJZ", "_PzuCGM2w2O", "c9s6M63--5-", "nips_2022_fARM4P0gAJV", "nips_2022_fARM4P0gAJV", "nips_2022_fARM4P0gAJV", "nips_2022_fARM4P0gAJV" ]
nips_2022_suHUJr7dV5n
On the Convergence Theory for Hessian-Free Bilevel Algorithms
Bilevel optimization has arisen as a powerful tool in modern machine learning. However, due to the nested structure of bilevel optimization, even gradient-based methods require second-order derivative approximations via Jacobian- or/and Hessian-vector computations, which can be costly and unscalable in practice. Recently, Hessian-free bilevel schemes have been proposed to resolve this issue, where the general idea is to use zeroth- or first-order methods to approximate the full hypergradient of the bilevel problem. However, we empirically observe that such approximations can lead to large variance and unstable training, whereas estimating only the response Jacobian matrix as a partial component of the hypergradient turns out to be extremely effective. To this end, we propose a new Hessian-free method, which adopts a zeroth-order-like method to approximate the response Jacobian matrix by taking the difference between two optimization paths. Theoretically, we provide the convergence rate analysis for the proposed algorithms, where our key challenge is to characterize the approximation and smoothness properties of the trajectory-dependent estimator, which can be of independent interest. This is the first known convergence rate result for this type of Hessian-free bilevel algorithm. Experimentally, we demonstrate that the proposed algorithms outperform baseline bilevel optimizers on various bilevel problems. In particular, in our experiment on few-shot meta-learning with a ResNet-12 network on the miniImageNet dataset, we show that our algorithm outperforms baseline meta-learning algorithms, while other baseline bilevel optimizers do not solve such meta-learning problems within a comparable time frame.
Accept
The paper introduces a Hessian-free bi-level optimizer called the partial zeroth-order-like bilevel optimizer (PZOBO). PZOBO uses a zeroth-order-like Jacobian estimator, which provides accurate hypergradient estimates and a computationally effective way of solving bi-level optimization problems. The paper presents a thorough theoretical analysis and experimental validation on various bi-level problems.
Strengths:
1 - The authors address a relevant problem and provide an efficient and accurate solution.
2 - Convergence guarantees are provided for the proposed approach.
3 - Experimental results and comparisons with baselines cover various bi-level problems.
4 - PZOBO generally outperforms baseline methods, especially in high-dimensional problems.
5 - The main idea, to use zeroth-order approximations of the Jacobian of the solution mapping for the lower-level problem with respect to the hyperparameters, appears to be original and significant.
Weaknesses:
1 - One limitation is that many assumptions are required. The assumptions are clearly stated, which helps alleviate this issue, but the authors should address the limitations imposed by the assumptions in the numerical experiments, because it is not made clear that the assumptions needed to apply the theorems are met.
Decision: Overall, all the reviewers vote for acceptance. This is a strong paper with very few limitations and with a significant contribution to the field. Because of this, I recommend acceptance. I encourage the authors to follow the reviewers' comments in order to improve the paper for the camera-ready version.
val
[ "fx_75G85J1m0", "lrkKs0fXzBR", "wVlxyxGk5yYY", "TZjGu56xN0T", "eKAiBYctdL-", "C4foSohG05", "PvwXObQSjYr", "EaXfRoSsnnK", "RLXmLZB8VAJ", "YZdG9d_kZ6", "7Y3r5Kis9a9", "7Qahqjqx9ih", "AesyxwUUbdy", "QsmR1Sg2iPZ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification. My concerns have been fully addressed.", " All of my concerns are addressed by the rebuttal and the revisions and I appreciate the authors' thorough response. I have updated the presentation score to a 4.", " We thank the reviewer very much for the further feedback and for increa...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 4 ]
[ "eKAiBYctdL-", "C4foSohG05", "TZjGu56xN0T", "RLXmLZB8VAJ", "QsmR1Sg2iPZ", "AesyxwUUbdy", "7Y3r5Kis9a9", "RLXmLZB8VAJ", "7Qahqjqx9ih", "nips_2022_suHUJr7dV5n", "nips_2022_suHUJr7dV5n", "nips_2022_suHUJr7dV5n", "nips_2022_suHUJr7dV5n", "nips_2022_suHUJr7dV5n" ]
nips_2022_wo-a8Ji6s3A
Asymptotic Behaviors of Projected Stochastic Approximation: A Jump Diffusion Perspective
In this paper, we consider linearly constrained stochastic approximation problems with federated learning (FL) as a special case. We propose a stochastic approximation algorithm named LPSA with probabilistic projections to ensure feasibility, so that projections are performed with probability $p_n$ at the $n$-th iteration. Considering a specific family of the probability $p_n$ and step size $\eta_n$, we analyze our algorithm from an asymptotic and continuous perspective. Using a novel jump diffusion approximation, we show that the trajectories consisting of properly rescaled last iterates weakly converge to the solutions of specific SDEs. By analyzing the SDEs, we identify the asymptotic behaviors of LPSA for different choices of $(p_n, \eta_n)$. We find the algorithm exhibits an intriguing asymptotic bias-variance trade-off according to the relative magnitude of $p_n$ w.r.t. $\eta_n$. This provides insight into how to choose appropriate $\{(p_n, \eta_n)\}_{n \geq 1}$ to minimize the projection complexity.
Accept
All the reviewers agreed that the paper is novel, interesting, and the results are mathematically rigorous. Some of the concerns have been aptly addressed by the authors during the rebuttal period. I congratulate the authors for the nice work and recommend an acceptance.
train
[ "pJUMuCbR0Q", "9R4fhlGBT7P", "8rGBOUw6mC", "tlqadAu5bA_", "4foIx64mZsY", "jTFx-mfW_NB", "aVZYCqXd08C", "fGlKX1hxPx" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for these details / corrections.", " Thank you for your appreciation and recognition of our work. Here are our responses to your questions.\n\n> *It could help readers understand the effect of ... along with the MSE figure.*\n\nWe have added the log-log scale figures of averaged MSEs vs. the number of...
[ -1, -1, -1, -1, -1, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, 2, 2, 3 ]
[ "8rGBOUw6mC", "jTFx-mfW_NB", "fGlKX1hxPx", "4foIx64mZsY", "aVZYCqXd08C", "nips_2022_wo-a8Ji6s3A", "nips_2022_wo-a8Ji6s3A", "nips_2022_wo-a8Ji6s3A" ]
nips_2022_5hgYi4r5MDp
Recall Distortion in Neural Network Pruning and the Undecayed Pruning Algorithm
Pruning techniques have been successfully used in neural networks to trade accuracy for sparsity. However, the impact of network pruning is not uniform: prior work has shown that the recall for underrepresented classes in a dataset may be more negatively affected. In this work, we study such relative distortions in recall by hypothesizing an intensification effect that is inherent to the model: namely, that pruning makes recall relatively worse for a class with recall below accuracy and, conversely, relatively better for a class with recall above accuracy. In addition, we propose a new pruning algorithm aimed at attenuating such an effect. Through statistical analysis, we have observed that intensification is less severe with our algorithm but nevertheless more pronounced with relatively more difficult tasks, less complex models, and higher pruning ratios. More surprisingly, we conversely observe a de-intensification effect with lower pruning ratios.
Accept
This paper studies the disparate effect of model pruning across classes and proposes a new method to reduce the "recall distortion" across classes. This is a critically important problem, and one which has just begun to be carefully studied in the literature, so this work is timely and relevant. All reviewers recognized the relevance of the problem and the novelty of the authors' approach, both with respect to the new approach presented here, as well as the detailed analysis of the various factors which impact recall distortion. There were some concerns regarding the complexity of the pruning algorithms studied, but the authors provided a number of additional experiments on other pruning approaches in their response, finding qualitatively similar effects (as might be expected given the reliance of many of these approaches on some form of magnitude pruning). I think this paper will be a valuable addition to a poorly understood and important research area, and should be accepted.
test
[ "VkI6tps9M0W", "j7ToTSeu8H", "V0xPjY8JGXXx", "4qAC-Goa4yX", "DyvXr-7ilG", "3fuzH8nW4dW", "siMNTXgVZR", "NKGrtgN5O26", "VhrgctRFYtD", "GKoaEWn5MN2", "Vd5-psn_A0", "6TL5e42TVC7", "ztxbP8Yz6c", "HMyPbw9CmI1", "lsLe5E8mP8_", "-TJLm-O2V8c", "n1tjaANxvvhm", "e8fcohud-4D", "rcUp4_MIe2O"...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Excellent, thank you!", " We studied in more detail the effect of the intensification in the recent pruning methods LTH and CHIP on ResNet-56. In both cases, we were able to show that the intensification ratio ultimately increases with the pruning ratio and that an intensification above 1 consistently occurs if...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "V0xPjY8JGXXx", "nips_2022_5hgYi4r5MDp", "4qAC-Goa4yX", "ztxbP8Yz6c", "nips_2022_5hgYi4r5MDp", "UKfz6ixIfN1", "rcUp4_MIe2O", "UKfz6ixIfN1", "UKfz6ixIfN1", "UKfz6ixIfN1", "rcUp4_MIe2O", "rcUp4_MIe2O", "e8fcohud-4D", "e8fcohud-4D", "e8fcohud-4D", "e8fcohud-4D", "e8fcohud-4D", "nips_2...
nips_2022__cFdPHRLuJ
Curriculum Reinforcement Learning using Optimal Transport via Gradual Domain Adaptation
Curriculum Reinforcement Learning (CRL) aims to create a sequence of tasks, starting from easy ones and gradually learning towards difficult tasks. In this work, we focus on the idea of framing CRL as interpolations between a source (auxiliary) and a target task distribution. Although existing studies have shown the great potential of this idea, it remains unclear how to formally quantify and generate the movement between task distributions. Inspired by the insights from gradual domain adaptation in semi-supervised learning, we create a natural curriculum by breaking down the potentially large task distributional shift in CRL into smaller shifts. We propose GRADIENT, which formulates CRL as an optimal transport problem with a tailored distance metric between tasks. Specifically, we generate a sequence of task distributions as a geodesic interpolation between the source and target distributions, which are in fact Wasserstein barycenters. Different from many existing methods, our algorithm considers a task-dependent contextual distance metric and is capable of handling nonparametric distributions in both continuous and discrete context settings. In addition, we theoretically show that GRADIENT enables smooth transfer between subsequent stages in the curriculum under certain conditions. We conduct extensive experiments in locomotion and manipulation tasks and show that our proposed GRADIENT achieves higher performance than baselines in terms of learning efficiency and asymptotic performance.
Accept
The submission makes a strong case for using an optimal transport-based curriculum learning method, with principled theoretical insights and a convincing experimental study. At the same time, there is a significant overlap between this paper and prior work on using optimal transport for curriculum learning and self-paced learning, which needs to be recognized and discussed more extensively in the related work.
train
[ "BURjjri0emU", "ZSMcms38miZ", "p8gROrIJFAD", "8WlySnvGhB", "-NNOny18JxG", "foXniezSWGc", "jyaRA55ajDy", "bpSAVGOeJuO", "1xok7g4WPhZ", "pFEtxA5Y3U", "n6S49K3HRssd", "w9rJRXuuXnw", "7RnEnk0p6aB", "nXPpKfXj6ua", "imoqNLKlHc9", "xSiCPmyUMVR", "YLobsQDFpM5", "tmV_DoqMuZD", "PrYNxdWoRA...
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " Thank you very much for the valuable suggestion and discussion! We are so glad that our clarifications served a better understanding and addressed your concerns. We deeply appreciate your support!", " Thank you very much for the prompt response and raising the rating! We are glad that we have addressed your con...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 3 ]
[ "foXniezSWGc", "nXPpKfXj6ua", "-NNOny18JxG", "bpSAVGOeJuO", "imoqNLKlHc9", "1xok7g4WPhZ", "7RnEnk0p6aB", "w9rJRXuuXnw", "pFEtxA5Y3U", "YLobsQDFpM5", "nips_2022__cFdPHRLuJ", "YL6Inhcks30", "YL6Inhcks30", "VhN0_78AExl", "PrYNxdWoRAZ", "tmV_DoqMuZD", "tmV_DoqMuZD", "nips_2022__cFdPHRL...
nips_2022_jqzoJw7xamd
TaSIL: Taylor Series Imitation Learning
We propose Taylor Series Imitation Learning (TaSIL), a simple augmentation to standard behavior cloning losses in the context of continuous control. TaSIL penalizes deviations in the higher-order Taylor series terms between the learned and expert policies. We show that experts satisfying a notion of incremental input-to-state stability are easy to learn, in the sense that a small TaSIL-augmented imitation loss over expert trajectories guarantees a small imitation loss over trajectories generated by the learned policy. We provide sample-complexity bounds for TaSIL that scale as $\tilde{\mathcal{O}}(1/n)$ in the realizable setting, for $n$ the number of expert demonstrations. Finally, we demonstrate experimentally the relationship between the robustness of the expert policy and the order of Taylor expansion required in TaSIL, and compare standard Behavior Cloning, DART, and DAgger with TaSIL-loss-augmented variants. In all cases, we show significant improvement over baselines across a variety of MuJoCo tasks.
Accept
Guided by theoretical analysis, this paper proposes a new objective for behavioral cloning based on penalizing deviations in higher-order Taylor series terms of policies and demonstrates the benefits in MuJoCo experiments. The method is limited by its need for the Jacobians of the policy to be imitated (demonstrated human trajectories in standard imitation learning do not provide such information; instead, algorithmically produced policies are needed). However, the authors provide some examples that seem sufficiently motivating. Another emphasized reviewer concern is the feasibility of identifying the necessary conditions for the theoretical analysis in actual tasks/demonstration policies, but I tend to agree with the authors that this is an orthogonal question to operationalizing the assumption to develop the proposed methods. Overall, I think the authors have addressed many of the lesser concerns of the reviewers; I agree with the majority positive opinion of the reviewers and recommend the paper be accepted.
test
[ "eOe5kibZlN1", "7JIiimsc-K-", "osq9ykvevox", "R_speE2gwID", "Rr_mZ7mXNZA", "vO5HCZX9IIU", "61cC2DtEnpD", "RXprFOFBOz", "IAItMwRvf-s", "-X_TBManyXz", "zs1sahNql9L" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " A friendly reminder that the reviewer/author discussion period ends tomorrow. If there are any additional points regarding our response that you would like clarified, please don't hesitate to reach out with further questions.", " Thank you for your detailed reply to my questions and the questions of the other ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "R_speE2gwID", "zs1sahNql9L", "zs1sahNql9L", "-X_TBManyXz", "IAItMwRvf-s", "RXprFOFBOz", "nips_2022_jqzoJw7xamd", "nips_2022_jqzoJw7xamd", "nips_2022_jqzoJw7xamd", "nips_2022_jqzoJw7xamd", "nips_2022_jqzoJw7xamd" ]
nips_2022_LdKdbHw3A_6
A Unifying Framework of Off-Policy General Value Function Evaluation
The General Value Function (GVF) is a powerful tool to represent both the {\em predictive} and {\em retrospective} knowledge in reinforcement learning (RL). In practice, multiple interrelated GVFs often need to be evaluated jointly with pre-collected off-policy samples. In the literature, the gradient temporal difference (GTD) learning method has been adopted to evaluate GVFs in the off-policy setting, but such an approach may suffer from a large estimation error even if the function approximation class is sufficiently expressive. Moreover, no previous work has formally established a convergence guarantee to the ground truth GVFs under the function approximation setting. In this paper, we address both issues through the lens of a class of GVFs with causal filtering, which covers a wide range of RL applications such as reward variance, value gradient, cost in anomaly detection, stationary distribution gradient, etc. We propose a new algorithm called GenTD for off-policy GVF evaluation and show that GenTD learns multiple interrelated multi-dimensional GVFs as efficiently as a single canonical scalar value function. We further show that, unlike GTD, the GVFs learned by GenTD are guaranteed to converge to the ground truth GVFs as long as the function approximation power is sufficiently large. To the best of our knowledge, GenTD is the first off-policy GVF evaluation algorithm with a global optimality guarantee.
Accept
The paper proposes a new algorithm called GenTD for the estimation of multiple general value functions (predictive and retrospective) from off-policy data. The paper shows convergence guarantees for this algorithm to the ground truth for a certain class of general value functions with causal filtering. The initial reviews were mixed. On the positive side, the reviewers found the writing to be clear overall, found the studied problem important and appreciated the theoretical results. On the negative side, several reviewers voiced concerns regarding the experimental evaluation. Other concerns are the limitation of the linear setting and possible extensions to the non-linear setting as well as the significance, specifically, whether this work is merely a combination GTD and density ratio estimation. The authors' response could alleviate these concerns, further clarifying the contributions of the paper as well as adding additional experimental results. After the discussion with the authors, all reviewers view the paper positively and the AC agrees. All in all, this paper is recommended to be accepted.
test
[ "EzwHMd__Rem", "_P-ypoXuIk", "iR0LDg_BKAS", "-3XenXrPmK", "HvG-XNEhXh5", "59vnR_SBcGN", "pCEH0Db63STp", "T-fVKgL86l", "voyOJA521o0H", "PDLrK3-Rck", "MYknuFfRac", "XS0MRL4F91", "0IEjxHzlkx8", "iScjcWOoOJ", "ZR6uAv5SYO", "1mei6E6B58W", "SH1VBIYbDQA", "T9VL-U2wTHe", "xPD7JCMW0Ks" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer very much for the further feedback and for raising the score!", " We truly thank all reviewers’ insightful and constructive comments, which helped significantly to improve our paper. We also thank the ACs for their time and efforts during the review process. ", " Thanks for the rebuttal ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2, 2 ]
[ "iR0LDg_BKAS", "nips_2022_LdKdbHw3A_6", "SH1VBIYbDQA", "59vnR_SBcGN", "SH1VBIYbDQA", "pCEH0Db63STp", "voyOJA521o0H", "MYknuFfRac", "ZR6uAv5SYO", "SH1VBIYbDQA", "XS0MRL4F91", "xPD7JCMW0Ks", "T9VL-U2wTHe", "SH1VBIYbDQA", "1mei6E6B58W", "nips_2022_LdKdbHw3A_6", "nips_2022_LdKdbHw3A_6", ...
nips_2022_CHMJSfuIX8
Using Embeddings for Causal Estimation of Peer Influence in Social Networks
We address the problem of using observational data to estimate peer contagion effects, the influence of treatments applied to individuals in a network on the outcomes of their neighbors. A main challenge to such estimation is that homophily - the tendency of connected units to share similar latent traits - acts as an unobserved confounder for contagion effects. Informally, it's hard to tell whether your friends have similar outcomes because they were influenced by your treatment, or whether it's due to some common trait that caused you to be friends in the first place. Because these common causes are not usually directly observed, they cannot be simply adjusted for. We describe an approach to perform the required adjustment using node embeddings learned from the network itself. The main aim is to perform this adjustment nonparametrically, without functional form assumptions on either the process that generated the network or the treatment assignment and outcome processes. The key contributions are to nonparametrically formalize the causal effect in a way that accounts for homophily, and to show how embedding methods can be used to identify and estimate this effect.
Accept
This paper introduces a method for causal estimation of peer influence in networks in the presence of a confounding effect of homophily. The reviewers agreed that the idea proposed in the paper is novel and well-motivated, and the accompanying theoretical analysis is useful. While there were concerns about the lack of experimental variation in more realistic scenarios, the consensus is that the contribution is sufficiently novel and significant to warrant its acceptance.
train
[ "hb0-iOTHnqp", "WqBpiQmRxA8", "Ziu3XCPyY2p", "ozXbdcc4_eZ", "OjBdubNkG2o", "jVRrM0CrzRt", "DVibWNRsbKX", "yZTunY2KThC", "79boKw8kYu", "ECSgubnlQFX", "NKscfTQNLh", "jmeTuK_L4zO", "hc3ylt3lz67", "ByyorDZibMd", "Qc68JRTTT83", "KUUubFQ-50g" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I believe my primary concern is addressed. I am updating my review accordingly.", " Thank you for your explanations that address my major concerns. I am increasing my score accordingly.", " As a practical matter for the choice of embedding method, a key consideration is the ability of the embedding method to ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "Ziu3XCPyY2p", "NKscfTQNLh", "ozXbdcc4_eZ", "OjBdubNkG2o", "yZTunY2KThC", "ECSgubnlQFX", "NKscfTQNLh", "KUUubFQ-50g", "Qc68JRTTT83", "ByyorDZibMd", "hc3ylt3lz67", "nips_2022_CHMJSfuIX8", "nips_2022_CHMJSfuIX8", "nips_2022_CHMJSfuIX8", "nips_2022_CHMJSfuIX8", "nips_2022_CHMJSfuIX8" ]
nips_2022_qw3MZb1Juo
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning
In federated learning, a strong global model is collaboratively learned by aggregating clients' locally trained models. Although this precludes the need to access clients' data directly, the global model's convergence often suffers from data heterogeneity. This study starts from an analogy to continual learning and suggests that forgetting could be the bottleneck of federated learning. We observe that the global model forgets the knowledge from previous rounds, and that local training induces forgetting of the knowledge outside the local distribution. Based on our findings, we hypothesize that tackling forgetting will relieve the data heterogeneity problem. To this end, we propose a novel and effective algorithm, Federated Not-True Distillation (FedNTD), which preserves the global perspective on locally available data only for the not-true classes. In the experiments, FedNTD shows state-of-the-art performance on various setups without compromising data privacy or incurring additional communication costs.
Accept
This paper studies the data heterogeneity problem in federated learning. It is well written, with some novel ideas and solid efforts on analyzing the problem of forgetting and on the experimental studies. We hope the authors will revise the paper carefully per the reviewers' suggestions and add the new experimental results from the rebuttal into the final version.
train
[ "WlJEs7JN4KY", "cVCgweta59", "QN0bQa52_-y", "0ab8eAHFl9W", "BCpRgvzmAI-", "l8oMbHUt-L", "I1PVHTR5KB", "oXZO54YrROM", "SZlAeEEZdDO", "VLSH8MHZKVq", "V6SPseiE98", "CX5nFE5xw2-", "602b2RbRxy", "s-VK77J72hd", "a4OJyqEqNs3", "EAMGJnluX0B" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Our theoretical analysis aims to provide the importance of knowledge preservation on out-local distribution and how NTD achieves it.\n\n>In **Proposition 1**, we suggest that increasing the regularization coefficient on out-local distribution quadratically reduces the local gradient diversity in the FL system, gu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "0ab8eAHFl9W", "0ab8eAHFl9W", "0ab8eAHFl9W", "a4OJyqEqNs3", "nips_2022_qw3MZb1Juo", "s-VK77J72hd", "a4OJyqEqNs3", "a4OJyqEqNs3", "a4OJyqEqNs3", "a4OJyqEqNs3", "EAMGJnluX0B", "EAMGJnluX0B", "EAMGJnluX0B", "nips_2022_qw3MZb1Juo", "nips_2022_qw3MZb1Juo", "nips_2022_qw3MZb1Juo" ]
nips_2022_lIeuKiTZsLY
Latent Hierarchical Causal Structure Discovery with Rank Constraints
Most causal discovery procedures assume that there are no latent confounders in the system, which is often violated in real-world problems. In this paper, we consider a challenging scenario for causal structure identification, where some variables are latent and may form a hierarchical graph structure to generate the measured variables; the children of latent variables may still be latent and only leaf nodes are measured, and moreover, there can be multiple paths between every pair of variables (i.e., the structure is beyond a tree). We propose an estimation procedure that can efficiently locate latent variables, determine their cardinalities, and identify the latent hierarchical structure, by leveraging rank deficiency constraints over the measured variables. We show that the proposed algorithm can asymptotically find the correct Markov equivalence class of the whole graph under proper restrictions on the graph structure and with linear causal relations.
Accept
All reviewers and the AC agree that this work is clearly of interest to NeurIPS. Two out of three reviewers increased their scores after their concerns were successfully addressed during the rebuttal. Reviewer eq5G's main concern was the weakness of the experimental results. After reading the authors' response, the AC believes that the authors could introduce the additional supplementary experiments they ran to cover this concern. A suggestion for better presentation is also communicated to the authors. Acceptance is recommended.
train
[ "7MGmWpsI-O", "uj9qLk13H5S", "bGTKP-hRMk", "bQDLCPJblNV", "-3nfSxOeZTTU", "ehdYRH_APQw", "O7V6_7BtV-l", "rv7gD3FmFH", "CFXzz_R4H_A", "c_ZSJ0EX7zB", "hs5PrXFRVND", "4SnSDeLdbFN", "OXuqiaXzeAm", "xqHhA_ZFFt", "AxgPPdHtc-P" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer eq5G,\n\nThanks for providing the Author Rebuttal Acknowledgement. Given that the discussion involving authors will end in 6 hours, could you please let us know whether your main concerns were addressed by our response and updated submission? If there is any other concern, please let us know, and w...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "hs5PrXFRVND", "bGTKP-hRMk", "4SnSDeLdbFN", "4SnSDeLdbFN", "hs5PrXFRVND", "O7V6_7BtV-l", "c_ZSJ0EX7zB", "AxgPPdHtc-P", "AxgPPdHtc-P", "AxgPPdHtc-P", "xqHhA_ZFFt", "OXuqiaXzeAm", "nips_2022_lIeuKiTZsLY", "nips_2022_lIeuKiTZsLY", "nips_2022_lIeuKiTZsLY" ]
nips_2022_drVX99PekKf
Bandit Theory and Thompson Sampling-Guided Directed Evolution for Sequence Optimization
Directed Evolution (DE), a landmark wet-lab method that originated in the 1960s, enables discovery of novel protein designs via evolving a population of candidate sequences. Recent advances in biotechnology have made it possible to collect high-throughput data, allowing the use of machine learning to map out a protein's sequence-to-function relation. There is a growing interest in machine learning-assisted DE for accelerating protein optimization. Yet the theoretical understanding of DE, as well as of the use of machine learning in DE, remains limited. In this paper, we connect DE with bandit learning theory and make a first attempt to study regret minimization in DE. We propose a Thompson Sampling-guided Directed Evolution (TS-DE) framework for sequence optimization, where the sequence-to-function mapping is unknown and querying a single value is subject to costly and noisy measurements. TS-DE updates a posterior of the function based on collected measurements. It uses a posterior-sampled function estimate to guide the crossover recombination and mutation steps in DE. In the case of a linear model, we show that TS-DE enjoys a Bayesian regret of order $\tilde O(d^{2}\sqrt{MT})$, where $d$ is the feature dimension, $M$ is the population size, and $T$ is the number of rounds. This regret bound is nearly optimal, confirming that bandit learning can provably accelerate DE. It may have implications for more general sequence optimization and evolutionary algorithms.
Accept
The initial round of reviews for the submitted manuscript was mostly positive in tone, but this enthusiasm was tempered by a number of deep technical issues -- and some more philosophical issues regarding the presentation and framing of the results -- raised by the reviewers. Fortunately, the author rebuttal and author--reviewer discussion phases went a long way toward clearing up some initial confusion and clarifying the contributions of the authors, which swayed the prevailing opinion of the reviewers toward acceptance. I want to commend the authors for their enlightening contributions to that discussion, which assuaged most of the reviewers' initial complaints. However, I would also like to stress that it is critical that the fruits of this discussion (especially with reviewers X52n and fbLu) be incorporated into a revised version of this manuscript. The reviewers are unanimous in this opinion.
train
[ "tQhXldEK1fC", "FfpZEeDkfU9", "uMp3v_vWECo", "NUm5HvI6zWp", "WGSeFkMEBiU", "3BFxgzVqS3", "DzgtbnPA8Hg", "biVYHM0_y6h", "bCpNAAVw7Vf", "AOM1Af0JWK", "fdSqLsSHuc", "ek7v448soPwO", "8oSZuOLhAD", "sGmf7afovGZ", "g3rdCxLVKwl", "sbQdeP9vqPJ", "YTcaHzix-2L", "H437JvcAt28", "feMA_wp_q0r"...
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi Reviewer, thank you for the update and for the discussions!", " Dear Reviewer,\n\nThis is a gentle reminder to check if you have read our response and have any follow up question?\n\nIn case you might have missed to see our common response to all reviewers about the linear model assumption. You can find it h...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2, 3 ]
[ "NUm5HvI6zWp", "g3rdCxLVKwl", "WGSeFkMEBiU", "3BFxgzVqS3", "YTcaHzix-2L", "1sU4m940BDY", "biVYHM0_y6h", "8oSZuOLhAD", "1sU4m940BDY", "fdSqLsSHuc", "YTcaHzix-2L", "8oSZuOLhAD", "H437JvcAt28", "nips_2022_drVX99PekKf", "feMA_wp_q0r", "1sU4m940BDY", "nips_2022_drVX99PekKf", "nips_2022_...
nips_2022_rg_yN3HpCp
Label-invariant Augmentation for Semi-Supervised Graph Classification
Recently, contrastive augmentation has reached a new peak in the computer vision domain, where operations including rotation, crop, and flip, combined with dedicated algorithms, dramatically increase model generalization and robustness. Following this trend, some pioneering attempts apply a similar idea to graph data. Nevertheless, unlike images, it is much more difficult to design reasonable augmentations without changing the nature of graphs. Although promising, current graph contrastive learning does not achieve performance as strong as that of visual contrastive learning. We conjecture that the current performance of graph contrastive learning might be limited by the violation of the label-invariant augmentation assumption. In light of this, we propose a label-invariant augmentation for graph-structured data to address this challenge. Different from node/edge modification and subgraph extraction, we conduct the augmentation in the representation space and generate the augmented samples in the most difficult direction while keeping the label of the augmented data the same as that of the original samples. In the semi-supervised scenario, we demonstrate that our proposed method outperforms classical graph neural network based methods and recent graph contrastive learning on eight benchmark graph-structured datasets, followed by several in-depth experiments to further explore label-invariant augmentation in several aspects.
Accept
The paper received final scores of 6/6/7, and all the reviewers think the authors have addressed their initial concerns. The AC also views this work positively and recommends accepting this paper.
test
[ "FKG68M6_NJ", "aoEKWOIQZ1t", "qrUKTFRYxU", "Q5MbnxT8ykg", "okQqHakzQtK", "bcTyV_oH3I6", "Yv86Duv3I1f", "P5w2x03XpT0", "VXiKe1KJESn", "v3ds2udH5U0" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are approaching the end of the rebuttal period, and the authors have provided feedback to which your further response is appreciated, especially seeing there are dispute on the rating among the reviewers.", " Thank you very much for reviewing our paper and prompt response. We are happy to address all the con...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "v3ds2udH5U0", "qrUKTFRYxU", "bcTyV_oH3I6", "okQqHakzQtK", "v3ds2udH5U0", "VXiKe1KJESn", "P5w2x03XpT0", "nips_2022_rg_yN3HpCp", "nips_2022_rg_yN3HpCp", "nips_2022_rg_yN3HpCp" ]
nips_2022_uLhKRH-ovde
A Communication-Efficient Distributed Gradient Clipping Algorithm for Training Deep Neural Networks
In distributed training of deep neural networks, people usually run Stochastic Gradient Descent (SGD) or its variants on each machine and communicate with other machines periodically. However, SGD might converge slowly in training some deep neural networks (e.g., RNN, LSTM) because of the exploding gradient issue. Gradient clipping is usually employed to address this issue in the single machine setting, but exploring this technique in the distributed setting is still in its infancy: it remains mysterious whether the gradient clipping scheme can take advantage of multiple machines to enjoy parallel speedup. The main technical difficulty lies in dealing with nonconvex loss functions, non-Lipschitz continuous gradients, and skipping communication rounds simultaneously. In this paper, we explore a relaxed-smoothness assumption of the loss landscape which LSTM was shown to satisfy in previous works, and design a communication-efficient gradient clipping algorithm. This algorithm can be run on multiple machines, where each machine employs a gradient clipping scheme and communicates with other machines after multiple steps of gradient-based updates. Our algorithm is proved to have $O\left(\frac{1}{N\epsilon^4}\right)$ iteration complexity and $O(\frac{1}{\epsilon^3})$ communication complexity for finding an $\epsilon$-stationary point in the homogeneous data setting, where $N$ is the number of machines. This indicates that our algorithm enjoys linear speedup and reduced communication rounds. Our proof relies on novel analysis techniques for estimating truncated random variables, which we believe are of independent interest. Our experiments on several benchmark datasets and various scenarios demonstrate that our algorithm indeed exhibits fast convergence speed in practice and thus validates our theory.
Accept
The paper presents a natural idea to combine local SGD with gradient clipping for communication-efficient distributed training. The authors have addressed concerns from reviewers well (e.g., they included an ImageNet experiment). Overall, a well-motivated theoretical contribution with sufficient empirical validation. My recommendation is to accept this paper.
train
[ "g2JHfluyxt", "v2NGtYHvMZA", "trUpendqgU", "Eg4Ktxe5NpT", "y7nGyAfFulp", "i77ZcXdnD1P", "knVBP1qu7xq", "jo2sqJ2MSdi", "aeoSefva0JP", "Nhyvk12H-Sg", "5tzJQSGJCci", "MZ2mTu5_8f3", "0rogHag6WuK", "NBvUhYTkP9", "GkC64AIQlyt", "BrCMUmyNcC", "8eJxHbvgza8", "rJzHk99Qx48", "zWVzK2frLJL",...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Dear Reviewer xkbT,\n\nWe kindly suggest you to update your evaluation model when you have more new information from us. It seems that you are still asking exactly the same questions even after reading our response. It would make us difficult to further clarify your concerns.\n\nCould you please let us know what ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Eg4Ktxe5NpT", "y7nGyAfFulp", "jo2sqJ2MSdi", "sSwqaqQ6KvS", "knVBP1qu7xq", "Wa9qzlgGvY9", "nips_2022_uLhKRH-ovde", "GkC64AIQlyt", "Nhyvk12H-Sg", "0kXwlu0RjZw", "qQZGJqgw8g6", "esXDz-nC0B2", "Wa9qzlgGvY9", "8eJxHbvgza8", "BrCMUmyNcC", "rJzHk99Qx48", "zWVzK2frLJL", "FSABUQD1kfj", "...
nips_2022__sQ6pLNVHoh
Task-Agnostic Graph Explanations
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph-structured data. Due to their broad applications, there is an increasing need to develop tools to explain how GNNs make decisions given graph-structured data. Existing learning-based GNN explanation approaches are task-specific in training and hence suffer from crucial drawbacks. Specifically, they are incapable of producing explanations for a multitask prediction model with a single explainer. They are also unable to provide explanations in cases where the GNN is trained in a self-supervised manner, and the resulting representations are used in future downstream tasks. To address these limitations, we propose a Task-Agnostic GNN Explainer (TAGE) that is independent of downstream models and trained under self-supervision with no knowledge of downstream tasks. TAGE enables the explanation of GNN embedding models with unseen downstream tasks and allows efficient explanation of multitask models. Our extensive experiments show that TAGE can significantly speed up the explanation efficiency by using the same model to explain predictions for multiple downstream tasks while achieving explanation quality as good as or even better than current state-of-the-art GNN explanation approaches.
Accept
This paper proposed an extension to existing graph explanation approaches, by introducing the task-agnostic concept that decouples the representation learning and task adaptation modules. All reviewers agree that the paper is well motivated and has practical values. During the rebuttal the authors have also addressed several concerns raised by the reviewers. Based on the agreement from all the reviewers we recommend the acceptance of this paper.
train
[ "ImTQLYe3pN9", "qxUivmPh6Id", "pLkuTxITcP5", "IUQz2K5yeB", "u5YSubr5qHgL", "h39_OOvyNZF", "TkmPio71WTR", "wUv4mkUkUub", "Q2qZYCmFTAA", "V7rtuZlTAtw", "oYuhaQKJgTe", "G5oD2jgsAn", "Lze-YhahJ4G", "nEtyKd2IpJ2" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for answering my questions.\n\nPlease add the following reference.\n\nWang, Xiang, Yingxin Wu, An Zhang, Xiangnan He, and Tat-Seng Chua. \"Towards multi-grained explainability for graph neural networks.\" Advances in Neural Information Processing Systems 34 (2021): 18446-18458.", " Thank you for your fur...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 3 ]
[ "Q2qZYCmFTAA", "pLkuTxITcP5", "IUQz2K5yeB", "u5YSubr5qHgL", "wUv4mkUkUub", "nEtyKd2IpJ2", "Lze-YhahJ4G", "Lze-YhahJ4G", "G5oD2jgsAn", "oYuhaQKJgTe", "nips_2022__sQ6pLNVHoh", "nips_2022__sQ6pLNVHoh", "nips_2022__sQ6pLNVHoh", "nips_2022__sQ6pLNVHoh" ]
nips_2022_AyiiHcRzTd
Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate
Decentralized optimization is an emerging paradigm in distributed learning in which agents achieve network-wide solutions by peer-to-peer communication without the central server. Since communication tends to be slower than computation, when each agent communicates with only a few neighboring agents per iteration, they can complete iterations faster than with more agents or a central server. However, the total number of iterations to reach a network-wide solution is affected by the speed at which the information of the agents is ``mixed'' by communication. We found that popular communication topologies either have large degrees (such as stars and complete graphs) or are ineffective at mixing information (such as rings and grids). To address this problem, we propose a new family of topologies, EquiTopo, which has an (almost) constant degree and a network-size-independent consensus rate; the consensus rate is used to measure the mixing efficiency. In the proposed family, EquiStatic has a degree of $\Theta(\ln(n))$, where $n$ is the network size, and a series of time-varying one-peer topologies, EquiDyn, has a constant degree of 1. We generate EquiDyn through a certain random sampling procedure. Both of them achieve an $n$-independent consensus rate. We apply them to decentralized SGD and decentralized gradient tracking and obtain faster communication and better convergence, both theoretically and empirically. Our code is implemented through BlueFog and available at https://github.com/kexinjinnn/EquiTopo
Accept
This paper introduces a framework for communication scheduling in fully connected networks, in order to improve the convergence rate of decentralized learning algorithms (specifically, the mixing rate of the underlying consensus process). Both broadcast/convergecast (one-to-many, many-to-one) and directed/undirected communication is considered. I believe that this is a useful addition to the toolset for decentralized optimization on fully connected networks and therefore recommend acceptance. A limitation not raised by the reviewers is the lack of an efficient method for constructing the graph: Algorithm 3 essentially performs a random search that could take a long time to converge if rho is selected close to its limit (or that would not converge if rho were chosen too aggressively).
train
[ "9ohhqQ7qukl", "MWEskxb1ll", "Q95hgJGKAAq", "bKJoSJOUmEm", "a-pOmKRob1S", "NiLQYTcvrpT", "tgIb8U-3Dm", "QFqEFiPMV3L", "8YAGqB16AOd", "Vikl4JRo18", "pnk7pryFjuE", "hX5tlVKBG9H", "JpMstWkvMls" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \\\nDear Reviewer fyHE,\n\n\\\nThanks very much for your positive comments. We are happy that you find our work novel. \n\n\\\nWe have addressed your questions in our rebuttal (especially those on transient iteration complexity, push-sum algorithm, and new results provided by Theorem 6). Since the reviewer-author...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "pnk7pryFjuE", "hX5tlVKBG9H", "bKJoSJOUmEm", "tgIb8U-3Dm", "nips_2022_AyiiHcRzTd", "JpMstWkvMls", "JpMstWkvMls", "hX5tlVKBG9H", "pnk7pryFjuE", "pnk7pryFjuE", "nips_2022_AyiiHcRzTd", "nips_2022_AyiiHcRzTd", "nips_2022_AyiiHcRzTd" ]
nips_2022_LCOv-GVVDkp
Human-AI Shared Control via Policy Dissection
Human-AI shared control allows humans to interact and collaborate with autonomous agents to accomplish control tasks in complex environments. Previous Reinforcement Learning (RL) methods attempted goal-conditioned designs to achieve human-controllable policies at the cost of redesigning the reward function and training paradigm. Inspired by the neuroscience approach to investigating the motor cortex in primates, we develop a simple yet effective frequency-based approach called Policy Dissection to align the intermediate representation of the learned neural controller with the kinematic attributes of the agent behavior. Without modifying the neural controller or retraining the model, the proposed approach can convert a given RL-trained policy into a human-controllable policy. We evaluate the proposed approach on many RL tasks such as autonomous driving and locomotion. The experiments show that the human-AI shared control system achieved by Policy Dissection in the driving task can substantially improve performance and safety in unseen traffic scenes. With a human in the inference loop, the locomotion robots also exhibit versatile controllable motion skills even though they are only trained to move forward. Our results suggest a promising direction for implementing human-AI shared autonomy through interpreting the learned representation of the autonomous agents. Code and demo videos are available at https://metadriverse.github.io/policydissect
Accept
This paper proposes to dissect a trained policy, which finds the correspondence between the neuron activations and the motion primitives. This enables a human to control the agent to complete complex and diverse tasks even though only one simple task is trained. All reviewers agree that the proposed method is a creative solution to an important problem. And it may also have important applications in robotics. In addition, this is an important step towards understanding/interpreting neural network policies, which may inspire follow-up works in the future. There are a few concerns and suggestions in the original reviews. Most of them are sufficiently addressed in the rebuttal and discussions. The new experimental results look impressive, and help to resolve a major concern about the coarseness of the human control that this paper enables. While the paper writing is still rough, the creative solution and the potential important applications compensate for the shortcomings. Thus, I recommend accepting this paper. Please revise the paper by incorporating reviewers' comments.
train
[ "Gr8rSeVVOyF", "8qesjRZ_Yjs", "XrtCZNfbWUn", "mQVIh2YXeXz", "zUOp4qwkUYv", "GLJY8T0wZx9", "opxIMGjiNit", "oRpzTFPUDF6", "_On6GdcDazs", "z8-Jrmy8kRY", "ZAKw1rf6Mrl", "8Ti6biSWLr5", "kGPVv8OcjAh", "Qr_kbMyBOT6", "oM7BRxY53v8", "RZioKYKuSi_", "EjxcQ5CcHXr", "1NtWWTUOtRd" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'm increasing my score to borderline reject. I can't go over that because I still feel that the paper should be must more understandable, even for people that are not in the area. I feel there's space to improve the writing.", " Dear Reviewer,\n\nThank you for the initial review. We prepared a detailed respons...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 4 ]
[ "EjxcQ5CcHXr", "EjxcQ5CcHXr", "RZioKYKuSi_", "oM7BRxY53v8", "EjxcQ5CcHXr", "RZioKYKuSi_", "oM7BRxY53v8", "z8-Jrmy8kRY", "1NtWWTUOtRd", "1NtWWTUOtRd", "EjxcQ5CcHXr", "RZioKYKuSi_", "oM7BRxY53v8", "nips_2022_LCOv-GVVDkp", "nips_2022_LCOv-GVVDkp", "nips_2022_LCOv-GVVDkp", "nips_2022_LCO...
nips_2022_AWeZdGJ89lC
QC-StyleGAN - Quality Controllable Image Generation and Manipulation
The introduction of high-quality image generation models, particularly the StyleGAN family, provides a powerful tool to synthesize and manipulate images. However, existing models are built upon high-quality (HQ) data as desired outputs, making them unfit for in-the-wild low-quality (LQ) images, which are common inputs for manipulation. In this work, we bridge this gap by proposing a novel GAN structure that allows for generating images with controllable quality. The network can synthesize various image degradation and restore the sharp image via a quality control code. Our proposed QC-StyleGAN can directly edit LQ images without altering their quality by applying GAN inversion and manipulation techniques. It also provides for free an image restoration solution that can handle various degradations, including noise, blur, compression artifacts, and their mixtures. Finally, we demonstrate numerous other applications such as image degradation synthesis, transfer, and interpolation.
Accept
QC-StyleGAN provides a controllable way of generating images with certain types of corruption. Generating corrupted images has many applications because they are common in real-world situations, so having the ability to generate samples from distributions with controllable corruptions is a powerful idea. The reviewers generally felt the paper should be accepted, although the highest score came from the least confident reviewer. The overall average is still on the accept side, and I feel the paper has enough interesting ideas and potential applications to be useful to the NeurIPS audience, so I recommend acceptance.
train
[ "4SK7_prXcdV", "JOAzDDJlAWQ", "7HKf0Ap5qBM0", "R3EVRe_7PBN", "_eWCUEfNYJ8", "8ZpCOBxDVpg", "tiyL4fZ7VsH", "rEpA__L8ZVg", "vhSws_RICo", "Xcrk7yfDsfh", "zipjT_kWkb", "DwSgC8-ViMF", "Oyh2qh1k8P", "qLHorjbYcj4", "IRYOEH6_YGq", "yR-X3hLd1vP", "5bxLIeNsfq", "46aOl6V1-Yl" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your updated score. We appreciate your valuable comments, which greatly help to improve the quality of our paper.", " Thanks to the authors for their efforts. Given the current revision, I am willing to raise my score to weak accept (6).\n\nBut by the way, I checked the current revision. I think ther...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 4, 3 ]
[ "JOAzDDJlAWQ", "7HKf0Ap5qBM0", "R3EVRe_7PBN", "_eWCUEfNYJ8", "tiyL4fZ7VsH", "IRYOEH6_YGq", "DwSgC8-ViMF", "vhSws_RICo", "Xcrk7yfDsfh", "46aOl6V1-Yl", "5bxLIeNsfq", "Oyh2qh1k8P", "yR-X3hLd1vP", "nips_2022_AWeZdGJ89lC", "nips_2022_AWeZdGJ89lC", "nips_2022_AWeZdGJ89lC", "nips_2022_AWeZd...
nips_2022_8gQEmEgWAkc
VoiceBlock: Privacy through Real-Time Adversarial Attacks with Audio-to-Audio Models
As governments and corporations adopt deep learning systems to collect and analyze user-generated audio data, concerns about security and privacy naturally emerge in areas such as automatic speaker recognition. While audio adversarial examples offer one route to mislead or evade these invasive systems, they are typically crafted through time-intensive offline optimization, limiting their usefulness in streaming contexts. Inspired by architectures for audio-to-audio tasks such as denoising and speech enhancement, we propose a neural network model capable of adversarially modifying a user's audio stream in real-time. Our model learns to apply a time-varying finite impulse response (FIR) filter to outgoing audio, allowing for effective and inconspicuous perturbations on a small fixed delay suitable for streaming tasks. We demonstrate our model is highly effective at de-identifying user speech from speaker recognition and able to transfer to an unseen recognition system. We conduct a perceptual study and find that our method produces perturbations significantly less perceptible than baseline anonymization methods, when controlling for effectiveness. Finally, we provide an implementation of our model capable of running in real-time on a single CPU thread. Audio examples and code can be found at https://interactiveaudiolab.github.io/project/voiceblock.html.
Accept
The paper proposes an adversarial attack strategy, for audio-to-audio modeling, that preserves user’s privacy against speaker recognition models. All reviewers agree this is a relevant topic for NeurIPS, with a strong contribution on voice privacy. In addition, everyone agrees the experimental section is solid (concerns have been addressed during the rebuttal phase).
test
[ "wiPNQUgelfs", "VrvSAqAirX", "nctoPRN48d3", "ImRZ01TVHO2", "8hfJwBWN6x-", "eLEjA_Ou0j", "zuAr_grN5Cn", "rFeqja5Qm2y", "412ChaP6CiW", "ct2ThPVrh9z", "pj-tIjgrK5", "SMet9MEvZ0u", "2o0MOSAXm0W" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I really appreciate the detailed and thoughtful responses to my concerns! I had certainly not considered applications such as voice chat and telehealth, which are both great motivating examples for a system which maintains perceptual identity but impedes surveillance.\n\nThe changes to the Appendix and the availa...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "rFeqja5Qm2y", "ct2ThPVrh9z", "ImRZ01TVHO2", "8hfJwBWN6x-", "eLEjA_Ou0j", "zuAr_grN5Cn", "pj-tIjgrK5", "412ChaP6CiW", "2o0MOSAXm0W", "SMet9MEvZ0u", "nips_2022_8gQEmEgWAkc", "nips_2022_8gQEmEgWAkc", "nips_2022_8gQEmEgWAkc" ]
nips_2022_NSophzmqq8Y
Kernel similarity matching with Hebbian networks
Recent works have derived neural networks with online correlation-based learning rules to perform \textit{kernel similarity matching}. These works applied existing linear similarity matching algorithms to nonlinear features generated with random Fourier methods. In this paper we attempt to perform kernel similarity matching by directly learning the nonlinear features. Our algorithm proceeds by deriving and then minimizing an upper bound for the sum of squared errors between output and input kernel similarities. The construction of our upper bound leads to online correlation-based learning rules which can be implemented with a one-layer recurrent neural network. In addition to generating high-dimensional linearly separable representations, we show that our upper bound naturally yields representations which are sparse and selective for specific input patterns. We compare the approximation quality of our method to the neural random Fourier method and variants of the popular but non-biological ``Nystr{\"o}m'' method for approximating the kernel matrix. Our method appears to be comparable to or better than randomly sampled Nystr{\"o}m methods when the outputs are relatively low-dimensional (although still potentially higher-dimensional than the inputs) but less faithful when the outputs are very high-dimensional.
Accept
This submission is about brain-inspired learning (Hebbian rules) in the context of kernels. Particularly, given points $\{x^t\}$ in $\mathbb{R}^M$ and a (reproducing) kernel $f$, the goal of the authors is to learn in a biologically plausible fashion a representation $\{y^t\}$ in $\mathbb{R}^N$ such that $f(x^s, x^t) \approx \langle y^s, y^t \rangle$, where $\approx$ is meant in the squared-error sense as formulated in (1). They derive the bound (8)-(9) of (1), which is well suited to Hebbian learning, and apply a stochastic gradient ascent-descent optimization. The practicality of the method is illustrated on the half-moons and MNIST benchmarks; it performs comparably or favorably to existing approaches and shows sparsity. Kernel methods are at the forefront of data science. Bringing this field together with online biological updates (Hebbian rules) and extending the neural random Fourier feature (Bahroun et al., 2017) to arbitrary differentiable kernels is a good fit for the focus of NeurIPS, with sufficient novelty, as elaborated by the reviewers.
train
[ "4YLsXWJFhVa", "ED5E2IqxkZl", "uunvwanB6cz", "hMtn70VY75n", "HRrhB_GER34", "MbzJne5lxVW", "OaelI89odMn", "ZlLFSeUzZD", "BfAlxT9JILK", "s8A8Fhjylu0", "94tE5VTUDTJ", "tHMRVnIrluU", "jGGHZRH0X75", "XUkH7nMTsBK" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks!\n\nAs to the motivation, one might imagine this work as trying to identify \"lego blocks\" of neural computation. Can we identify basic building blocks that are biologically plausible, that we can then piece together, perhaps hierarchically, into a useful computational device? We hope that this paper prov...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 2, 3 ]
[ "MbzJne5lxVW", "HRrhB_GER34", "hMtn70VY75n", "s8A8Fhjylu0", "BfAlxT9JILK", "OaelI89odMn", "XUkH7nMTsBK", "jGGHZRH0X75", "tHMRVnIrluU", "94tE5VTUDTJ", "nips_2022_NSophzmqq8Y", "nips_2022_NSophzmqq8Y", "nips_2022_NSophzmqq8Y", "nips_2022_NSophzmqq8Y" ]
nips_2022_G25uStbmC7
OPEN: Orthogonal Propagation with Ego-Network Modeling
To alleviate the unfavorable effect of noisy topology in Graph Neural Networks (GNNs), some efforts perform local topology refinement through pairwise propagation weight learning and multi-channel extension. Unfortunately, most of them suffer from a common and fatal drawback: irrelevant propagation both to one node and across multiple channels. These two kinds of irrelevance leave the multi-channel propagation weights free to be determined by the labeled data, and thus the GNNs are exposed to overfitting. To tackle this issue, a novel Orthogonal Propagation with Ego-Network modeling (OPEN) is proposed by modeling the relevance between propagations. Specifically, the relevance between propagations to one node is modeled by whole ego-network modeling, while the relevance between propagations across channels is modeled via a diversity requirement. By interpreting the propagations to one node from the perspective of dimensionality reduction, propagation weights are inferred from principal components of the ego-network, which are orthogonal to each other. Theoretical analysis and experimental evaluations reveal four attractive characteristics of OPEN: modeling high-order relationships beyond pairwise ones, preventing overfitting, robustness, and high efficiency.
Accept
Reviewers recommended borderline accept, borderline reject, and accept. Reviewers found that the article studies one of the main issues of GNNs and proposes a simple but effective solution method supported by extensive experimental evaluation. There were some reservations about the comparison with other methods and the theoretical analysis. While some of these items could be addressed during the discussion period, leading to updated, more favorable ratings, some reservations about the theoretical part persisted. There were also persisting disagreements about the novelty and about the issues that are solved by the proposed method compared with previous methods. Altogether, I found that the merits outweighed the shortcomings and hence recommend acceptance. However, I strongly encourage the authors to carefully consider the reviewers' comments when preparing the final manuscript, particularly that they work on the discussion and clarification of the novelty of the method, in particular the issues between oversmoothing and overfitting and the corresponding presentation in the work, and the reservations on the theoretical part.
train
[ "hAU_DON-iQ6", "HFkHMUOPnDW", "LqfEH8bzLeM", "l2F5NqsUcrn", "whfI5D67xqw", "JnwLnRZej1G", "c6h7DP_04O3", "WUSyZKaGDWK", "75CiKm-MPsB", "uASakr7i2-R", "G3J05gZm_MQ", "nc7xo7neaF" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to express our sincere appreciation to you for your insightful comments and compliments to our paper.", " Dear Reviewer NBUs \n\nThanks for your additional comments. We would like to express our sincere appreciation to you for your insightful comments and compliments to our paper. We hope that you...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3 ]
[ "l2F5NqsUcrn", "LqfEH8bzLeM", "G3J05gZm_MQ", "whfI5D67xqw", "uASakr7i2-R", "nc7xo7neaF", "G3J05gZm_MQ", "G3J05gZm_MQ", "uASakr7i2-R", "nips_2022_G25uStbmC7", "nips_2022_G25uStbmC7", "nips_2022_G25uStbmC7" ]
nips_2022_8uiblU3fEjE
Embedding game: dimensionality reduction as a two-person zero-sum game
Dimensionality reduction is often formulated as a minimization containing a sparse sum of attractive interactions and a dense sum of repulsive interactions $\sum_{ij} f(\Vert \mathbf{y}_i - \mathbf{y}_j \Vert)$ between embedding vectors. This dense sum is usually subsampled to avoid computing all $N^2$ terms. In this paper we provide a novel approximation to the repulsive sum by deriving a landmark-based lower bound and then maximizing this lower bound with respect to the landmarks. After inserting this approximation into the original objective, we are left with a minimax problem in which the embedding vectors minimize the objective by pulling on their neighbors and running away from the landmarks, while the landmarks maximize the objective by pulling on the embedding vectors and running away from other nearby landmarks. We use gradient descent-ascent to find saddle points and show that our method can produce high-quality visualizations without ever explicitly computing any pairwise repulsion between embedding vectors.
Reject
While the reviewers agreed that the paper addresses an important topic, and combining optimization with game theory is interesting, the reviewers had a number of concerns regarding the validity of the proposed formulation of the problem, lack of theoretical justification for using GDA, strong assumptions, lack of complexity analysis, and limited empirical evaluations. Unfortunately, those concerns were not fully addressed by the author response.
train
[ "xDIr2Xwqmu5", "oKiuB0yFxAo", "F5aQ8SGuWEK", "A3KEpRItXn7", "F5NqhWJMqsb", "A_rRVSEhDF", "gM4t2cAyHu" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the replies to my questions and comments. While some of my concerns were addressed, there remains a few points that I am not entirely comfortable with (e.g. minimizing a lower bound of your objective, no theoretical justification for using GDA nor acknowledgement of its flaws in the paper, etc). Thu...
[ -1, -1, -1, -1, 3, 2, 4 ]
[ -1, -1, -1, -1, 4, 3, 4 ]
[ "A3KEpRItXn7", "gM4t2cAyHu", "A_rRVSEhDF", "F5NqhWJMqsb", "nips_2022_8uiblU3fEjE", "nips_2022_8uiblU3fEjE", "nips_2022_8uiblU3fEjE" ]
nips_2022_VBbxHvbJd94
An efficient graph generative model for navigating ultra-large combinatorial synthesis libraries
Virtual, make-on-demand chemical libraries have transformed early-stage drug discovery by unlocking vast, synthetically accessible regions of chemical space. Recent years have witnessed rapid growth in these libraries from millions to trillions of compounds, hiding undiscovered, potent hits for a variety of therapeutic targets. However, they are quickly approaching a size beyond that which permits explicit enumeration, presenting new challenges for virtual screening. To overcome these challenges, we propose the Combinatorial Synthesis Library Variational Auto-Encoder (CSLVAE). The proposed generative model represents such libraries as a differentiable, hierarchically-organized database. Given a compound from the library, the molecular encoder constructs a query for retrieval, which is utilized by the molecular decoder to reconstruct the compound by first decoding its chemical reaction and subsequently decoding its reactants. Our design minimizes autoregression in the decoder, facilitating the generation of large, valid molecular graphs. Our method performs fast and parallel batch inference for ultra-large synthesis libraries, enabling a number of important applications in early-stage drug discovery. Compounds proposed by our method are guaranteed to be in the library, and thus synthetically and cost-effectively accessible. Importantly, CSLVAE can encode out-of-library compounds and search for in-library analogues. In experiments, we demonstrate the capabilities of the proposed method in the navigation of massive combinatorial synthesis libraries.
Accept
There are three reviews, all of which place this paper above the threshold. All reviewers agreed on several strong points, such as originality, convincing motivation behind the general approach and clarity of presentation. At the same time, the reviewers also raised some critical comments about the practical utility of the model (and a lack of experiments proving this utility). This last point of criticism, however, was addressed reasonably well in the rebuttal. Therefore, I think that the positive aspects dominate, and I recommend to accept this paper.
train
[ "BFA5Q3nHzW", "HAMUyNNBfqV", "8zGxSKVLyRn", "5GwkmfhgP4x", "sgRHfA6Lzjh", "i-s8ES5zskH", "CEeYrB8FZQZ", "5y280IFUvh", "0KPblJOrcZ", "Wu4NfgP8Oyd" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We wanted to make you aware of an additional experiment that we shared in our Response to Reviewer JV4f. ", " Thank you for the positive response.\n\nAs a (final) remark to your comment that the analogues returned by CSLVAE, while perhaps close in Tanimoto similarity, may not preserve some properties of interes...
[ -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "5y280IFUvh", "5GwkmfhgP4x", "sgRHfA6Lzjh", "i-s8ES5zskH", "Wu4NfgP8Oyd", "0KPblJOrcZ", "5y280IFUvh", "nips_2022_VBbxHvbJd94", "nips_2022_VBbxHvbJd94", "nips_2022_VBbxHvbJd94" ]
nips_2022_YUEP3ZmkL1
Your Out-of-Distribution Detection Method is Not Robust!
Out-of-distribution (OOD) detection has recently gained substantial attention due to the importance of identifying out-of-domain samples for reliability and safety. Although OOD detection methods have advanced a great deal, they are still susceptible to adversarial examples, which violates their purpose. To mitigate this issue, several defenses have recently been proposed. Nevertheless, these efforts remain ineffective, as their evaluations are based on either small perturbation sizes or weak attacks. In this work, we re-examine these defenses against an end-to-end PGD attack on in/out data with larger perturbation sizes, e.g., up to the commonly used $\epsilon=8/255$ for the CIFAR-10 dataset. Surprisingly, almost all of these defenses perform worse than random detection under the adversarial setting. Next, we aim to provide a robust OOD detection method. In an ideal defense, the training should expose the model to almost all possible adversarial perturbations, which can be achieved through adversarial training. That is, such training perturbations should be based on both in- and out-of-distribution samples. Therefore, unlike OOD detection in the standard setting, access to OOD, as well as in-distribution, samples appears necessary in the adversarial training setup. These insights lead us to adopt generative OOD detection methods, such as OpenGAN, as a baseline. We subsequently propose the Adversarially Trained Discriminator (ATD), which utilizes a pre-trained robust model to extract robust features, and a generator model to create OOD samples. We note that, for the sake of training stability, the adversarial training of the discriminator should attack real in-distribution samples as well as real outliers, but not generated outliers. Using ATD with CIFAR-10 and CIFAR-100 as the in-distribution data, we significantly outperform all previous methods in robust AUROC while maintaining high standard AUROC and classification accuracy. The code repository is available at https://github.com/rohban-lab/ATD.
Accept
The paper considers OOD detection facing adversarial attacks. It shows that existing defenses are ineffective against end-to-end attacks on both in/out-distribution data with larger perturbations than previous work. With the intuition of training with adversarial examples on both in/out-distribution data, it then proposes a method (ATD) that uses a generator model to create OOD samples. It provides experiments showing the method outperforms previous methods. The paper considers an important topic and makes significant contributions: a new evaluation of existing methods under stronger attacks, a well-designed method, quite thorough experiments, and strong performance of the proposed method. The reviewers had some concerns, but they are largely addressed by the authors' responses. The major ones are: 1. Missing citation of/comparison to some existing work. Though ATOM only considers out-of-distribution robustness, it is suggested that the authors include the comparison in the revision for completeness. In the response, the authors have provided results for the comparison. There are also related certified defenses [Bitterwolf20, Meinke21]. The response also added discussion about these. 2. Details should be included in the appendix, e.g., detailed experimental results, evaluation details like hyperparameters, etc. The revision has added these details. 3. Evaluation on more combinations of the ID/OOD datasets. The authors have also provided more in the response; it is suggested to also include them in the revision. Overall, the authors have made significant improvements addressing the reviewers' concerns and suggestions. The revised paper is of good quality for acceptance.
train
[ "x4l8yu0W7o", "qimPlpyptV", "dIUr8BQFnUF", "4sA8zaKQPmw", "U-Qvl4wYtID", "Zlx63fTljri", "1eG-OhVyRRh", "B_b57fl91EQ", "h-r4DRt73Jm", "KERmoDgNFh", "bvA3MPKFOgH", "jfat5lplidV" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The computational cost for training and evaluation is added in Appendix K.\n\nThe results of OOD detection on CIFAR-100 vs CIFAR-10 are added to Tables 10 and 11, which are also provided below. For the baseline methods which are evaluated with different detection methods such as MSP, MD, RMD, and Openmax, only th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "dIUr8BQFnUF", "dIUr8BQFnUF", "4sA8zaKQPmw", "U-Qvl4wYtID", "bvA3MPKFOgH", "jfat5lplidV", "KERmoDgNFh", "h-r4DRt73Jm", "nips_2022_YUEP3ZmkL1", "nips_2022_YUEP3ZmkL1", "nips_2022_YUEP3ZmkL1", "nips_2022_YUEP3ZmkL1" ]
nips_2022_OrcLKV9sKWp
Pruning’s Effect on Generalization Through the Lens of Training and Regularization
Practitioners frequently observe that pruning improves model generalization. A long-standing hypothesis based on the bias-variance trade-off attributes this generalization improvement to model size reduction. However, recent studies on over-parameterization characterize a new model size regime, in which larger models achieve better generalization. Pruning models in this over-parameterized regime leads to a contradiction -- while theory predicts that reducing model size harms generalization, pruning to a range of sparsities nonetheless improves it. Motivated by this contradiction, we re-examine pruning's effect on generalization empirically. We show that size reduction cannot fully account for the generalization-improving effect of standard pruning algorithms. Instead, we find that pruning leads to better training at specific sparsities, improving the training loss over the dense model. We find that pruning also leads to additional regularization at other sparsities, reducing the accuracy degradation due to noisy examples over the dense model. Pruning extends model training time and reduces model size. These two factors improve training and add regularization, respectively. We empirically demonstrate that both factors are essential to fully explaining pruning's impact on generalization.
Accept
All the reviewers, including the AC, agree that the paper makes a significant contribution to the field that deserves publication at NeurIPS. The AC refers the readers to the reviews for the discussions on the pros and cons of the paper.
test
[ "SzR1VCPBD6T", "HII7OgjualP", "ArL-yNKVIQr", "rwmROioSMg4", "iNv1LdYK0W6", "zA7IXv-RSHK", "mor2-VQ-tqz", "-eoD6-9vbxmb", "xq-z-4iwLm0", "QIiI-ra1AjE", "2OTzoh2fPE8G", "FocAPoY67ny", "A-mUPrlLv7", "NDmyJbCu0kFe", "81733yKgYVcX", "-CSqlBstB72j", "-EZdZ_l1PClI", "AavLGK-UWYT", "n0Oz...
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your feedback! We agree that \"better/improved training\" is strictly better than \"better optimization\" and applied your suggestion in our newly uploaded paper.", " I hope to give you an update on the remaining experiments we promised to run comparing pruning with training with random sparsity.\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "zA7IXv-RSHK", "iNv1LdYK0W6", "mor2-VQ-tqz", "iNv1LdYK0W6", "81733yKgYVcX", "AavLGK-UWYT", "-CSqlBstB72j", "n0OzC4lFlI", "n0OzC4lFlI", "n0OzC4lFlI", "n0OzC4lFlI", "n0OzC4lFlI", "NDmyJbCu0kFe", "n0OzC4lFlI", "T6KwRxvJ9_", "4n8rNe_yTEj", "4n8rNe_yTEj", "n0OzC4lFlI", "nips_2022_OrcL...
nips_2022_xbJAITw9Z6t
Stacked unsupervised learning with a network architecture found by supervised meta-learning
Stacked unsupervised learning (SUL) seems more biologically plausible than backpropagation, because learning is local to each layer. But SUL has fallen far short of backpropagation in practical applications, undermining the idea that SUL can explain how brains learn. Here we show an SUL algorithm that can perform completely unsupervised clustering of MNIST digits with accuracy comparable to that of unsupervised algorithms based on backpropagation. Our algorithm is surpassed only by self-supervised methods that require training-data augmentation by geometric distortions. The only prior knowledge in our unsupervised algorithm is implicit in the network architecture. Multiple convolutional ``energy layers'' contain a sum-of-squares nonlinearity, inspired by ``energy models'' of primary visual cortex. Convolutional kernels are learned with a fast minibatch implementation of the K-Subspaces algorithm. High accuracy requires preprocessing with an initial whitening layer, representations that are less sparse during inference than during learning, and rescaling for gain control. The hyperparameters of the network architecture are found by supervised meta-learning, which optimizes unsupervised clustering accuracy. We regard such dependence of unsupervised learning on prior knowledge implicit in network architecture as biologically plausible, and analogous to the dependence of brain architecture on evolutionary history.
Reject
This paper proposes a specific architecture for performing stacked unsupervised learning, and demonstrates this algorithm on mnist. All reviewer scores are borderline, and one reviewer lowered their score during discussion. No reviewer was willing to champion the paper during discussion. Based upon this, I recommend rejection. I spent a little bit of time looking at the paper myself. One particular concern I have from my own reading is that the proposed algorithm was extensively meta-trained on the target task (MNIST classification), so it wasn't clear to me that this should count as an unsupervised algorithm. (More typically, meta-training would be performed on other tasks, and then the learned algorithm applied to the target task.) (Reviewer x8sP made a similar observation.)
train
[ "mdA6AujnfSw", "SQyzzJ81HyI", "ZsCoWh_ukh4", "G8g9POiMfy", "fwmWyD3dGFe", "eb5f0XCYDam", "ULTEWHgQHBk", "rpZ4GmTCb86", "bIKUsQY6_3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank authors for the detailed responses, which addressed most of my questions. I would like to increase the rate to 5.", " Thanks for the detailed answer. Most of my concerns are addressed. After reading the other reviews and the answers, I confirm my rating.", " *It remains a bit unclear what the authors re...
[ -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "fwmWyD3dGFe", "G8g9POiMfy", "bIKUsQY6_3", "rpZ4GmTCb86", "eb5f0XCYDam", "ULTEWHgQHBk", "nips_2022_xbJAITw9Z6t", "nips_2022_xbJAITw9Z6t", "nips_2022_xbJAITw9Z6t" ]
nips_2022_MOGt8ZizQJL
Quantile Constrained Reinforcement Learning: A Reinforcement Learning Framework Constraining Outage Probability
Constrained reinforcement learning (RL) is an area of RL whose objective is to find an optimal policy that maximizes expected cumulative return while satisfying a given constraint. Most previous constrained RL works consider the expected cumulative sum cost as the constraint. However, optimization with this constraint cannot guarantee a target probability for the outage event in which the cumulative sum cost exceeds a given threshold. This paper proposes a framework, named Quantile Constrained RL (QCRL), to constrain the quantile of the distribution of the cumulative sum cost, which is a necessary and sufficient condition to satisfy the outage constraint. This is the first work that tackles the issue of applying the policy gradient theorem to the quantile and provides theoretical results for approximating the gradient of the quantile. Based on the derived theoretical results and the Lagrange multiplier technique, we construct a constrained RL algorithm named Quantile Constrained Policy Optimization (QCPO). We use distributional RL with the Large Deviation Principle (LDP) to estimate quantiles and the tail probability of the cumulative sum cost for the implementation of QCPO. The implemented algorithm satisfies the outage probability constraint after the training period.
Accept
This paper makes novel contributions to developing policy gradient methods in constrained RL. Unlike previous approaches that focused on the expected cost constraint, this work studies a chance-constrained setting and proposes a Quantile Constrained RL (QCRL) framework based on distributional RL and the Large Deviation Principle (LDP). Experiments show that this newly developed approach outperforms the state of the art both in learning performance and in safety guarantees. Therefore, without major controversies, the review committee reached a consensus to accept this paper for publication at NeurIPS 2022.
train
[ "yayEYshxDRI", "wNAUUb7iCn", "_hz7_mF1ZCs", "o6C_N60o71", "fV_T6zxIQX", "fuUgc749p_M", "r5bNQ14QNTQ", "u_Y1YQ6Ng71", "xKTacIrq-18", "fr_4Zyjzb9", "UObhBAL55d8", "6uXrsFfRkWm", "fuilKLoJzI3", "CLK6ZoNdeI3", "8IEko9UrAX4" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for accepting a reviewer of this paper, and for your valuable comments.", " We are glad that our response helps you to understand this paper. Thanks again for your valuable comments, and we will revise this paper for better readability in the case that the paper is accepted.", " I thank the autho...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 3 ]
[ "_hz7_mF1ZCs", "o6C_N60o71", "r5bNQ14QNTQ", "u_Y1YQ6Ng71", "fuUgc749p_M", "8IEko9UrAX4", "CLK6ZoNdeI3", "xKTacIrq-18", "fuilKLoJzI3", "6uXrsFfRkWm", "nips_2022_MOGt8ZizQJL", "nips_2022_MOGt8ZizQJL", "nips_2022_MOGt8ZizQJL", "nips_2022_MOGt8ZizQJL", "nips_2022_MOGt8ZizQJL" ]
nips_2022_iWg5LjFbeT_
Branch & Learn for Recursively and Iteratively Solvable Problems in Predict+Optimize
This paper proposes Branch & Learn, a framework for Predict+Optimize that tackles optimization problems containing parameters unknown at the time of solving. Given an optimization problem solvable by a recursive algorithm satisfying simple conditions, we show how a corresponding learning algorithm can be constructed directly and methodically from the recursive algorithm. Our framework also applies to iterative algorithms by viewing them as a degenerate form of recursion. Extensive experimentation shows better performance for our proposal over classical and state-of-the-art approaches.
Accept
This paper considers the general setting of the predict+optimize framework, where the "optimize" part of the problem is typically solved via a recursive algorithm. The paper proposes a new exact algorithm to directly optimize the regret in this setting. An extensive evaluation of the new methodology is also provided. This paper represents a significant generalization of existing techniques and is definitely of interest to the NeurIPS community.
train
[ "94asGUHsiKi", "KBOC7yz_9qB", "RZ5LBpnBBvF", "sHuczica-Kj", "Asqxa2kJK-o", "LHzzpPsv-_R", "pmUAJkhFFgH", "ld75iHki04v", "uwoy-WcrF7Q", "nb2WtEE0E75", "h4X0Kk2jb5-", "e508M6v69jN", "_0nY5D94i5Y", "8vsbl5PfdA" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for following up. We have uploaded a revision of the paper, with changes highlighted in blue. The new Section 6 includes a more detailed discussion comparing our work and the work of Guler et al.", " We thank the reviewer for further comments, and are encouraged by the positive response. We have revis...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3, 3 ]
[ "RZ5LBpnBBvF", "sHuczica-Kj", "uwoy-WcrF7Q", "Asqxa2kJK-o", "8vsbl5PfdA", "pmUAJkhFFgH", "_0nY5D94i5Y", "e508M6v69jN", "h4X0Kk2jb5-", "nips_2022_iWg5LjFbeT_", "nips_2022_iWg5LjFbeT_", "nips_2022_iWg5LjFbeT_", "nips_2022_iWg5LjFbeT_", "nips_2022_iWg5LjFbeT_" ]
nips_2022_TIQfmR7IF6H
Minimax Optimal Algorithms for Fixed-Budget Best Arm Identification
We consider the fixed-budget best arm identification problem, where the goal is to find the arm with the largest mean using a fixed number of samples. It is known that the probability of misidentifying the best arm decays exponentially in the number of rounds. However, only limited characterizations of the rate (exponent) of this decay have been established. In this paper, we characterize the minimax optimal rate as a result of an optimization over all possible parameters. We introduce two rates, $R^{\mathrm{go}}$ and $R^{\mathrm{go}}_{\infty}$, corresponding to lower bounds on the probability of misidentification, each of which is associated with a proposed algorithm. The rate $R^{\mathrm{go}}$ is associated with $R^{\mathrm{go}}$-tracking, which can be efficiently implemented by a neural network and is shown to outperform existing algorithms. However, this rate requires a nontrivial condition to be achievable. To address this issue, we introduce the second rate $R^{\mathrm{go}}_\infty$. We show that this rate is indeed achievable by introducing a conceptual algorithm called delayed optimal tracking (DOT).
Accept
This paper had very mixed pre-rebuttal scores, fairly detailed reviews, and significant author/reviewer interaction. Following all of this, the reviewers are now generally positive, with several of the initial concerns being resolved. One of the scores remains below the threshold, due to certain claims and statements being too vague, and insufficient distinction between fixed confidence and fixed budget. However, another reviewer responded by noting that the distinction is generally clear from existing works (e.g., [a]). Since this remaining concern does not appear to be a deal-breaker, I believe that acceptance is the correct decision. However, the authors should very carefully modify the paper according to the reviewer feedback, and be extra careful of unclear statements such as those pointed out by Reviewer BbC9. [a] Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of best-arm identification in multi-armed bandit models. JMLR, 2016.
val
[ "abGk_KhVOl", "9-5vfgRajX", "QArUEancyYc", "rCUQ1Sl6bw", "26ZG59i7dc", "OVxDMhKSwn", "zYdaNpFGVvO", "5B-CRQ-KeIz", "68HTx8UvWmH", "FaMKwbjZdhP", "oriZxW1Z7m4", "ftCAr1rcI8F", "4nxncikpJV4", "8SLYvxz_reE", "L7_vSWaEaYw", "gIJRjimSNjD", "_kYsZyab8M1", "PInJ6EfHEX2", "_8ibmwV5cxx", ...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_re...
[ " Yes, the difference between FB and FC settings is the important motivation of our paper, and we will clarify it more in detail in the revised version. Still, the detailed *computation* appearing in the FC bound would not be so relevant and we will adequately skip it with a proper reference.\n", " > I had said $...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "QArUEancyYc", "rCUQ1Sl6bw", "zYdaNpFGVvO", "L7_vSWaEaYw", "OVxDMhKSwn", "FaMKwbjZdhP", "5B-CRQ-KeIz", "68HTx8UvWmH", "ftCAr1rcI8F", "oriZxW1Z7m4", "gIJRjimSNjD", "8SLYvxz_reE", "7nYcbonv4Wf", "_8ibmwV5cxx", "PInJ6EfHEX2", "_kYsZyab8M1", "nips_2022_TIQfmR7IF6H", "nips_2022_TIQfmR7I...
nips_2022_sFapsu4hYo
Exposing and Exploiting Fine-Grained Block Structures for Fast and Accurate Sparse Training
Sparse training is a popular technique to reduce the overhead of training large models. Although previous work has shown promising results for nonstructured sparse models, it is still unclear whether a sparse model with structural constraints can be trained from scratch to high accuracy. In this work, we study dynamic sparse training for a class of sparse models with shuffled block structures. Compared to nonstructured models, such fine-grained structured models are more hardware-friendly and can effectively accelerate the training process. We propose an algorithm that keeps adapting the sparse model while maintaining the active parameters in shuffled blocks. We conduct experiments on a variety of networks and datasets and obtain positive results. In particular, on ImageNet, we achieve dense accuracy for ResNet50 and ResNet18 at 0.5 sparsity. On CIFAR10/100, we show that dense accuracy can be recovered at 0.6 sparsity for various models. At higher sparsity, our algorithm can still match the accuracy of nonstructured sparse training in most cases, while reducing the training time by up to 5x due to the fine-grained block structures in the models.
Accept
This paper studies dynamic sparse training (DST) with fine-grained structured pruning. On a topic of practical relevance, the authors propose hardware-friendly shuffled blocking and a simultaneous prune-and-grow scheme. While this paper seems to be a mild extension of Chen et al. (2022), the contributions are clear. One main concern raised is whether evaluating the end-to-end speedup with layer-wise CUDA execution time is a correct practice. I agree with the authors: this is common in the ML literature, and a more realistic evaluation requires system-level engineering (often beyond the scope of an algorithm paper). Three out of four reviewers acknowledged they were convinced by the authors' rebuttal. One reviewer mainly questioned one missing reference (while acknowledging it is "distinct") and asked for several clarifications of algorithm steps, to which the authors delivered point-by-point responses. Therefore, given the overall sentiment among reviewers, I recommend acceptance.
test
[ "qPFbgQVWhT", "ZfjcLgu5g3S", "p4X2BH-EdpF", "iAqdwQ93hFZ", "sgApQ8wl3bd", "RtSA3VpUVI", "Tgm6jC7o5M", "Ux4gysSn2jg", "othsOqylPRs", "McDKK3biE-", "Oych9N_yMBE", "d7dlTk891Aq", "gQt8wWAMBmc", "qESfLdbCVDS", "4jM20bJU945", "3PJLisuKux" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We understand your concern about the end-to-end evaluation. We want to point out that our method for evaluating the end-to-end speedup with layer-wise CUDA execution time is commonly used in the literature. According to the source code of [1] and [2] and our conversation with the authors, they evaluate the end-to...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "ZfjcLgu5g3S", "sgApQ8wl3bd", "iAqdwQ93hFZ", "d7dlTk891Aq", "McDKK3biE-", "Oych9N_yMBE", "othsOqylPRs", "nips_2022_sFapsu4hYo", "3PJLisuKux", "4jM20bJU945", "qESfLdbCVDS", "gQt8wWAMBmc", "nips_2022_sFapsu4hYo", "nips_2022_sFapsu4hYo", "nips_2022_sFapsu4hYo", "nips_2022_sFapsu4hYo" ]
nips_2022_A7l8WZIKz3
Model-Based Opponent Modeling
When one agent interacts with a multi-agent environment, it is challenging to deal with various previously unseen opponents. Modeling the behaviors, goals, or beliefs of opponents can help the agent adjust its policy to adapt to different opponents. It is also important to consider opponents who are learning simultaneously or who are capable of reasoning. However, existing work usually tackles only one of the aforementioned types of opponents. In this paper, we propose model-based opponent modeling (MBOM), which employs an environment model to adapt to all kinds of opponents. MBOM simulates the recursive reasoning process in the environment model and imagines a set of improving opponent policies. To effectively and accurately represent the opponent policy, MBOM further mixes the imagined opponent policies according to their similarity to the real behaviors of the opponent. Empirically, we show that MBOM achieves more effective adaptation than existing methods in a variety of tasks, with different types of opponents, i.e., fixed policy, naive learner, and reasoning learner.
Accept
This paper tackles the problem of modeling agents that are simultaneously learning or are able to reason during interaction. The proposed approach employs an environment model to simulate the opponent's reasoning process. The initial reviews are split, and the main concern is that the results do not adequately demonstrate that the proposed approach actually models opponents better. I believe the added ablation study and baselines have adequately addressed the concerns. Two reviewers also support acceptance after discussion (the other two didn't respond). I believe the work tackles an important problem in MARL and would spur valuable discussion at the conference. Thus, I'm leaning towards acceptance.
train
[ "pK2okozC48O", "AiaXWxQW7zg", "Kg6z0nJZgX8k", "3TDtjrUo0mu", "UIIJ3ohxQbn", "gWIzCYACg3", "YLbaMoumN7x", "RSCL3wCH8p", "_cSYEPnM_5A", "nSdF8sVO-nz", "u0i05YAqQEX", "1kjuE56_l2Q", "jv2ceA1SUXU" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their responses to our questions and comments.", " Dear Reviewers,\n\nWe first would like to thank the reviewers' efforts and time in reviewing our work. We were wondering if our responses have resolved your concerns as the author-reviewer discussion is ending soon. We will be happy to h...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "YLbaMoumN7x", "nips_2022_A7l8WZIKz3", "gWIzCYACg3", "jv2ceA1SUXU", "jv2ceA1SUXU", "1kjuE56_l2Q", "u0i05YAqQEX", "nSdF8sVO-nz", "nips_2022_A7l8WZIKz3", "nips_2022_A7l8WZIKz3", "nips_2022_A7l8WZIKz3", "nips_2022_A7l8WZIKz3", "nips_2022_A7l8WZIKz3" ]
nips_2022_AKM3C3tsSx3
A General Framework for Auditing Differentially Private Machine Learning
We present a framework to statistically audit the privacy guarantee conferred by a differentially private machine learner in practice. While previous works have taken steps toward evaluating privacy loss through poisoning attacks or membership inference, they have been tailored to specific models or have demonstrated low statistical power. Our work develops a general methodology to empirically evaluate the privacy of differentially private machine learning implementations, combining improved privacy search and verification methods with a toolkit of influence-based poisoning attacks. We demonstrate significantly improved auditing power over previous approaches on a variety of models including logistic regression, Naive Bayes, and random forest. Our method can be used to detect privacy violations due to implementation errors or misuse. When violations are not present, it can aid in understanding the amount of information that can be leaked from a given dataset, algorithm, and privacy specification.
Accept
The paper received overall positive reviews. I recommend that the authors carefully incorporate all the points raised during the rebuttal period, especially those regarding the discussion on novelty.
train
[ "oVZDAltOiK7", "141-hC6FJPu", "K6A9d4Uh1c", "m_2vO7kcHSz", "cZkdCWs0hmT", "9TTbttBhEAWE", "ljc_oUptwLno", "TNyCogsS1fs", "RbHqEUT-3py", "4Il9GUgSP73" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification to my questions. I am increasing my score.", " Thank you for your suggestions and your interest in the results! \n\nRe: Fig 2, we will put the new attack(s) in the main body, we were just thinking that it would be more convenient at first. \n\nRe: backdoor poisoning, thank you for e...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "9TTbttBhEAWE", "K6A9d4Uh1c", "ljc_oUptwLno", "nips_2022_AKM3C3tsSx3", "RbHqEUT-3py", "TNyCogsS1fs", "4Il9GUgSP73", "nips_2022_AKM3C3tsSx3", "nips_2022_AKM3C3tsSx3", "nips_2022_AKM3C3tsSx3" ]
nips_2022_ObgXE0EMIqH
Non-Linguistic Supervision for Contrastive Learning of Sentence Embeddings
Semantic representation learning for sentences is an important and well-studied problem in NLP. The current trend for this task involves training a Transformer-based sentence encoder through a contrastive objective with text, i.e., clustering sentences with semantically similar meanings and scattering others. In this work, we find the performance of Transformer models as sentence encoders can be improved by training with multi-modal multi-task losses, using unpaired examples from another modality (e.g., sentences and unrelated image/audio data). In particular, besides learning by the contrastive loss on text, our model clusters examples from a non-linguistic domain (e.g., visual/audio) with a similar contrastive loss at the same time. The reliance of our framework on unpaired non-linguistic data makes it language-agnostic, enabling it to be widely applicable beyond English NLP. Experiments on 7 semantic textual similarity benchmarks reveal that models trained with the additional non-linguistic (images/audio) contrastive objective lead to higher quality sentence embeddings. This indicates that Transformer models are able to generalize better by doing a similar task (i.e., clustering) with \textit{unpaired} examples from different modalities in a multi-task fashion. The code is available at https://github.com/yiren-jian/NonLing-CSE.
Accept
This paper improves contrastive learning of sentence embeddings by using unpaired examples from the image or audio modality. Reviewers liked the significance of this work due to its simplicity and general applicability, but some questioned the amount of improvement and advocated for the inclusion of low-resource languages. The authors included Chinese, which is non-European but not low-resource.
train
[ "44faYmgLp0", "Li-e9H6FAN", "KFw5P26UQch", "3hiKd6JSf26", "5MOOcQBV-zJ", "AQIaOl3MAG", "r2ajD93jJm", "uI75bAU8HmT", "FFUy-dP-M3k", "O2QVaFagt9f", "qVx61is4VK", "2hM7541BN2M" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The detailed response from the authors solved most of my concerns. After reading the author's response and other review comments, I decided to keep my original rating as \"Borderline Accept.\"\n\n", " We sincerely thank all the reviewers for their time and their thoughtful comments and questions. We are encoura...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4, 4 ]
[ "3hiKd6JSf26", "nips_2022_ObgXE0EMIqH", "2hM7541BN2M", "qVx61is4VK", "O2QVaFagt9f", "FFUy-dP-M3k", "uI75bAU8HmT", "nips_2022_ObgXE0EMIqH", "nips_2022_ObgXE0EMIqH", "nips_2022_ObgXE0EMIqH", "nips_2022_ObgXE0EMIqH", "nips_2022_ObgXE0EMIqH" ]
nips_2022_u6p_NvZ23qt
A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models
We prove a new generalization bound that shows that, for any class of linear predictors in Gaussian space, the Rademacher complexity of the class and the training error under any continuous loss $\ell$ can control the test error under all Moreau envelopes of the loss $\ell$. We use our finite-sample bound to directly recover the “optimistic rate” of Zhou et al. (2021) for linear regression with the square loss, which is known to be tight for minimal $\ell_2$-norm interpolation, but we also handle more general settings where the label is generated by a potentially misspecified multi-index model. The same argument can analyze noisy interpolation of max-margin classifiers through the squared hinge loss, and establishes consistency results in spiked-covariance settings. More generally, when the loss is only assumed to be Lipschitz, our bound effectively improves Talagrand’s well-known contraction lemma by a factor of two, and we prove uniform convergence of interpolators (Koehler et al. 2021) for all smooth, non-negative losses. Finally, we show that application of our generalization bound using localized Gaussian width will generally be sharp for empirical risk minimizers, establishing a non-asymptotic Moreau envelope theory for generalization that applies outside of proportional scaling regimes, handles model misspecification, and complements existing asymptotic Moreau envelope theories for M-estimation.
Accept
The manuscript proves a new (finite-sample) generalization bound for generalized linear models, using Moreau envelope theory. The paper also provides experimental validation. While the results only hold for Gaussian data, I believe there are some interesting novel results that might inspire future work in the learning theory community. (In comparison, several novel frameworks created in the past for other problems also assumed Gaussianity in their initial versions, e.g., the work on support recovery in sparse linear regression.) Several technical clarifications regarding the assumptions (e.g., surrogate distribution, comparison to prior results) were asked by the reviewers, which the authors thoroughly and successfully addressed during the rebuttal phase. I recommend adding these to the camera-ready version of the paper, as well as other discussions and clarifications raised by all the reviewers.
train
[ "mdTX2P1JRAC", "85BFM_aKR5x", "JfmrKe5JCVR", "_mO12karoRc", "mo-3riLEKWL", "neNzxaMGMR8", "MEqi-Q6dSt0", "601YwgkBONK", "RJ-JQW8gOms", "wx-RfxYMyV", "kl9Fyue4yqg", "vGPVHpp100h", "slaIJFLrsHS" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " >>> the present work would be stronger and more compelling if there was an example where these methods provided a strictly better rate than existing work.\n\n>> Note that the existing asymptotic Moreau envelope framework for M-estimators, as discussed in the beginning of Related Work (see e.g. reference Thrampoul...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "85BFM_aKR5x", "neNzxaMGMR8", "RJ-JQW8gOms", "wx-RfxYMyV", "nips_2022_u6p_NvZ23qt", "MEqi-Q6dSt0", "slaIJFLrsHS", "vGPVHpp100h", "vGPVHpp100h", "kl9Fyue4yqg", "nips_2022_u6p_NvZ23qt", "nips_2022_u6p_NvZ23qt", "nips_2022_u6p_NvZ23qt" ]
nips_2022_Vc4QUfqr4do
ACIL: Analytic Class-Incremental Learning with Absolute Memorization and Privacy Protection
Class-incremental learning (CIL) learns a classification model with training data of different classes arriving progressively. Existing CIL either suffers from serious accuracy loss due to catastrophic forgetting, or invades data privacy by revisiting used exemplars. Inspired by the learning of linear problems, we propose analytic class-incremental learning (ACIL) with absolute memorization of past knowledge while avoiding breaches of data privacy (i.e., without storing historical data). Absolute memorization is demonstrated in the sense that CIL with ACIL, given only present data, produces results identical to those of its joint-learning counterpart, which consumes both present and historical samples. This equality is theoretically validated. Data privacy is ensured by showing that no historical data are involved during the learning process. Empirical validations demonstrate ACIL's competitive accuracy with near-identical results across various incremental task settings (e.g., 5-50 phases). This also allows ACIL to outperform state-of-the-art methods in large-phase scenarios (e.g., 25 and 50 phases).
Accept
The reviewers agree that this is a solid contribution. Please revise the paper according to the reviewers' comments and the discussion.
train
[ "XpOynoN0cbm", "U-q-sBU-Y5e", "XyD2n0sPwrd", "6jx0AGOsVOo", "eYEsvZpzASn", "wkO1hflzQrB", "xsVRAm5_gV4", "5UD1ZcHDrOU", "YK1A_6ik83", "p3p3FU5WIYb" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the response! Your comments are very helpful!", " Thank you very much for responding to my review. Your comment really clarified the primary issues I was not sure about.\nIt is really nice that you managed to run an experiment with K=250 to support your claim for large K values! Thanks f...
[ -1, -1, -1, -1, -1, -1, -1, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "U-q-sBU-Y5e", "wkO1hflzQrB", "6jx0AGOsVOo", "xsVRAm5_gV4", "p3p3FU5WIYb", "5UD1ZcHDrOU", "YK1A_6ik83", "nips_2022_Vc4QUfqr4do", "nips_2022_Vc4QUfqr4do", "nips_2022_Vc4QUfqr4do" ]
nips_2022_f_kvHrM4Q0
Co-Modality Graph Contrastive Learning for Imbalanced Node Classification
Graph contrastive learning (GCL), which leverages graph augmentations to convert graphs into different views and further train graph neural networks (GNNs), has achieved considerable success on graph benchmark datasets. Yet, there are still some gaps in directly applying existing GCL methods to real-world data. First, handcrafted graph augmentations require trial and error, and still cannot yield consistent performance across multiple tasks. Second, most real-world graph data exhibit class-imbalanced distributions, and existing GCL methods are not immune to data imbalance. Therefore, this work proposes to explicitly tackle these challenges via a principled framework called \textit{\textbf{C}o-\textbf{M}odality \textbf{G}raph \textbf{C}ontrastive \textbf{L}earning} (\textbf{CM-GCL}), which automatically generates contrastive pairs and further learns balanced representations over unlabeled data. Specifically, we design inter-modality GCL to automatically generate contrastive pairs (e.g., node-text) based on rich node content. Inspired by the fact that minority samples can be ``forgotten'' by pruning deep neural networks, we naturally extend network pruning to our GCL framework for mining minority nodes. Based on this, we co-train two pruned encoders (e.g., a GNN and a text encoder) in different modalities by pushing the corresponding node-text pairs together and pushing irrelevant node-text pairs away. Meanwhile, we propose intra-modality GCL, co-training a non-pruned GNN and a pruned GNN to ensure that node embeddings with similar attribute features stay close. Finally, we fine-tune the GNN encoder on downstream class-imbalanced node classification tasks. Extensive experiments demonstrate that our model significantly outperforms state-of-the-art baseline models and learns more balanced representations on real-world graphs.
Accept
All the reviewers ultimately gave prone-to-accept scores of 5/5/8 to this paper after the authors' informative responses and the addition of experiments on two more datasets. I agree that the paper is, in general, novel and the approach reasonably designed. I suggest accepting this paper, though it can still be improved in many aspects, especially clarity, as pointed out by the reviewers.
train
[ "i6s94BD6V8X", "rNwhLhgNqzl", "84ezkgrFm-", "jaYA_lAQLm8", "F9PtZ4ORJhF", "HnMYagzSoS", "Mj7_VG5vKSk", "Roen21bUSXU", "N-XSibKGPZu", "TyMc21WHo-", "eY1lKXY4l86", "lwEb_jwYZl1", "b6KxuzUzLNe", "0iqz5TCJmwg", "qPguShJs3w7", "nzXpD8U36U", "bT1I6ESRWJ", "3JMS_FTls9" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for increasing your rating. We try to address your concerns during the rebuttal and discussion period. To further make CMI-GCL more convincing, except for the AMiner dataset, we apply CMI-GCL to another benchmark dataset with raw text: YelpCHI [1], to detect spam reviews on Yelp (binary classi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "F9PtZ4ORJhF", "jaYA_lAQLm8", "jaYA_lAQLm8", "0iqz5TCJmwg", "b6KxuzUzLNe", "3JMS_FTls9", "Roen21bUSXU", "TyMc21WHo-", "eY1lKXY4l86", "nzXpD8U36U", "nips_2022_f_kvHrM4Q0", "3JMS_FTls9", "3JMS_FTls9", "bT1I6ESRWJ", "bT1I6ESRWJ", "nips_2022_f_kvHrM4Q0", "nips_2022_f_kvHrM4Q0", "nips_2...
nips_2022_QMrs1nggaL
Faster and Scalable Algorithms for Densest Subgraph and Decomposition
We study the densest subgraph problem (DSG) and the densest subgraph local decomposition problem (DSG-LD) in undirected graphs. We also consider supermodular generalizations of these problems. For large-scale graphs, simple iterative algorithms perform much better in practice than theoretically fast algorithms based on network-flow or LP solvers. Boob et al. [1] recently gave a fast iterative algorithm called Greedy++ for DSG. It was shown in [2] that it converges to a $(1-\epsilon)$ relative approximation to the optimum density in $O(\frac{1}{\epsilon^2} \frac{\Delta(G)}{\lambda^*})$ iterations, where $\Delta(G)$ is the maximum degree and $\lambda^*$ is the optimum density. Danisch et al. [3] gave an iterative algorithm based on the Frank-Wolfe algorithm for DSG-LD that takes $O(\frac{m\Delta(G)}{\epsilon^2})$ iterations to converge to an $\epsilon$-additive approximate local decomposition vector $\hat{b}$, where $m$ is the number of edges in the graph. In this paper we give a new iterative algorithm for both problems that takes at most $O(\frac{\sqrt{m\Delta(G)}}{\epsilon})$ iterations to converge to an $\epsilon$-additive approximate local decomposition vector; each iteration can be implemented in $O(m)$ time. We describe a fractional peeling technique which has strong empirical performance as well as theoretical guarantees. The algorithm is scalable and simple, and can be applied to graphs with hundreds of millions of edges. We test our algorithm on real and synthetic data sets and show that it provides a significant benefit over previous algorithms. The algorithm and analysis extend to hypergraphs.
Accept
Overall, the reviewers found that the paper tackles an important problem and that the level of technical novelty is high. The experiments were also good. Clear accept.
train
[ "sWx5Lpxcyz1", "L8OahNaes3W", "LJvyPA9V0Cq", "phObYwinO5", "y7cVN2ZL0P", "g0UlvFH_d88", "tI-MR4Mql53", "Y9SHbZA7qFn", "BmsAmodZzke", "H-8XzE2dV5u", "MbWJTEfv3rx", "xo3b5kww74s", "A7ABGA3fzW2" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. My question is well addressed.", " We thank the reviewer for taking the time to read the clarification and reconsidering their score. ", " Thanks for the clarification. I would be happy to revise my score to 7.", " We thank the reviewers for their reviews, useful comments, and posit...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2, 4 ]
[ "g0UlvFH_d88", "LJvyPA9V0Cq", "tI-MR4Mql53", "nips_2022_QMrs1nggaL", "A7ABGA3fzW2", "xo3b5kww74s", "Y9SHbZA7qFn", "MbWJTEfv3rx", "H-8XzE2dV5u", "nips_2022_QMrs1nggaL", "nips_2022_QMrs1nggaL", "nips_2022_QMrs1nggaL", "nips_2022_QMrs1nggaL" ]
nips_2022_SCD0hn3kMHw
In What Ways Are Deep Neural Networks Invariant and How Should We Measure This?
It is often said that a deep learning model is ``invariant'' to some specific type of transformation. However, what is meant by this statement strongly depends on the context in which it is made. In this paper we explore the nature of invariance and equivariance of deep learning models with the goal of better understanding the ways that they actually capture these concepts on a formal level. We introduce a family of invariance and equivariance metrics that allow us to quantify these properties in a way that disentangles them from other metrics such as loss or accuracy. We use our metrics to better understand the two most popular methods used to build invariance into networks, data augmentation and equivariant layers. We draw a range of conclusions about invariance and equivariance in deep learning models, ranging from whether initializing a model with pretrained weights has an effect on a trained model's invariance, to the extent to which invariance learned via training can generalize to out-of-distribution data.
Accept
The paper proposes metrics to empirically explore the nature of invariance and equivariance of deep learning models, with the goal of better understanding the ways that they actually capture these concepts on a formal level. The authors utilize their proposed metrics to shed light on the two most popular methods used to build invariance into networks: data augmentation and equivariant layers. The reviewers agree on the significance of the contribution and the quality of the presentation. Some questions raised by the reviewers required clarification, and the authors did a great job addressing these, leading to improvements in the paper and, as a result, in the reviewers' opinions of the paper.
train
[ "bADx2fTFxFV", "r3N0mKBCdMY-", "Ct-NGkZxo_x", "Fr_05zFWdYx", "3Q0r6_9dCvO", "rn5CKhmXNo", "bAUdRHBId34", "qY3ZE5PzOFqY", "lOPFYaj_uO", "M6-AySzy5wn", "6J0tkEtVlUp", "cm7HzBu7dJ", "lyF-7wgmVm-", "iszFPpSTyZv", "_obrShA8pQU", "4mB1SwgzvUm", "2k4hHLbNoz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'm happy with the authors response and I updated my score. ", " I appreciate the authors' hard work in their response to the review. I will raise my score accordingly.", " I would like to congratulate the authors for an excellent rebuttal, to my initial comments and to other reviewers. I appreciate the amoun...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "lOPFYaj_uO", "3Q0r6_9dCvO", "6J0tkEtVlUp", "bAUdRHBId34", "rn5CKhmXNo", "2k4hHLbNoz", "qY3ZE5PzOFqY", "4mB1SwgzvUm", "M6-AySzy5wn", "_obrShA8pQU", "cm7HzBu7dJ", "iszFPpSTyZv", "nips_2022_SCD0hn3kMHw", "nips_2022_SCD0hn3kMHw", "nips_2022_SCD0hn3kMHw", "nips_2022_SCD0hn3kMHw", "nips_2...
nips_2022_u6OfmaGIya1
Second Thoughts are Best: Learning to Re-Align With Human Values from Text Edits
We present Second Thoughts, a new learning paradigm that enables language models (LMs) to re-align with human values. By modeling the chain-of-edits between value-unaligned and value-aligned text, with LM fine-tuning and additional refinement through reinforcement learning, Second Thoughts not only achieves superior performance on three value-alignment benchmark datasets but also shows strong human-value transfer-learning ability in few-shot scenarios. The generated editing steps also offer better interpretability and ease of interactive error correction. Extensive human evaluations further confirm its effectiveness.
Accept
Although this paper received reviews and ratings that lean positive, the reviewer discussion and ethics review highlighted weaknesses that make it quite borderline. Strengths: 1. The goal and task of the paper are quite well motivated, and they are geared towards positive societal impact: refining generated text to be more aligned with “human values” (e.g., gearing text towards “moral actions”). 2. The text-editing approach of the paper using adversarial imitation learning is both novel and intuitive according to the reviewer. 3. The authors give a convincing justification for their text-edit paradigm, as prior attribute-control generation methods (e.g., PPLM) tend to struggle when the context is polluted. Experimental results appear to support their claim. 4. The chain-of-edit paradigm of the paper (showing, e.g., how the text morphs into a more moral one) eases error diagnosis and enables interactive correction. Weaknesses: 1. The questions asked of AMT workers in the human evaluation do not seem to be well formulated (i.e., “To what extent does the edited response improve the original response in terms of alignment with human values?”). First, the term “human value” is very generic, even if the specific human value gets defined later (e.g., “deontology”). The AMT question reads more like one that would be given to an expert rater, and would need to be either written in plain English or given to trained judges. Second, that the judges need to identify *improvement* in alignment with human values could be a source of confusion, as the revised response could align well but no better than the original response (in which case the improvement is non-existent). 2. The ethics reviewers made comments that have a bearing on the technical merits of the paper, as they both pointed out that the authors’ modeling assumption that human values are static and consistent across contexts is probably too simplistic. As ethics reviewer Udnx noted, “views and norms of the world are varied”, and may depend on complex contexts (e.g., one may need a lot of background information about a given situation to know which action or statement is more moral). What seems concerning in the paper is that the “context” seems to always be reduced to one sentence, and it seems doubtful this provides enough information to make value judgments in many real-world situations. In sum, the paper makes valuable contributions, but there may be biases in the results (Weakness 1) and the practical utility may be somewhat limited (Weakness 2). We recommend that the authors address the AMT-related concern and discuss their human-value assumptions more extensively in light of the ethics reviews. Regarding ethical concerns: we also highly recommend that the authors follow the suggestions proposed by the ethics reviewers, e.g., include more information about the representativeness of the participants.
val
[ "bfo5PfB-pkM", "5ggJp-c-jKz", "zYAeaoXbLKM", "hnwoBVQsK1s", "6uhZ4AUU-K6m", "dlh0-CbFtIz3", "rxroEk3fZcs", "Y51DJuLRgp_", "CTP4Sj3z6eW", "guzJb2fhqFa", "5VypVZhJFNE", "ht2_5XzDJ64" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the further response and raising our score! We are happy our revisions and response have addressed your concerns. We will definitely add the corresponding content into our final version. We believe the paper has been made much stronger thanks to your suggestions, and we are grateful for your time!",...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "CTP4Sj3z6eW", "nips_2022_u6OfmaGIya1", "nips_2022_u6OfmaGIya1", "CTP4Sj3z6eW", "nips_2022_u6OfmaGIya1", "ht2_5XzDJ64", "5VypVZhJFNE", "guzJb2fhqFa", "nips_2022_u6OfmaGIya1", "nips_2022_u6OfmaGIya1", "nips_2022_u6OfmaGIya1", "nips_2022_u6OfmaGIya1" ]
nips_2022_PRd7VG_ki_
FourierFormer: Transformer Meets Generalized Fourier Integral Theorem
Multi-head attention empowers the recent success of transformers, the state-of-the-art models that have achieved remarkable success in sequence modeling and beyond. These attention mechanisms compute the pairwise dot products between the queries and keys, which results from the use of unnormalized Gaussian kernels with the assumption that the queries follow a mixture of Gaussian distributions. There is no guarantee that this assumption is valid in practice. In response, we first interpret attention in transformers as a nonparametric kernel regression. We then propose the FourierFormer, a new class of transformers in which the dot-product kernels are replaced by the novel generalized Fourier integral kernels. Different from the dot-product kernels, where we need to choose a good covariance matrix to capture the dependency of the features of data, the generalized Fourier integral kernels can automatically capture such dependency and remove the need to tune the covariance matrix. We theoretically prove that our proposed Fourier integral kernels can efficiently approximate any key and query distributions. Compared to the conventional transformers with dot-product attention, FourierFormers attain better accuracy and reduce the redundancy between attention heads. We empirically corroborate the advantages of FourierFormers over the baseline transformers in a variety of practical applications including language modeling and image classification.
Accept
Overall, the reviews about this paper are very positive. The authors spent great effort engaging in discussions and improving the paper with clarifications and additional experiments. We recommend accepting the paper.
train
[ "S_LVmTG3oGP", "goRzBfcY83D", "thxHp6YFsWf", "T6bBmNHI6ov", "NeAEQAQrKOv", "_mW0mNByuYR", "44pWB822a1", "4rXhPo_akFE", "9AUMlI2nVa2", "95XHB8Uyjp3", "rToAyNOcBzD", "Yym6JP_X390", "Gl6RxUl98_W", "OH3Wmg05BvEU", "d7fJJucbkIL", "9nfg4KJj-tX", "QxyYZgczfpO", "h3Ktj_u30UC", "J0u6f9ZqD...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "offici...
[ " Thanks for your response and we appreciate your endorsement.", " Sincerely thank authors for the detailed responses. Most of my concerns have been addressed and I appreciate the authors' efforts to add background knowledge, additional experimental results, and runtime/memory evaluation. I have updated my ratin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "goRzBfcY83D", "9AUMlI2nVa2", "T6bBmNHI6ov", "95XHB8Uyjp3", "OH3Wmg05BvEU", "fIamZIbbP5G", "nips_2022_PRd7VG_ki_", "Yym6JP_X390", "d7fJJucbkIL", "h3Ktj_u30UC", "nips_2022_PRd7VG_ki_", "Gl6RxUl98_W", "nips_2022_PRd7VG_ki_", "3uW8ovaX2zR", "9nfg4KJj-tX", "QxyYZgczfpO", "IyfSls5q_P_", ...
nips_2022_CwQCeJnteii
Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal
Vision transformers (ViTs) have demonstrated impressive performance and stronger adversarial robustness compared to Convolutional Neural Networks (CNNs). On the one hand, ViTs' focus on global interaction between individual patches reduces the local noise sensitivity of images. On the other hand, the neglect of noise sensitivity differences between image regions by existing decision-based attacks further compromises the efficiency of noise compression, especially for ViTs. Therefore, validating the black-box adversarial robustness of ViTs when the target model can only be queried still remains a challenging problem. In this paper, we theoretically analyze the limitations of existing decision-based attacks from the perspective of noise sensitivity difference between regions of the image, and propose a new decision-based black-box attack against ViTs, termed Patch-wise Adversarial Removal (PAR). PAR divides images into patches through a coarse-to-fine search process and compresses the noise on each patch separately. PAR records the noise magnitude and noise sensitivity of each patch and selects the patch with the highest query value for noise compression. In addition, PAR can be used as a noise initialization method for other decision-based attacks to improve the noise compression efficiency on both ViTs and CNNs without introducing additional calculations. Extensive experiments on three datasets demonstrate that PAR achieves a much lower noise magnitude with the same number of queries.
Accept
The paper proposes a new decision-based black-box attack approach for ViTs. The reviewers appreciate the novelty, extensive experiments, and clear writing, and unanimously vote for acceptance. The authors' responses helped clarify reviewer concerns, and new results as well as additional analysis were presented in the rebuttal. ACs suggest accepting the paper and request that the authors include the suggested changes in the final version.
test
[ "eKiimilrJw5", "syUrrkO8Xt", "IWrgMgAstmQ", "wf7A4scOT3W", "TWup09cSaXO", "dvI4GqFHxF4", "mFqRJxDrFrT", "tb77CKzC_Q7", "DoCPBHdg_B-", "86BRbyGdRD0" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are very grateful for your efforts in re-considering our work and our rebuttal. We would keep improving our manuscript to involve your insights.", " Thanks for providing the detailed explanations. It is good to see \"the absolute noise compression value of PAR on the ViT is significantly higher than that of ...
[ -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2, 3 ]
[ "syUrrkO8Xt", "dvI4GqFHxF4", "86BRbyGdRD0", "tb77CKzC_Q7", "DoCPBHdg_B-", "mFqRJxDrFrT", "nips_2022_CwQCeJnteii", "nips_2022_CwQCeJnteii", "nips_2022_CwQCeJnteii", "nips_2022_CwQCeJnteii" ]
nips_2022_oWqWiazEb62
Calibrated Data-Dependent Constraints with Exact Satisfaction Guarantees
We consider the task of training machine learning models with data-dependent constraints. Such constraints often arise as empirical versions of expected value constraints that enforce fairness or stability goals. We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected value counterparts are satisfied with a user-prescribed probability. The resulting optimization problem is amenable to standard stochastic optimization algorithms, and we demonstrate the efficacy of our method on a fairness-sensitive classification task where we wish to guarantee the classifier's fairness (at test time).
Accept
The reviewers unanimously think the paper is worth publishing. I agree with them. The authors did a good job addressing reviewer concerns in the rebuttal as well. This paper should be accepted.
train
[ "in0XKvXOJjT", "4SKBlaLGCdg", "5wkaPcAvTuy", "TSzrN4Bvuur", "AXoWRmB5vIe", "rG5HaeVjtnN", "klv6iolLB2W", "lNNDIYpRkXU", "_k01LEHn5T", "KBRTSU_xTnn" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the rebuttal addressing my concerns. After reading I would like to keep my initial rating of accept. ", " Thanks for the additional details. The rebuttal addressed my concerns and, as such, I increased my score.", " Thanks for the response, nice paper!", " We thank the ...
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "TSzrN4Bvuur", "rG5HaeVjtnN", "AXoWRmB5vIe", "KBRTSU_xTnn", "_k01LEHn5T", "klv6iolLB2W", "lNNDIYpRkXU", "nips_2022_oWqWiazEb62", "nips_2022_oWqWiazEb62", "nips_2022_oWqWiazEb62" ]
nips_2022_msFfpucKMf
Robust Option Learning for Adversarial Generalization
Compositional reinforcement learning is a promising approach for training policies to perform complex long-horizon tasks. Typically, a high-level task is decomposed into a sequence of subtasks and a separate policy is trained to perform each subtask. In this paper, we focus on the problem of training subtask policies in a way that they can be used to perform any task; here, a task is given by a sequence of subtasks. We aim to maximize the worst-case performance over all tasks as opposed to the average-case performance. We formulate the problem as a two agent zero-sum game in which the adversary picks the sequence of subtasks. We propose two RL algorithms to solve this game: one is an adaptation of existing multi-agent RL algorithms to our setting and the other is an asynchronous version which enables parallel training of subtask policies. We evaluate our approach on two multi-task environments with continuous states and actions and demonstrate that our algorithms outperform state-of-the-art baselines.
Reject
This paper proposes a robust option learning algorithm that learns subtask policies to maximise the worst-case performance. The results on a 2D navigation environment and a simulated car racing environment show that the proposed algorithm achieves a better and more robust performance compared to alternative option learning approaches. Although the reviewers found the idea interesting, all of them ended up sharing several major concerns during the discussion period. First, the problem setting is not fully justified. Specifically, the assumption about the "jump transition" was not fully justified, and the problem reduces to an existing line of work on skill chaining without such an assumption. Besides, the empirical results are not convincing enough. The subtasks in the environments were so simple that it is unclear how much benefit we can get from the proposed adversarial training. In fact, the performance degradation from the random adversary to the MCTS adversary was indeed quite marginal, which also raises the same question. A good justification of the problem setting and evaluation on more complex environments would strengthen the paper. Thus, I recommend rejecting the paper.
train
[ "7ps7PXr5dp", "FkWrnYisDL", "L7oCu7Lb_G3", "tVWutrcX1lew", "7_P4JmD3G0C", "YeAFTUj_GIs", "AyBOdeRoVZ", "8-O6AvkG-BQ", "gIOEmahhYES", "czZx1z9yt0u" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. In general, ROSAC can be applied in the setting of skill chaining as well which would enable better generalization to multiple sequences of skills. Therefore, our contribution is useful in that setting as well since our algorithm tries to optimize the worst-case sequence of subtasks (as o...
[ -1, -1, -1, -1, -1, -1, -1, 6, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 3, 3 ]
[ "L7oCu7Lb_G3", "tVWutrcX1lew", "7_P4JmD3G0C", "YeAFTUj_GIs", "czZx1z9yt0u", "gIOEmahhYES", "8-O6AvkG-BQ", "nips_2022_msFfpucKMf", "nips_2022_msFfpucKMf", "nips_2022_msFfpucKMf" ]
nips_2022_h8Bd7Gm3muB
Efficient Dataset Distillation using Random Feature Approximation
Dataset distillation compresses large datasets into smaller synthetic coresets which retain performance with the aim of reducing the storage and computational burden of processing the entire dataset. Today's best performing algorithm, \textit{Kernel Inducing Points} (KIP), which makes use of the correspondence between infinite-width neural networks and kernel-ridge regression, is prohibitively slow due to the exact computation of the neural tangent kernel matrix, scaling $O(|S|^2)$, with $|S|$ being the coreset size. To improve this, we propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel which reduces the kernel matrix computation to $O(|S|)$. Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU. Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets, both in kernel regression and finite-width network training. We demonstrate the effectiveness of our approach on tasks involving model interpretability and privacy preservation.
Accept
The paper provides a novel and practical algorithm for dataset distillation. The paper is clearly written and the reviewers felt that this is a nice contribution to the field. The results demonstrate a clear improvement over the state of the art and the experiments are sound. The reviewers raised concerns about the limited novelty of the paper but, after a fruitful discussion, agreed that the paper should be accepted.
test
[ "ZgpH3_zrWNK", "bHi6XfxZvU", "Z0H0-KWvhv", "WkNJlbyk-Xo", "OEijlm0DovF", "yQBkIis4o8D", "1XljF_K0fTS", "uIpHuuXVr9L", "nvdJ8SZeSBh", "42ZQCjtireY" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for engaging with us during the discussion period and for your constructive follow-up questions and valuable comments. \n\n**Time Complexity** In the following, we will use |B|, the training batch size, in this case as opposed to |T|, the training set size, to avoid conflating the two. Yes, ty...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "Z0H0-KWvhv", "WkNJlbyk-Xo", "1XljF_K0fTS", "yQBkIis4o8D", "42ZQCjtireY", "nvdJ8SZeSBh", "uIpHuuXVr9L", "nips_2022_h8Bd7Gm3muB", "nips_2022_h8Bd7Gm3muB", "nips_2022_h8Bd7Gm3muB" ]
nips_2022_8qugS9JqAxD
On the Symmetries of Deep Learning Models and their Internal Representations
Symmetry has been a fundamental tool in the exploration of a broad range of complex systems. In machine learning, symmetry has been explored in both models and data. In this paper we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family’s internal representation of data. We do this by calculating a set of fundamental symmetry groups, which we call the intertwiner groups of the model. Each of these arises from a particular nonlinear layer of the model and different nonlinearities result in different symmetry groups. These groups change the weights of a model in such a way that the underlying function that the model represents remains constant but the internal representations of data inside the model may change. We connect intertwiner groups to a model’s internal representations of data through a range of experiments that probe similarities between hidden states across models with the same architecture. Our work suggests that the symmetries of a network are propagated into the symmetries in that network’s representation of data, providing us with a better understanding of how architecture affects the learning and prediction process. Finally, we speculate that for ReLU networks, the intertwiner groups may provide a justification for the common practice of concentrating model interpretability exploration on the activation basis in hidden layers rather than arbitrary linear combinations thereof.
Accept
**Summary**: This paper studies symmetries of the space of neural network parameters, i.e. invertible transformations of the parameters which leave the forward function invariant. The authors compute this set of symmetries, called the intertwiner group, for networks with different activation functions. These symmetries have been defined and studied before (as noted by the authors). The primary focus of this work is on the impact of these symmetries on learned hidden representations in the network. The authors investigate to what extent networks trained from different random initializations effectively learn the same hidden features, and what amount of variation is due to parameter-space symmetry. Through splicing experiments (Section 4), they show that representations from one network can be spliced into another using the intertwiner group, which results in only a small drop in performance. In Section 5, the authors use two novel metrics to compute how close learned hidden representations are to being related by a single intertwiner for the whole dataset. Section 6 shows, using network dissection, that the significance of neuron activations (versus linear combinations of activations) for interpretability depends on the choice of activation function. **Strengths**: Reviewers [Eay8] and [kCuW] commented that using the intertwiner group to explain weight-space symmetries of neural networks is an interesting and promising approach. Moreover, the paper is well written and well-organized (if difficult to understand). The intertwiner group and its properties are well-presented. Reviewers [kCuW] and [cBdu] further note that the paper presents a good combination of empirical and theoretical results. Discussion of conditions under which Theorem 3.3 does not hold is reported. Reviewer [kCuW] comments that focusing on ReLU is helpful, since this is a commonly used activation function. Reviewer [cBdu] comments that while the authors are not the first to point out the existence of these symmetries, their treatment is rigorous, more general than in related work, and easy to follow. The reviewer indicates that a contribution relative to related work is that the authors connect weight-space symmetries to the question of variability of learned representations. The authors achieve good results in splicing experiments with G_{relu}, which in principle require combinatorial optimization over a permutation group, by using a convex relaxation based on doubly-stochastic matrices. Moreover, the appendix and its proofs appear quite thorough. **Weaknesses**: Reviewer [Eay8] found the paper difficult to read. An example is that early figures cannot be understood until reading the section about experiments. The reviewer also found that it was not sufficiently clear how novel or significant the use of the intertwiner group is relative to previous studies based on permutation and scaling groups. Other reviewers also had comments on the experiments. Reviewer [kCuW] notes that more reruns for the experiments might be helpful. Reviewer [cBdU] comments that the splicing experiments could be more exhaustive. The reviewer also finds that the interpretability argument in Section 6 is not that strong, and perhaps beside the point of the paper. Finally, results in Section 5 are hard to interpret without baseline numbers. **Author Reviewer Discussion**: In response to reviewer [kCuW], the authors performed additional runs (now 32 in total).
In response to reviewer [cBdU], they added pairwise-comparison stitching experiments and answered the reviewer's questions. The reviewer notes that they appreciate the responses. To address the concerns of reviewer [Eay8], the authors improved the captions of the early figures. They also clarified that in their opinion, the main significance of this paper is not the use of the intertwiner group per se (though this does provide a uniform perspective), but that considerations related to intertwiners motivate the experiments in Sections 4-6, which lead to interesting connections with existing representation learning and interpretability research. Reviewer [Eay8] raised their score 5->6 after the discussion. **Reviewer AC Discussion**: Reviewer [cBdU] reiterates that they are happy with the paper and the author responses and feels this paper can appear. Other reviewers did not engage, which the AC interprets as a signal that they do not object to the paper appearing. **Overall Recommendation**: The AC is satisfied with the level of engagement from both reviewers and authors during the review process. While there is no strong champion for this paper, there is consensus that this paper is above the threshold for acceptance. On this ground the AC considers this a relatively clear accept.
test
[ "KJLP8vR74nH", "qARNjMWVKUB", "n7y_O2SpGQH", "rir_NyxMP6N", "51qLJS8e-92", "C13M040TMGv", "87lmyy0z6yB", "MNG8LPeRiiJ", "bliQtuEWd3Q", "1CvlJfYIN9j" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors’ thorough response to my questions. Many of the concerns and misunderstood points raised in my review are addressed. Now I think the paper’s contribution outweighs my initial concerns. I raised the review score accordingly. ", " Thanks very much for your thorough read and valuable feedb...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "87lmyy0z6yB", "1CvlJfYIN9j", "nips_2022_8qugS9JqAxD", "1CvlJfYIN9j", "1CvlJfYIN9j", "bliQtuEWd3Q", "MNG8LPeRiiJ", "nips_2022_8qugS9JqAxD", "nips_2022_8qugS9JqAxD", "nips_2022_8qugS9JqAxD" ]
nips_2022_CwG-o0ind6t
Non-identifiability and the Blessings of Misspecification in Models of Molecular Fitness
Understanding the consequences of mutation for molecular fitness and function is a fundamental problem in biology. Recently, generative probabilistic models have emerged as a powerful tool for estimating fitness from evolutionary sequence data, with accuracy sufficient to predict both laboratory measurements of function and disease risk in humans, and to design novel functional proteins. Existing techniques rest on an assumed relationship between density estimation and fitness estimation, a relationship that we interrogate in this article. We prove that fitness is not identifiable from observational sequence data alone, placing fundamental limits on our ability to disentangle fitness landscapes from phylogenetic history. We show on real datasets that perfect density estimation in the limit of infinite data would, with high confidence, result in poor fitness estimation; current models perform accurate fitness estimation because of, not despite, misspecification. Our results challenge the conventional wisdom that bigger models trained on bigger datasets will inevitably lead to better fitness estimation, and suggest novel estimation strategies going forward.
Accept
This paper argues that molecular fitness is not identifiable from observational sequence data alone, challenging the conventional wisdom that bigger models trained on bigger datasets will inevitably lead to better fitness estimation. The reviewers found strengths in the paper because "estimating fitness from evolutionary data is of great interest in computational biology", and because the authors "show under relatively loose conditions (Thm 4.1) why a mis-specified model may be beneficial for inferring fitness (hypothesis 2) and then go on to show under simulations how to test for hypothesis 2". All reviewers highlighted the importance of the problem and the theoretical and empirical contributions. There was a minor concern about whether this is the best venue for the work and how these results might translate across domains. There is interest across domains in the alignment between the true target of inference (here molecular fitness) and the upstream analysis targets (here density estimation). Overall, the reviewers' evaluations were highly favorable and they considered this a valuable contribution to the field.
train
[ "XfOFoP4JCKg", "PbKFQ78pI6E", "HeIq-Rh92LH", "nxDgOK7-qdt", "8ZmQ03ZAQqZ", "5osfNa5YjFj", "jxHUImnXZv_", "NULUn38iALR", "UZ_jgxqrM20", "rjEclpQbk-U" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I don't have further questions and willing to increase my score to 7 based on the feedback and the planned edits to the manuscript.", " \"Similarly another assumption seems to be the existence of a stationary p\\infinity distribution that in the long run converges to an energy landscape...
[ -1, -1, -1, -1, -1, -1, 7, 9, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 1, 4, 4, 4 ]
[ "PbKFQ78pI6E", "rjEclpQbk-U", "rjEclpQbk-U", "UZ_jgxqrM20", "NULUn38iALR", "jxHUImnXZv_", "nips_2022_CwG-o0ind6t", "nips_2022_CwG-o0ind6t", "nips_2022_CwG-o0ind6t", "nips_2022_CwG-o0ind6t" ]
nips_2022_A7O7Fl5Qo9W
On the Sample Complexity of Stabilizing LTI Systems on a Single Trajectory
Stabilizing an unknown dynamical system is one of the central problems in control theory. In this paper, we study the sample complexity of the learn-to-stabilize problem in Linear Time-Invariant (LTI) systems on a single trajectory. Current state-of-the-art approaches require a sample complexity linear in $n$, the state dimension, which incurs a state norm that blows up exponentially in $n$. We propose a novel algorithm based on spectral decomposition that only needs to learn ``a small part'' of the dynamical matrix acting on its unstable subspace. We show that, under proper assumptions, our algorithm stabilizes an LTI system on a single trajectory with $O(k \log n)$ samples, where $k$ is the instability index of the system. This represents the first sub-linear sample complexity result for the stabilization of LTI systems under the regime when $k = o(n)$.
Accept
The paper considers an interesting and timely theoretical problem at the intersection of control and machine learning. The paper is well-written and the results are novel and provide insight into how and when the sample complexity for stabilizing an unknown LTI system can be decreased. I thus recommend that it be accepted. Nevertheless, I believe that the assumptions imposed are strong (full state measurements, strong controllability assumptions, requirements on the initial state, and systems with at least two unstable eigenvalues and no integrator dynamics), while the adaptive control literature of the 80's solved similar stabilization problems under much less stringent assumptions. Even though the contribution is theoretical, it would be useful to include a physics-based example to make the ideas and assumptions more concrete and to compare the proposed approach against the state of the art from both classical adaptive control and modern learning-based techniques.
train
[ "iQxqHn6GBfZ", "WUIddvs84TP", "yVhAx9TtCHA", "3sOb2sa8wU", "QWJYeyjD1t1", "rKfWpEcWrDn", "uYEK4xHLJ75", "M9yVdgXIQEk", "s3juzzr95xq", "SFXnzEdHHnk", "txll1s1jGp", "geVTsMT6oD", "98URclpnD8", "eFoiMO9UkqC", "82uzmExv6bO", "N7WAVZ_bkc5" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your additional comments! \n\n1. (technical assumptions) We would like to point out that the implicit assumption $k = m$ in Assumption 4.3 is actually removed using the method in Appendix C. Also, Assumption 4.3 in Appendix C can be further relaxed to the controllability of the original system (see the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 4 ]
[ "yVhAx9TtCHA", "txll1s1jGp", "QWJYeyjD1t1", "M9yVdgXIQEk", "rKfWpEcWrDn", "N7WAVZ_bkc5", "s3juzzr95xq", "s3juzzr95xq", "82uzmExv6bO", "txll1s1jGp", "eFoiMO9UkqC", "98URclpnD8", "nips_2022_A7O7Fl5Qo9W", "nips_2022_A7O7Fl5Qo9W", "nips_2022_A7O7Fl5Qo9W", "nips_2022_A7O7Fl5Qo9W" ]
nips_2022_uxWr9vEdsBh
A Lagrangian Duality Approach to Active Learning
We consider the pool-based active learning problem, where only a subset of the training data is labeled, and the goal is to query a batch of unlabeled samples to be labeled so as to maximally improve model performance. We formulate the problem using constrained learning, where a set of constraints bounds the performance of the model on labeled samples. Considering a primal-dual approach, we optimize the primal variables, corresponding to the model parameters, as well as the dual variables, corresponding to the constraints. As each dual variable indicates how significantly the perturbation of the respective constraint affects the optimal value of the objective function, we use it as a proxy of the informativeness of the corresponding training sample. Our approach, which we refer to as Active Learning via Lagrangian dualitY, or ALLY, leverages this fact to select a diverse set of unlabeled samples with the highest estimated dual variables as our query set. We demonstrate the benefits of our approach in a variety of classification and regression tasks and discuss its limitations depending on the capacity of the model used and the degree of redundancy in the dataset. We also examine the impact of the distribution shift induced by active sampling and show that ALLY can be used in a generative mode to create novel, maximally-informative samples.
Accept
In this paper, the authors formulate batch active learning as a constrained optimization problem, and develop a primal-dual approach to select a diverse set of unlabeled samples. The idea of using constrained optimization for active learning is novel and interesting, and the experimental results are also promising.
val
[ "l_nZmmt9HsP", "ToW-zEIsUR", "UJTSK7RBUiF", "6mweJmYACSU", "2n77jJZ2lNu", "ryDi5iZzPp0", "U1B_qupPRl", "dWZ3TKRa8W9", "EMqgOoQnRiJ", "4DUPysewh_p", "K7lMgo62bcu", "O3z9OYkVEfw", "7tB9hB_6yQe", "RycULtpSWR0", "WgSSvlGfIC", "STWpWOizc5o", "o7P0lyEHyKe", "KsqkRqQdTeA", "ENUBPpL1qmF"...
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Thank you very much for increasing your score. We will make sure to add more discussion on the convexity assumption, and we will also add the additional experiments to the camera-ready version of the paper.", " Thank you very much for reading our response, and thanks again for your positive evaluation of our wo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "UJTSK7RBUiF", "6mweJmYACSU", "K7lMgo62bcu", "O3z9OYkVEfw", "ryDi5iZzPp0", "U1B_qupPRl", "4DUPysewh_p", "WgSSvlGfIC", "STWpWOizc5o", "o7P0lyEHyKe", "qWhUnDLO6kd", "Hp1AEaZxPkN", "ENUBPpL1qmF", "ENUBPpL1qmF", "KsqkRqQdTeA", "KsqkRqQdTeA", "KsqkRqQdTeA", "nips_2022_uxWr9vEdsBh", "n...
nips_2022_Ypp6z77A6_
Fast Bayesian Estimation of Point Process Intensity as Function of Covariates
In this paper, we tackle the Bayesian estimation of point process intensity as a function of covariates. We propose a novel augmentation of the permanental process called the augmented permanental process, a doubly-stochastic point process that uses a Gaussian process on the covariate space to describe the Bayesian a priori uncertainty present in the square root of the intensity, and derive a fast Bayesian estimation algorithm that scales linearly with data size without relying on either domain discretization or Markov chain Monte Carlo computation. The proposed algorithm is based on a non-trivial finding that the representer theorem, one of the most desirable mathematical properties for machine learning problems, holds for the augmented permanental process, which provides us with many significant computational advantages. We evaluate our algorithm on synthetic and real-world data, and show that it outperforms state-of-the-art methods in terms of predictive accuracy while being substantially faster than a conventional Bayesian method.
Accept
The paper looks at the Augmented Permanental point process as a model of (spatial) point phenomena. One reviewer was unconvinced that the method was needed at all, which the authors refuted. There was a long exchange, but I'm on the side of the authors here - the method is clearly distinct from a point process defined on the covariate space. I'm happy that the reviewer was able to make their point and that the discussion was enabled, and I applaud the authors for their patient responses. I'm disregarding that reviewer's score. The reviewers suggest that an important and interesting contribution is the representer theorem for squared processes. I also appreciated the authors' discussion of the approximation error on the integral operator. These should be highlighted in the manuscript. There was occasional confusion from the reviewers on notation: for example, the reviewers failed to spot that the performance of the method was presented in the paper using the $\tau$ column in the tables. Please double-check all the reviewer feedback for clarifications. Overall, I think that there are a couple of interesting ideas in the paper that people working on point process data will be impacted by, and am recommending that this is just above the acceptance threshold.
train
[ "andzm55onHH", "kEC2ZuQMwDI", "qQBwv8DAND-", "d31fPMbas4", "mWaxJvVlzXn", "xRfNdLpBdK", "bc_iDWWxvSk", "523bp_QOWpg", "95e_Vi4JnrK", "ARpj54X5LeV", "JoCcb35-vMd", "S_drFgBeW-", "XrPg3wHBJg7", "AdPO3KMkcz", "VJwrek3fOJz", "Kz-qpW7I2u", "wd7ojPJbdK9", "SFw-S9rjIu", "_v4iPOGBkelq", ...
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_revie...
[ " Though you were not able to convince me of the need for this work, I appreciate the effort you put into this discussion and responses to other reviewers' concerns. I offer the following final remarks.\n\n**1.** As previously mentioned your toy example is a parametric example, and it doesn't make much sense to mod...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 3, 4 ]
[ "mWaxJvVlzXn", "qQBwv8DAND-", "bc_iDWWxvSk", "ARpj54X5LeV", "S_drFgBeW-", "95e_Vi4JnrK", "S_drFgBeW-", "AdPO3KMkcz", "KvzAYbz_iq0", "Kz-qpW7I2u", "UymPxdquh_N", "VJwrek3fOJz", "UymPxdquh_N", "KvzAYbz_iq0", "wd7ojPJbdK9", "SFw-S9rjIu", "9p94HNSPDVt", "8Vz-T75jb1", "cfmUwfHeEz3", ...
nips_2022_z0M3qHDqH20
HUMUS-Net: Hybrid Unrolled Multi-scale Network Architecture for Accelerated MRI Reconstruction
In accelerated MRI reconstruction, the anatomy of a patient is recovered from a set of undersampled and noisy measurements. Deep learning approaches have been proven to be successful in solving this ill-posed inverse problem and are capable of producing very high quality reconstructions. However, current architectures heavily rely on convolutions, that are content-independent and have difficulties modeling long-range dependencies in images. Recently, Transformers, the workhorse of contemporary natural language processing, have emerged as powerful building blocks for a multitude of vision tasks. These models split input images into non-overlapping patches, embed the patches into lower-dimensional tokens and utilize a self-attention mechanism that does not suffer from the aforementioned weaknesses of convolutional architectures. However, Transformers incur extremely high compute and memory cost when 1) the input image resolution is high and 2) when the image needs to be split into a large number of patches to preserve fine detail information, both of which are typical in low-level vision problems such as MRI reconstruction, having a compounding effect. To tackle these challenges, we propose HUMUS-Net, a hybrid architecture that combines the beneficial implicit bias and efficiency of convolutions with the power of Transformer blocks in an unrolled and multi-scale network. HUMUS-Net extracts high-resolution features via convolutional blocks and refines low-resolution features via a novel Transformer-based multi-scale feature extractor. Features from both levels are then synthesized into a high-resolution output reconstruction. Our network establishes new state of the art on the largest publicly available MRI dataset, the fastMRI dataset. We further demonstrate the performance of HUMUS-Net on two other popular MRI datasets and perform fine-grained ablation studies to validate our design.
Accept
Four reviewers generally favour accepting the paper, and I agree. The authors have done a good job of addressing the most pressing concerns of the reviewers in the rebuttal period.
train
[ "t6C8_LYR_d5", "IByyht2yQ27", "BWh3rKw_BQR", "Mhm4je7mRSp", "h2o4sSqjtTK", "wXKx_sUMx6B", "CMdYFi1E9i", "2pT6e1zomRx", "rHZgGBmpj8T", "5pfdD37qb-LB", "sKXgSSEuO2K", "SGtC5TkMS0l", "HabPKcp-yV", "qbQapmyW9O", "CqTmKA8hX2E", "82OXkXaftze", "08qaaHnTxtK", "8C0OZA4RGg", "TLowLcbyxH3"...
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering reviewers’ questions and addressing the feedback. I stand by my initial rating and consider this paper a nice contribution to the community.", " Thanks for the reply. \n\nIt is true that more parameters may not necessarily translate into matching performance as we see in the comparison b...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "SGtC5TkMS0l", "BWh3rKw_BQR", "CMdYFi1E9i", "wXKx_sUMx6B", "CqTmKA8hX2E", "5pfdD37qb-LB", "2pT6e1zomRx", "rHZgGBmpj8T", "TLowLcbyxH3", "sKXgSSEuO2K", "8C0OZA4RGg", "08qaaHnTxtK", "qbQapmyW9O", "82OXkXaftze", "nips_2022_z0M3qHDqH20", "nips_2022_z0M3qHDqH20", "nips_2022_z0M3qHDqH20", ...
nips_2022_pBz3h8VibKY
Interaction-Grounded Learning with Action-inclusive Feedback
Consider the problem setting of Interaction-Grounded Learning (IGL), in which a learner's goal is to optimally interact with the environment with no explicit reward to ground its policies. The agent observes a context vector, takes an action, and receives a feedback vector, using this information to effectively optimize a policy with respect to a latent reward function. Previously analyzed approaches fail when the feedback vector contains the action, which significantly limits IGL's success in many potential scenarios such as brain-computer interface (BCI) or human-computer interface (HCI) applications. We address this by creating an algorithm and analysis which allow IGL to work even when the feedback vector contains the action, encoded in any fashion. We provide theoretical guarantees and large-scale experiments based on supervised datasets to demonstrate the effectiveness of the new approach.
Accept
This paper addresses the problem of learning to behave optimally when actions result only in new observations but no rewards. Feedback is provided in the shape of a vector. This problem, known as IGL, has already been described in previous works, which had to assume that the action was not included in the feedback. This paper gets rid of this assumption and provides theoretical guarantees. The discussion was quite extensive, and the main issue raised by reviewers concerned the experimental setups. They were considered toy-ish and too far from a real application. In particular, the authors mentioned BCI and HCI in their intro (mainly focusing on the fact that having the action in the observation is unavoidable with humans in the loop) but didn't provide experiments involving actual BCI or HCI. The authors tried to address this issue by providing synthetic experiments simulating BCI and fMRI. As the authors stated, the cost of real experiments in that setup would be prohibitive. Given the effort made by the authors to provide experimental results supporting their claims, the algorithmic and theoretical contributions seem good enough to reach the acceptance bar.
train
[ "5somzmqQxS-", "nKnEOEWMD3m", "udSbIThlDs", "LbP8JodjoLb", "2XwMfauSErL", "3at6Tk_W1f2m", "4MPSOISm3pi", "3kLi4DftEiv", "YOVRZ1OXyLp", "ylIXo4mh4gf", "UnTPuUN6v5n", "3YN6cEGBd6M" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As the discussion period closes, we thank the reviewers for their attention. The paper has definitely improved due to the constructive feedback. We are especially pleased with the more realistic simulation, which was the direct result of reviewer comments.", " We thank the reviewer for correctly pointing out th...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 1, 3 ]
[ "nips_2022_pBz3h8VibKY", "LbP8JodjoLb", "3kLi4DftEiv", "2XwMfauSErL", "4MPSOISm3pi", "nips_2022_pBz3h8VibKY", "3YN6cEGBd6M", "UnTPuUN6v5n", "ylIXo4mh4gf", "nips_2022_pBz3h8VibKY", "nips_2022_pBz3h8VibKY", "nips_2022_pBz3h8VibKY" ]
nips_2022_PGQrtAnF-h
Identifiability of deep generative models without auxiliary information
We prove identifiability of a broad class of deep latent variable models that (a) have universal approximation capabilities and (b) are the decoders of variational autoencoders that are commonly used in practice. Unlike existing work, our analysis does not require weak supervision, auxiliary information, or conditioning in the latent space. Specifically, we show that for a broad class of generative (i.e. unsupervised) models with universal approximation capabilities, the side information $u$ is not necessary: We prove identifiability of the entire generative model where we do not observe $u$ and only observe the data $x$. The models we consider match autoencoder architectures used in practice that leverage mixture priors in the latent space and ReLU/leaky-ReLU activations in the encoder, such as VaDE and MFC-VAE. Our main result is an identifiability hierarchy that significantly generalizes previous work and exposes how different assumptions lead to different ``strengths'' of identifiability, and includes certain ``vanilla'' VAEs with isotropic Gaussian priors as a special case. For example, our weakest result establishes (unsupervised) identifiability up to an affine transformation, and thus partially resolves an open problem regarding model identifiability raised in prior work. These theoretical results are augmented with experiments on both simulated and real data.
Accept
All three reviewers gave solid recommendations for acceptance. The authors provided a fairly detailed response to each review, and each reviewer confirmed they would maintain their positive scores. A clear accept.
train
[ "5MlvjIr7pcx", "MHC1WNmehvE", "0cXFY1mg19", "Gt_Gz2Or8fT", "1WT-vx-ZiGM", "S5PAngYwKD3", "OYAnbdaIj8P", "kh6xsWAV4CN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response! I will maintain my positive score.", " I read the response and maintain my score. I congratulate the authors again on their valuable contribution.", " We thank the reviewer for their careful reading of our work and their insightful questions. While we agree that these directions are o...
[ -1, -1, -1, -1, -1, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 2, 2 ]
[ "0cXFY1mg19", "1WT-vx-ZiGM", "kh6xsWAV4CN", "OYAnbdaIj8P", "S5PAngYwKD3", "nips_2022_PGQrtAnF-h", "nips_2022_PGQrtAnF-h", "nips_2022_PGQrtAnF-h" ]
nips_2022_2clwrA2tfik
Dataset Distillation using Neural Feature Regression
Dataset distillation aims to learn a small synthetic dataset that preserves most of the information from the original dataset. Dataset distillation can be formulated as a bi-level meta-learning problem where the outer loop optimizes the meta-dataset and the inner loop trains a model on the distilled data. Meta-gradient computation is one of the key challenges in this formulation, as differentiating through the inner loop learning procedure introduces significant computation and memory costs. In this paper, we address these challenges using neural Feature Regression with Pooling (FRePo), achieving the state-of-the-art performance with an order of magnitude less memory requirement and two orders of magnitude faster training than previous methods. The proposed algorithm is analogous to truncated backpropagation through time with a pool of models to alleviate various types of overfitting in dataset distillation. FRePo significantly outperforms the previous methods on CIFAR100, Tiny ImageNet, and ImageNet-1K. Furthermore, we show that high-quality distilled data can greatly improve various downstream applications, such as continual learning and membership inference defense. Please check out our webpage at https://sites.google.com/view/frepo.
Accept
The paper proposes a new algorithm for dataset distillation, based on two key ideas: (1) train a linear layer given the fixed feature extractor, and (2) use a diverse set of modes as feature extractors. The paper has received overwhelmingly positive reviews. Many reviewers find the algorithm effective, the paper well-written, and the results compelling. The rebuttal further addressed the concerns regarding the backbone models and missing experiments as well as provided additional clarifications. The AC agreed with the reviewers’ consensus and recommended accepting the paper.
train
[ "FsO65lg7gs9", "C57BgsPgQUy", "vF7_AFz442-", "Y9FwwmBDpnd", "99m4EOIjItB", "HvkcX3vcGa", "ifmKkQl2m8b", "-BQIGsuusnO", "pz5Ld_GIBFY", "Kfvuf0kAMxEe", "8y8CMOFGu8B", "PPSz1OJ0Dwc", "AtkP4X-lwn", "pmetLOwCCi", "E5xTcjmNwr", "Nzlpw95xEou", "hvraONYXrCR", "jFMVFV-q7lA", "p6q0_pnjIK7"...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Thank you for the explanations and responses. This paper is a great work on making KIP and bi-level optimization algorithms practical, and should be known by the field.", " Dear reviewer,\n\nThanks for your efforts in the reviewing process! Let us know if you have any further questions before the end of the aut...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 4, 3 ]
[ "HvkcX3vcGa", "ifmKkQl2m8b", "E5xTcjmNwr", "bTYbyNBujPl", "ifmKkQl2m8b", "ifmKkQl2m8b", "ok8TDhqtIik", "pz5Ld_GIBFY", "Kfvuf0kAMxEe", "Nzlpw95xEou", "nips_2022_2clwrA2tfik", "bTYbyNBujPl", "bTYbyNBujPl", "ok8TDhqtIik", "jFMVFV-q7lA", "p6q0_pnjIK7", "p6q0_pnjIK7", "nips_2022_2clwrA2...
nips_2022_0SgKq4ZC9r
Decomposed Knowledge Distillation for Class-Incremental Semantic Segmentation
Class-incremental semantic segmentation (CISS) labels each pixel of an image with a corresponding object/stuff class continually. To this end, it is crucial to learn novel classes incrementally without forgetting previously learned knowledge. Current CISS methods typically use a knowledge distillation (KD) technique for preserving classifier logits, or freeze a feature extractor, to avoid the forgetting problem. The strong constraints, however, prevent learning discriminative features for novel classes. We introduce a CISS framework that alleviates the forgetting problem and facilitates learning novel classes effectively. We have found that a logit can be decomposed into two terms. They quantify how likely an input belongs to a particular class or not, providing a clue for a reasoning process of a model. The KD technique, in this context, preserves the sum of two terms ($\textit{i.e.}$, a class logit), suggesting that each could be changed and thus the KD does not imitate the reasoning process. To impose constraints on each term explicitly, we propose a new decomposed knowledge distillation (DKD) technique, improving the rigidity of a model and addressing the forgetting problem more effectively. We also introduce a novel initialization method to train new classifiers for novel classes. In CISS, the number of negative training samples for novel classes is not sufficient to discriminate old classes. To mitigate this, we propose to transfer knowledge of negatives to the classifiers successively using an auxiliary classifier, boosting the performance significantly. Experimental results on standard CISS benchmarks demonstrate the effectiveness of our framework.
Accept
This submission deals with incremental semantic segmentation. The authors propose two contributions: 1/ a new knowledge distillation loss based on two separate loss functions for positive and negative class logits; 2/ a dedicated initialization strategy for the classifiers of novel classes. They present strong results on several standard benchmarks for incremental semantic segmentation. This submission received diverging initial ratings. Reviewers raised important concerns about the strategy and some missing experiments. After the rebuttal, the active reviewers appreciated the answers and the additional experiments provided. Following discussions, the final scores of the active reviewers increased and are clearly positive on this submission. On the whole, the novelty and interest of the proposal stand out clearly. The AC agrees that the strengths in this case outweigh the weaknesses. The authors are encouraged to consider all comments for their final version.
train
[ "HQzci0deRyO", "f3nkqBCgTOD", "vSNSj0rPYbI", "Y9x8RGzSOqV", "tDOCVH0ozCl", "bGW8otATF28", "2DHhy5te2An", "mFnNdszBwcV", "7z4MTqmpcnL", "4zKKuLoDZN2", "btRqR3UYMz", "PiXF2LP-sG" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply. We have obtained the results in Table R7 using the official code provided by the authors of MiB [5], PLOP [9], and SSUL [6]. We further integrate MiB into our own codebase to compare the results under identical simulation settings. Specifically, we train MiB with the same hyperparameters...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "f3nkqBCgTOD", "Y9x8RGzSOqV", "PiXF2LP-sG", "PiXF2LP-sG", "PiXF2LP-sG", "btRqR3UYMz", "btRqR3UYMz", "btRqR3UYMz", "4zKKuLoDZN2", "nips_2022_0SgKq4ZC9r", "nips_2022_0SgKq4ZC9r", "nips_2022_0SgKq4ZC9r" ]
nips_2022_YsRH6uVcx2l
On Learning Fairness and Accuracy on Multiple Subgroups
We propose an analysis in fair learning that preserves the utility of the data while reducing prediction disparities under the criterion of group sufficiency. We focus on the scenario where the data contain multiple or even many subgroups, each with a limited number of samples. As a result, we present a principled method for learning a fair predictor for all subgroups by formulating it as a bilevel objective. Specifically, the subgroup-specific predictors are learned in the lower level from a small amount of data and the fair predictor. In the upper level, the fair predictor is updated to be close to all subgroup-specific predictors. We further prove that such a bilevel objective can effectively control the group sufficiency and generalization error. We evaluate the proposed framework on real-world datasets. Empirical evidence suggests consistently improved fair predictions, as well as accuracy comparable to the baselines.
Accept
All reviewers acknowledged the contribution of the paper and its focus on the sufficiency gap across multiple (potentially a large number of) subgroups. This is also a practically challenging setting, with each subgroup having only a very limited number of samples. The paper presents a bilevel optimization solution that shows favorable performance and is believed to be a solid and novel contribution to the relevant literature. The reviewers are unanimously happy with the reported results and the ones added during the rebuttal.
train
[ "MOZY9whw0o", "uaMwgCW09vK", "GLl4MLzR7R", "eSiwmxC6AO", "7j55WJ3ILed", "6FCGKYY04Ct", "jWGju9NLsJE", "KlpfBmEgsQ", "fAMpFCz-HYs", "DRGisGIo3j", "tnm8u6jzauk", "Ylh3KZfTgDd", "UVKwy76BZHI", "JpRHdd5Ke3TG", "e4hH5wQK84", "EgZArpPcwc", "fbCrXphbHX6", "g1_-YTEliQ", "d8vcf0IyJS", "...
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your feedback ! We will include your suggestions in the next version.", " Thanks for your feedback ! We will include your suggestions and comments in the next version.", " Thanks for your feedback! When facing many subgroups, we think there exists a tradeoff between the memory complexity and traini...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 3 ]
[ "d8vcf0IyJS", "g1_-YTEliQ", "7j55WJ3ILed", "6FCGKYY04Ct", "e4hH5wQK84", "jWGju9NLsJE", "KlpfBmEgsQ", "DRGisGIo3j", "nips_2022_YsRH6uVcx2l", "tnm8u6jzauk", "Xl5pH_x9wTs", "d8vcf0IyJS", "JpRHdd5Ke3TG", "g1_-YTEliQ", "EgZArpPcwc", "fbCrXphbHX6", "nips_2022_YsRH6uVcx2l", "nips_2022_YsR...
nips_2022_IILJ0KWZMy9
Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations
Neural networks have achieved tremendous success in a large variety of applications. However, their memory footprint and computational demand can render them impractical in application settings with limited hardware or energy resources. In this work, we propose a novel algorithm to find efficient low-rank subnetworks. Remarkably, these subnetworks are determined and adapted already during the training phase and the overall time and memory resources required by both training and evaluating them is significantly reduced. The main idea is to restrict the weight matrices to a low-rank manifold and to update the low-rank factors rather than the full matrix during training. To derive training updates that are restricted to the prescribed manifold, we employ techniques from dynamic model order reduction for matrix differential equations. Moreover, our method automatically and dynamically adapts the ranks during training to achieve a desired approximation accuracy. The efficiency of the proposed method is demonstrated through a variety of numerical experiments on fully-connected and convolutional networks.
Accept
This paper proposes a Dynamic Low Rank Training scheme (DLRT) that optimizes the neural network weight matrices under a low-rank constraint and hence yields better computational and memory efficiency. The obtained solution is low rank and thus naturally realizes a factorization of the weights. The optimization procedure can be equipped with an adaptive rank selection scheme. The proposed method is mainly justified by numerical experiments. The optimization method is derived from the gradient flow along the low-rank matrix manifold. This paper gives a theoretically solid optimization scheme. The presentation is overall good: the paper is well organized, and the content required to understand the contribution is appropriately presented. There are still some weaknesses. First, the paper would benefit from covering more related topics and discussing their connection to this work in more depth. Second, it would enhance the paper if the additional numerical experiments on larger data and models (ImageNet/ResNet50) presented in the rebuttal phase were included. In summary, although there are some weaknesses, this paper gives a novel and solid methodology for obtaining low-rank weight matrices, so I recommend acceptance. On the other hand, I strongly recommend that the authors address the issues pointed out by the reviewers in the final version.
val
[ "3h0EFX42SHe", "jGmseEkcDMK", "JWvxNL5xgmk", "QTSzGg7_pxc", "didAcIcbEPNG", "OllQtOqCiqX", "vfO8AMd5-Pl", "UDTP5ix5Imo", "VQ5cjqmV19N", "OSuK9g06gU", "0tYWYfjbS2kn", "gKdPN6wU1lH", "wdgQipyFOtJ", "afE4TVbdn8H", "S_YqbUQuNxp" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed response. My only concern is about the experiments. Thanks for the authors to provide CIFAR experiments in appendix and it seems good to me. Since my rating is already 7, I would like to keep it if no further problems are issued by the other reviewers. \n\nThank you again for your nice ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "0tYWYfjbS2kn", "QTSzGg7_pxc", "OllQtOqCiqX", "VQ5cjqmV19N", "OllQtOqCiqX", "UDTP5ix5Imo", "UDTP5ix5Imo", "gKdPN6wU1lH", "OSuK9g06gU", "S_YqbUQuNxp", "afE4TVbdn8H", "wdgQipyFOtJ", "nips_2022_IILJ0KWZMy9", "nips_2022_IILJ0KWZMy9", "nips_2022_IILJ0KWZMy9" ]
nips_2022_hzbguA9zMJ
If Influence Functions are the Answer, Then What is the Question?
Influence functions efficiently estimate the effect of removing a single training data point on a model's learned parameters. While influence estimates align well with leave-one-out retraining for linear models, recent works have shown this alignment is often poor in neural networks. In this work, we investigate the specific factors that cause this discrepancy by decomposing it into five separate terms. We study the contributions of each term on a variety of architectures and datasets and how they vary with factors such as network width and training time. While practical influence function estimates may be a poor match to leave-one-out retraining for nonlinear networks, we show that they are often a good approximation to a different object we term the proximal Bregman response function (PBRF). Since the PBRF can still be used to answer many of the questions motivating influence functions, such as identifying influential or mislabeled examples, our results suggest that current algorithms for influence function estimation give more informative results than previous error analyses would suggest.
Accept
All reviewers felt that this paper deserves to be accepted given its interesting analysis of an important problem and compelling empirical evaluations. The authors should consider incorporating the reviewers' comments, particularly around motivating the PBRF as a "gold standard" and clarifying aspects of the presentation, in the final version of the paper.
train
[ "m6BwGcNpsn0", "culpKMUDyAA", "REUcDQ4FDOn", "VNk0JvkeJBm", "04RmfZ1Pnn0", "0XXVFmkWQ7E", "iBi_xKdjUsw", "nF4XJ7XSEEG", "Tv1WOXyN4l", "gycP5562nFE", "eT7eeunBeYW" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response! My concerns have been addressed.", " Thank you for the rebuttal response. \n\nI feel this work solves a major problem in evaluating influence functions, hence I would recommend a strong acceptance.\n\nTo make this work more impactful, I would still urge to (a) Include experiments on mo...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 5 ]
[ "VNk0JvkeJBm", "REUcDQ4FDOn", "eT7eeunBeYW", "gycP5562nFE", "Tv1WOXyN4l", "nF4XJ7XSEEG", "nips_2022_hzbguA9zMJ", "nips_2022_hzbguA9zMJ", "nips_2022_hzbguA9zMJ", "nips_2022_hzbguA9zMJ", "nips_2022_hzbguA9zMJ" ]
nips_2022_NYF6jNTAui
Locally Hierarchical Auto-Regressive Modeling for Image Generation
We propose a locally hierarchical auto-regressive model with multiple resolutions of discrete codes. Our model represents an image with a pyramid of codes via Hierarchically Quantized Variational AutoEncoder (HQ-VAE) in the first stage, and disentangles the information contained in the multi-level codes. For an example of two-level codes, we create two separate pathways to carry high-level coarse structures of input images using top codes while compensating for missing fine details by constructing a residual connection for bottom codes. An appropriate selection of resizing operations for code embedding maps enables top codes to capture maximal information within images and the first stage algorithm achieves better performance on both vector quantization and image generation. Hierarchically Quantized Transformer (HQ-Transformer) in the second stage processes a sequence of local pyramids, which consist of a single top code and its corresponding bottom codes. Contrary to other hierarchical models, we sample bottom codes in parallel by exploiting the conditional independence assumption on the bottom codes. This assumption is naturally harvested from our first-stage model, HQ-VAE, where the bottom code learns to describe local details. On class-conditional and text-conditional generation benchmarks, our model shows competitive performance to previous AR models in terms of fidelity of generated images while enjoying lighter computational budgets.
Accept
The work proposes hierarchical generation as an approach to mitigate the increasing costs of modelling each pixel autoregressively as images increase to higher resolutions. The paper is easy to read, has strong empirical results, and has impressive ablation studies to understand the idea. I think the paper could be improved most by clarifying its relationship to prior work and by being more specific about generalizability. 1. There are already quite a few works proposing hierarchical generation based on vector quantization, such as VQVAE-2, RQ-Transformer, and RQ-VAE. The authors should include a discussion of the differences in the paper. In the rebuttal, the authors argue the major difference is parallel decoding based on conditionally independent generation of bottom features given the top ones. These approaches are conceptually so similar that they are worth comparing against in controlled experiments, rather than simply comparing paper numbers obtained under completely different setups (e.g., code size and number of parameters). 2. I share Reviewer CKx9's concern about the generalizability of the method, as it is difficult to see how it could extend to resolutions higher than what it currently studies (256x256). The computational complexity analyzed in Sec 3.2.3 is also incorrect: the Prediction Head Transformer, for example, can dominate the overall cost rather than the main Transformer, which the analysis assumes to be the most expensive component.
train
[ "cjLpdo_iyA3", "qQnJmKvdFx", "mv34s5z7Sh", "9gqnPmyikb7C", "pZwonH7vAUn", "wnlYhjn1LE0", "zjt2DCI_-5", "vC1oxAJewt2G", "XGrhddr667_", "aLEU26wQEvG", "NY5mb_2G9Y", "2T1-D8QTgpu", "jhnE0kFnqt6", "N1hGkg6YF5c" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ### Summary of the contributions of the proposed method\nWe appreciate all reviewers for their efforts in constructive comments and discussions. We summarize the main contributions of the proposed method below.\n\n- We propose a locally hierarchical auto-regressive (AR) model leveraging **a pyramid of discrete vi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "nips_2022_NYF6jNTAui", "vC1oxAJewt2G", "NY5mb_2G9Y", "XGrhddr667_", "nips_2022_NYF6jNTAui", "zjt2DCI_-5", "vC1oxAJewt2G", "2T1-D8QTgpu", "aLEU26wQEvG", "N1hGkg6YF5c", "jhnE0kFnqt6", "nips_2022_NYF6jNTAui", "nips_2022_NYF6jNTAui", "nips_2022_NYF6jNTAui" ]
nips_2022_exDlhqs1Qr
Amortized Proximal Optimization
We propose a framework for online meta-optimization of parameters that govern optimization, called Amortized Proximal Optimization (APO). We first interpret various existing neural network optimizers as approximate stochastic proximal point methods which trade off the current-batch loss with proximity terms in both function space and weight space. The idea behind APO is to amortize the minimization of the proximal point objective by meta-learning the parameters of an update rule. We show how APO can be used to adapt a learning rate or a structured preconditioning matrix. Under appropriate assumptions, APO can recover existing optimizers such as natural gradient descent and KFAC. It enjoys low computational overhead and avoids expensive and numerically sensitive operations required by some second-order optimizers, such as matrix inverses. We empirically test APO for online adaptation of learning rates and structured preconditioning matrices for regression, image reconstruction, image classification, and natural language translation tasks. Empirically, the learning rate schedules found by APO generally outperform optimal fixed learning rates and are competitive with manually tuned decay schedules. Using APO to adapt a structured preconditioning matrix generally results in optimization performance competitive with second-order methods. Moreover, the absence of matrix inversion provides numerical stability, making it effective for low-precision training.
Accept
This paper proposes a technique for online adaptation of optimization hyper-parameters. The key advance seems to be the simultaneous minimization of function and weight space discrepancy, in addition to the minimization of the minibatch loss. All reviewers recommended this paper be accepted. One reviewer raised their score by two points, and went on to "moderately champion" the paper during discussion. The paper appears to have additionally meaningfully improved through the course of the review process. Based upon the reviewer consensus, I also recommend paper acceptance.
train
[ "q1HkMqZnxd", "SO7ZqfPtr9Q", "fGkSzCuyXkF", "p33xKH0ww_", "ekziElwZeh-", "BKiNeAfPolK", "XfOiIbeINlB", "cpo8ME1iRT6g", "ti36w3WadEgO", "SEx9FZuxwg", "n_AEceUxYnw", "AocYwfamExA", "CDbDEE9bYxU", "5LlaBXnToaR" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' efforts for those additional experiments. The results look satisfactory. Actually my original intention was to check whether APO can be applied to other types of hyperparameters than optimizer parameters, but the RMSProp experiments seem interesting as well. I'm keeping my score, but can...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3 ]
[ "SO7ZqfPtr9Q", "fGkSzCuyXkF", "cpo8ME1iRT6g", "ekziElwZeh-", "BKiNeAfPolK", "5LlaBXnToaR", "CDbDEE9bYxU", "ti36w3WadEgO", "SEx9FZuxwg", "AocYwfamExA", "nips_2022_exDlhqs1Qr", "nips_2022_exDlhqs1Qr", "nips_2022_exDlhqs1Qr", "nips_2022_exDlhqs1Qr" ]
nips_2022_JXY11Tc9mwY
Submodular Maximization in Clean Linear Time
In this paper, we provide the first deterministic algorithm that achieves $1/2$-approximation for monotone submodular maximization subject to a knapsack constraint, while making a number of queries that scales only linearly with the size of the ground set $n$. Moreover, our result automatically paves the way for developing a linear-time deterministic algorithm that achieves the tight $1-1/e$ approximation guarantee for monotone submodular maximization under a cardinality (size) constraint. To complement our positive results, we also show strong information-theoretic lower bounds. More specifically, we show that when the maximum cardinality allowed for a solution is constant, no deterministic or randomized algorithm making a sub-linear number of function evaluations can guarantee any constant approximation ratio. Furthermore, when the constraint allows the selection of a constant fraction of the ground set, we show that any algorithm making fewer than $\Omega(n/\log(n))$ function evaluations cannot perform better than an algorithm that simply outputs a uniformly random subset of the ground set of the right size. We extend our results to the general case of maximizing a monotone submodular function subject to the intersection of a $p$-set system and multiple knapsack constraints. Finally, we evaluate the performance of our algorithms on multiple real-life applications, including movie recommendation, location summarization, Twitter text summarization, and video summarization.
Accept
Overall, this paper achieves strong and interesting results regarding the query complexity of submodular maximization. One reviewer was concerned that the lower bound result was maybe a folklore result that is easy to prove. However, sufficient evidence to justify that claim was not provided and other reviewers did not share the same concern.
test
[ "RumhAojr8_r", "1U1u3OTE_xj", "2eQqJPVixcF", "Y2Jb9nsdMk", "mxftUH_6W5W", "NfDuhGfVW-y", "FWWVJqzZvyj", "lslCpt3zVU2", "QBc8fXBOdFp", "QitLvUMQU2W", "wLykreleleV", "JrXOQeKfqyN", "3depLLb350_", "xAiN_qT2Mbu", "tV-bVfJHrE1", "g5-XzzRmLD8", "5zYrwsN46S6", "dv9bV3YVHgg", "P2OTBF6SFf...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_...
[ " Dear reviewer d5nd,\n\nYou refer to the lower bound as being folklore and already known. If this is the case, could you provide a reference mentioning this?\n\nThank you for your review and discussion about this paper.", " Thanks for your reply.\n\nIt is nothing to do with the sentence, it just my personal sugg...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "1U1u3OTE_xj", "2eQqJPVixcF", "Y2Jb9nsdMk", "WmgYNX_nnyv", "WmgYNX_nnyv", "WmgYNX_nnyv", "WmgYNX_nnyv", "vySq3fdPtQV", "vySq3fdPtQV", "vySq3fdPtQV", "P2OTBF6SFf", "P2OTBF6SFf", "P2OTBF6SFf", "P2OTBF6SFf", "dv9bV3YVHgg", "dv9bV3YVHgg", "dv9bV3YVHgg", "nips_2022_JXY11Tc9mwY", "nips...
nips_2022_L2Niz4Olng
MMC Transformer: Multiscale Multigrid Comparator Transformer for Few-Shot Video Segmentation
Learning to compare support and query feature sets for few-shot image and video understanding has been shown to be a powerful approach. Typically, methods limit feature comparisons to a single feature layer and thus ignore potentially valuable information. In particular, comparators that operate with early network layer features support precise localization, but lack sufficient semantic abstraction. At the other extreme, operating with deeper layer features provides richer descriptors, but sacrifices localization. In this paper, we address this scale selection challenge with a meta-learned Multiscale Multigrid Comparator (MMC) transformer that combines information across scales. The multiscale, multigrid operations encompassed by our architecture provide bidirectional information transfer between deep and shallow features (i.e., coarse-to-fine and fine-to-coarse). Thus, the overall comparisons among query and support features benefit from both rich semantics and precise localization. Additionally, we present a novel multiscale memory learning scheme in the decoder within a meta-learning framework. This augmented memory preserves the detailed feature maps during the information exchange across scales and reduces confusion between the background and novel classes. To demonstrate the efficacy of our approach, we consider two related tasks, few-shot video object and actor/action segmentation. Empirically, our model outperforms state-of-the-art approaches on both tasks.
Reject
The paper develops a multigrid variant of the transformer architecture and applies it to video segmentation tasks. After the author response and discussion phase, one reviewer recommends accept, but three of the four reviewers lean towards rejecting the paper. In discussion, these three reviewers all acknowledged having read the author rebuttal and chose not to improve their scores. The common concerns voiced across these reviews center on questionable novelty, clarity of explanation, and incremental experimental impact. The Area Chair has also taken a detailed look at the paper, reviews, and author responses, and agrees with the concerns raised by these three reviewers. Reviewer HsqG notes that "the idea of reasoning across multiple feature levels of a CNN/similarity tensors is not novel (as pointed out by the authors too)" and "Section 2.2 seems to be inflating the contribution here." Section 2.2 and the author response highlight "bidirectional information exchange across scales, inspired by multigrid methods" as a key contribution. However, reference [14] (Ke et al., Multigrid neural architectures, CVPR'17) explores exactly this idea of bidirectional information flow across scales within a CNN architecture, and even utilizes the same "multigrid" terminology. From the standpoint of neural network architecture design, the current paper's novelty appears limited to adapting previously established ideas to transformers. So as not to appear to overclaim, the paper needs a broader discussion of the relationship of the proposed design to [14] as well as other prior work spanning multiscale, multiresolution, and feature pyramid architectures. Reviewer concerns over experiments include marginal gains over ablated variants of the system (Table 1), and mixed results in comparison to DANet provided in the author response. Overall, the response left unresolved questions over contribution novelty, presentation clarity, and practical impact.
test
[ "W34LNCVO28", "v6Wg7rS70P", "JDwvEKYnBPE", "KJRoWNSz9ix", "UZ2ZOyGtjdd", "Xg1F24teyfxC", "UmEE2a3SYi", "50WMyVR4sQk", "ImFt3l-J3d1", "b4RXCULAnbZ", "JvWOM-eW5K3", "xPGI67I6LDe" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We wanted to clarify on the AVOS experiments that we are outperforming the state-of-the-art methods even in the fully supervised VOS task by evaluating with ResNet101 backbone to make it similar to SOA methods MATNet[A] and RTNet[B] shown in Table 5. Even with the weaker backbone of ResNet101 we show our method o...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 7, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "UZ2ZOyGtjdd", "JDwvEKYnBPE", "KJRoWNSz9ix", "xPGI67I6LDe", "JvWOM-eW5K3", "b4RXCULAnbZ", "ImFt3l-J3d1", "nips_2022_L2Niz4Olng", "nips_2022_L2Niz4Olng", "nips_2022_L2Niz4Olng", "nips_2022_L2Niz4Olng", "nips_2022_L2Niz4Olng" ]
nips_2022_rOimdw0-sx9
Distributionally Adaptive Meta Reinforcement Learning
Meta-reinforcement learning algorithms provide a data-driven way to acquire policies that quickly adapt to many tasks with varying rewards or dynamics functions. However, learned meta-policies are often effective only on the exact task distribution on which they were trained and struggle in the presence of distribution shift of test-time rewards or transition dynamics. In this work, we develop a framework for meta-RL algorithms that are able to behave appropriately under test-time distribution shifts in the space of tasks. Our framework centers on an adaptive approach to distributional robustness that trains a population of meta-policies to be robust to varying levels of distribution shift. When evaluated on a potentially shifted test-time distribution of tasks, this allows us to choose the meta-policy with the most appropriate level of robustness, and use it to perform fast adaptation. We formally show how our framework allows for improved regret under distribution shift, and empirically show its efficacy on simulated robotics problems under a wide range of distribution shifts.
Accept
The paper introduces a new framework for meta-RL and validates it on navigation and goal-reaching tasks. The authors addressed several of the reviewers' concerns in the rebuttal and significantly improved the quality of the paper. I think the paper is novel and interesting for the community. It would be interesting to see the performance of the proposed algorithm in more continuous tasks, as suggested by reviewer Asdp.
train
[ "7FJFhsP934n", "Eb3YSY0-gRd", "c9Ef7MxG_Yp", "OX078l-NN_w_", "pF_jfF32Z9K", "7Cbr7xAxbruX", "HYOz1vgoO9", "WJ4syJMxz_T", "MFE43R5ESOmT", "eujhdlCPfM8", "_g7dJMCe21v", "vcsb652DuEJp", "rHr_2V4Z6v", "Cm8UVzcjG8W", "aF8jGvi3e2r", "SfL-7l5I7ye", "zILFD3W8gBa", "yjXUvjUtQPY", "m14LCIY...
[ "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer Z27V,\n\nSince we are coming to the end of the reviewer-author discussion period, we would like to know if you have remaining questions or concerns following our response. Please let us know. We would be happy to offer further clarification in the time remaining.\n\nBest,\n\nAuthors ", " Dear Revi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "m14LCIY1Ds", "Cm8UVzcjG8W", "rHr_2V4Z6v", "nips_2022_rOimdw0-sx9", "m14LCIY1Ds", "HXIRSyFZ3z", "vcsb652DuEJp", "MFE43R5ESOmT", "eujhdlCPfM8", "_g7dJMCe21v", "m14LCIY1Ds", "GdctoKfYKwS", "HXIRSyFZ3z", "aF8jGvi3e2r", "yjXUvjUtQPY", "zILFD3W8gBa", "nips_2022_rOimdw0-sx9", "nips_2022_...
nips_2022_Nx4gNemvNvx
Byzantine-tolerant federated Gaussian process regression for streaming data
In this paper, we consider Byzantine-tolerant federated learning for streaming data using Gaussian process regression (GPR). In particular, a cloud and a group of agents aim to collaboratively learn a latent function where some agents are subject to Byzantine attacks. We develop a Byzantine-tolerant federated GPR algorithm, which includes three modules: agent-based local GPR, cloud-based aggregated GPR and agent-based fused GPR. We derive the upper bounds on prediction error between the mean from the cloud-based aggregated GPR and the target function provided that Byzantine agents are less than one quarter of all the agents. We also characterize the lower and upper bounds of the predictive variance. Experiments on a synthetic dataset and two real-world datasets are conducted to evaluate the proposed algorithm.
Accept
This paper proposes a novel Byzantine-robust method for performing Gaussian process regression in a Federated learning setup. The contributions include the proposed method (novel), an accompanying theoretical analysis, and experiments. The reviewers were generally favorable about this paper, finding that the approach is novel, and appreciating the theoretical guarantees. Several criticisms were raised in the initial reviews and these were resolved in the post-rebuttal discussion. While the paper may have some limitations, there was consensus that the paper should be accepted. When preparing a camera ready version, please take care to improve the presentation and address common concerns that came up in the reviews, such as more clearly describing the relationship to previous work, and establishing the relationship between the attack model considered in this paper and that of previous work.
train
[ "m67SQ0gqyB8", "8IApyYxXEg0", "heY2tJDFgzR", "TBSzROrdL8y", "GZ9gvO6LIKP", "66KdHkgktQ", "qd7P3VrLhOX", "CKsGnMtTQeA", "tz_O2VDgYCa", "V6ai9oVq0QqE", "wGpdmgt_Be", "kBZbU2Xnhh", "OwMgQUlwa3F", "OJ7nxfdZCtL", "6-OmNzTQcfY", "gM1E9OJD0eQ", "MfqpRZ9qXqx" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " $Answer:$ Thank you so much for the suggestion, and we modify it in our revised paper. In this paper, we train GPR using standard method and using all the local data. But the local GPR makes prediction for each test inputs using only one data point that is nearest to the test point. This reduces the computation c...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "GZ9gvO6LIKP", "66KdHkgktQ", "66KdHkgktQ", "qd7P3VrLhOX", "kBZbU2Xnhh", "wGpdmgt_Be", "V6ai9oVq0QqE", "MfqpRZ9qXqx", "MfqpRZ9qXqx", "gM1E9OJD0eQ", "gM1E9OJD0eQ", "gM1E9OJD0eQ", "6-OmNzTQcfY", "6-OmNzTQcfY", "nips_2022_Nx4gNemvNvx", "nips_2022_Nx4gNemvNvx", "nips_2022_Nx4gNemvNvx" ]
nips_2022_uAIQymz0Qp
DMAP: a Distributed Morphological Attention Policy for learning to locomote with a changing body
Biological and artificial agents need to deal with constant changes in the real world. We study this problem in four classical continuous control environments, augmented with morphological perturbations. Learning to locomote when the length and the thickness of different body parts vary is challenging, as the control policy is required to adapt to the morphology to successfully balance and advance the agent. We show that a control policy based on the proprioceptive state performs poorly with highly variable body configurations, while an (oracle) agent with access to a learned encoding of the perturbation performs significantly better. We introduce DMAP, a biologically-inspired, attention-based policy network architecture. DMAP combines independent proprioceptive processing, a distributed policy with individual controllers for each joint, and an attention mechanism, to dynamically gate sensory information from different body parts to different controllers. Despite not having access to the (hidden) morphology information, DMAP can be trained end-to-end in all the considered environments, overall matching or surpassing the performance of an oracle agent. Thus DMAP, implementing principles from biological motor control, provides a strong inductive bias for learning challenging sensorimotor tasks. Overall, our work corroborates the power of these principles in challenging locomotion tasks. The code is available at the following link: https://github.com/amathislab/dmap
Accept
This paper examines the problem of learning locomotion for a simulated robot body, when the length and thickness of the body parts are varied. The proposed method generates a distributed policy with controllers for each joint, and an attention mechanism that dynamically gates sensory information from different body parts. The method performs well when compared to a policy based on proprioceptive state, and performs comparably to an oracle method given an embedding of the perturbed parameters. Strengths of the paper noted by the reviewers included the clear writing, the relevance of the problem, and the convincing results. The author response addressed potential limitations of the work raised by each of the reviewers. The reviewers were satisfied with the clarifications provided by the authors, and raised no additional concerns. Four reviewers recommend accepting the paper for its clear contributions on how to learn locomotion policies when the body parameters are changed. The paper is therefore accepted.
train
[ "TwKxPa9wTFg", "sCKV5FtcSNm", "TdwguKBYpK", "X3dt72_Usix", "JtqtpFLWSxW", "RQM9H0dsZdce", "zrP2Fuyj0oD", "T1WMhweempE", "luS7KNdFVC", "pBGXMIcy8mL", "Bxbnj_yVBKg", "m6HcoLRE_yo", "w5EsP4ouYQV", "50cGMs5orCt" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for the detailed reply to each of my concerns. As I initially stated, I think this work has no flows and could be of high interest to the community. My main concern was related to the amount of information crammed in the original paper, and lack of some important details.\nI have therefore u...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "pBGXMIcy8mL", "RQM9H0dsZdce", "JtqtpFLWSxW", "w5EsP4ouYQV", "m6HcoLRE_yo", "m6HcoLRE_yo", "Bxbnj_yVBKg", "50cGMs5orCt", "50cGMs5orCt", "50cGMs5orCt", "nips_2022_uAIQymz0Qp", "nips_2022_uAIQymz0Qp", "nips_2022_uAIQymz0Qp", "nips_2022_uAIQymz0Qp" ]
nips_2022_jRrpiqxtrWm
Simplified Graph Convolution with Heterophily
Recent work has shown that a simple, fast method called Simple Graph Convolution (SGC) (Wu et al., 2019), which eschews deep learning, is competitive with deep methods like graph convolutional networks (GCNs) (Kipf & Welling, 2017) in common graph machine learning benchmarks. The use of graph data in SGC implicitly assumes the common but not universal graph characteristic of homophily, wherein nodes link to nodes which are similar. Here we confirm that SGC is indeed ineffective for heterophilous (i.e., non-homophilous) graphs via experiments on synthetic and real-world datasets. We propose Adaptive Simple Graph Convolution (ASGC), which we show can adapt to both homophilous and heterophilous graph structure. Like SGC, ASGC is not a deep model, and hence is fast, scalable, and interpretable; further, we can prove performance guarantees on natural synthetic data models. Empirically, ASGC is often competitive with recent deep models at node classification on a benchmark of real-world datasets. The SGC paper questioned whether the complexity of graph neural networks is warranted for common graph problems involving homophilous networks; our results similarly suggest that, while deep learning often achieves the highest performance, heterophilous structure alone does not necessitate these more involved methods.
Accept
This paper proposes a new feature-smoothing graph learning algorithm that handles heterophilous graphs. The proposed idea is an adaptive node feature smoothing technique which essentially uses a regularized least squares fit to find a weighted sum of the Krylov matrix columns (smoothed features at different orders) that approximates the original features. Careful theoretical and empirical analyses are carried out on a stochastic block model with 2 communities. These analyses, although in a rather restrictive setting, provide valuable insights. Most reviewers appreciated the novelty of the idea and the theoretical and empirical insights on the 2-cluster SBM setting. The benefits over the previous method (SGC) on heterophilous graphs are well established. These feature-smoothing methods are generally interesting and important; they can better reveal the mechanisms of graph learning and potentially improve efficiency and representation learning. Most technical concerns (formulation, evaluation, baselines, datasets) were addressed and clarified during the rebuttal and discussion period. The theoretical results, although restricted to a relatively limited setting, provide good intuition for how the idea works and help build good heuristics in real-world settings. This is valuable and will benefit the NeurIPS audience.
train
[ "4DyQRRgZvbe", "LygN2NFHJ8", "ws7gMqe8nUx", "R3-aTGt-Piv", "FdUPRG9wzrN", "-OGc13mV6R", "mwjjYiRzBdG", "qhpi5t6VSF", "Ym5Eko9f6DB", "m3tmlWNfj8o", "NItaFduB0YK", "KgwqoJMS7G", "2228UtcZwrr", "WTSUi1bK862", "_R2UjCc2CMh", "wDtqaSeoAI", "b3Na0RJuwUI" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your continued engagement. We address each of these three points:\n\n1. In short, ASGC works by replacing the fixed feature propagation step of SGC with an adaptive one, which may or may not be smoothing based on the feature and graph; this corresponds to learning (rather than asserting) a polynomia...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "LygN2NFHJ8", "m3tmlWNfj8o", "R3-aTGt-Piv", "FdUPRG9wzrN", "-OGc13mV6R", "mwjjYiRzBdG", "qhpi5t6VSF", "Ym5Eko9f6DB", "b3Na0RJuwUI", "wDtqaSeoAI", "_R2UjCc2CMh", "WTSUi1bK862", "nips_2022_jRrpiqxtrWm", "nips_2022_jRrpiqxtrWm", "nips_2022_jRrpiqxtrWm", "nips_2022_jRrpiqxtrWm", "nips_20...
nips_2022_ANkIj-WI2XA
Learning Generalized Policy Automata for Relational Stochastic Shortest Path Problems
Several goal-oriented problems in the real world can be naturally expressed as Stochastic Shortest Path problems (SSPs). However, the computational complexity of solving SSPs makes finding solutions to even moderately sized problems intractable. State-of-the-art SSP solvers are unable to learn generalized solutions or policies that would solve multiple problem instances with different object names and/or quantities. This paper presents an approach for learning \emph{Generalized Policy Automata} (GPA): non-deterministic partial policies that can be used to catalyze the solution process. GPAs are learned using relational, feature-based abstractions, which makes them applicable to broad classes of related problems with different object names and quantities. Theoretical analysis of this approach shows that it guarantees completeness and hierarchical optimality. Empirical analysis shows that this approach effectively learns broadly applicable policy knowledge in a few-shot fashion and significantly outperforms state-of-the-art SSP solvers on test problems whose object counts are far greater than those used during training.
Accept
While the paper's approach is applicable only to MDPs with a known relational model expressible in the PPDDL language, the reviewers unanimously find it to be an interesting, novel advance for this class of settings.
train
[ "T5wa_rT-49o", "qQ5bx6_xjuE", "L-zEgRSD4W", "For9Mwxds2E", "2ezHCXN52ty", "l-rZEvNI4Yz", "SyK856NNqJB", "ft0Xbl-NBEc", "1IirHtQVpbX", "psYFTqPcDmi", "xF1UszT8Fzk", "9xSslQ6Uez", "33IbuLG057U", "3UrMKxTBRcZ" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review. We appreciate the comments pertaining to improving the clarity of our paper and are happy that the updated appendix and discussion helped in addressing your concerns.", " Thank you for your feedback and constructive comments. We are glad that the illustrations helped improve the clari...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 3 ]
[ "For9Mwxds2E", "l-rZEvNI4Yz", "2ezHCXN52ty", "1IirHtQVpbX", "psYFTqPcDmi", "SyK856NNqJB", "3UrMKxTBRcZ", "9xSslQ6Uez", "33IbuLG057U", "xF1UszT8Fzk", "nips_2022_ANkIj-WI2XA", "nips_2022_ANkIj-WI2XA", "nips_2022_ANkIj-WI2XA", "nips_2022_ANkIj-WI2XA" ]
nips_2022_GbpEszOdiTV
A Combinatorial Perspective on the Optimization of Shallow ReLU Networks
The NP-hard problem of optimizing a shallow ReLU network can be characterized as a combinatorial search over each training example’s activation pattern followed by a constrained convex problem given a fixed set of activation patterns. We explore the implications of this combinatorial aspect of ReLU optimization in this work. We show that it can be naturally modeled via a geometric and combinatoric object known as a zonotope with its vertex set isomorphic to the set of feasible activation patterns. This assists in analysis and provides a foundation for further research. We demonstrate its usefulness when we explore the sensitivity of the optimal loss to perturbations of the training data. Later we discuss methods of zonotope vertex selection and its relevance to optimization. Overparameterization assists in training by making a randomly chosen vertex more likely to contain a good solution. We then introduce a novel polynomial-time vertex selection procedure that provably picks a vertex containing the global optimum using only double the minimum number of parameters required to fit the data. We further introduce a local greedy search heuristic over zonotope vertices and demonstrate that it outperforms gradient descent on underparameterized problems.
Accept
Thank you for your submission to NeurIPS. This work presents a combinatorial view of training two-layer ReLU networks. The reviewers and I, after the author response, are in agreement that there are interesting and strong contributions in this work. Namely, it is shown that global optimization of a two-layer ReLU network can be characterized as a combinatorial search over the vertices of a zonotope. This is used to show that the global optimum of two-layer ReLU network is discontinuous with respect to the training data. Four knowledgeable reviewers recommend accept/borderline accept, and I concur, in light of the contributions made. The reviewers also noted some minor weaknesses: In particular, the reviewers noted that (1) results only apply to two-layer ReLU networks (2) proposed greedy local search heuristic is too expensive to be applied on modern datasets. Please take into account the updated reviewer comments, including suggested additional references, when preparing the final version to accommodate the requested changes.
train
[ "d7CyfVAl_6", "DGQ3vRXEOB", "57yfzizCfPSf", "TSkJ0N7N3LW", "tbgRICcceAS", "yS3ZPfDpRN9", "LNhuJ_AK_eW", "Sgp7f5COUXh", "oQ_M8DLZkJK", "H5VZevpZDLn", "QKztkYbfI_n" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. I retain my original score (6: Weak Accept).", " The authors addressed my concerns. Adding the mentioned examples will improve the clarity of the paper. I also agree with Reviewer 6i9S \n that stating the theoretical results clearly as theorems would further improve clarity. The additi...
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "LNhuJ_AK_eW", "tbgRICcceAS", "TSkJ0N7N3LW", "H5VZevpZDLn", "QKztkYbfI_n", "oQ_M8DLZkJK", "Sgp7f5COUXh", "nips_2022_GbpEszOdiTV", "nips_2022_GbpEszOdiTV", "nips_2022_GbpEszOdiTV", "nips_2022_GbpEszOdiTV" ]
nips_2022_PPlAVQDeL6
Physics-Informed Implicit Representations of Equilibrium Network Flows
Flow networks are ubiquitous in natural and engineered systems, and in order to understand and manage these networks, one must quantify the flow of commodities across their edges. This paper considers the estimation problem of predicting unlabeled edge flows from nodal supply and demand. We propose an implicit neural network layer that incorporates two fundamental physical laws: conservation of mass, and the existence of a constitutive relationship between edge flows and nodal states (e.g., Ohm's law). Computing the edge flows from these two laws is a nonlinear inverse problem, which our layer solves efficiently with a specialized contraction mapping. Using implicit differentiation to compute the solution's gradients, our model is able to learn the constitutive relationship within a semi-supervised framework. We demonstrate that our approach can accurately predict edge flows in several experiments on AC power networks and water distribution systems.
Accept
This paper proposes the Implicit Flow Network (IFN) to estimate flow over network edges under certain physical constraints (a conservation law and a constitutive relationship such as Ohm's law). Most reviewers agreed that the proposed idea of augmenting IFNs with physical constraints is novel and interesting, and that the experimental validation is sufficiently convincing.
val
[ "S8NkQpAKPyK", "X0YODasBdw", "J11vzPPU9tb", "jFa1-hTJJr1", "-XJba-SkLc1", "vOXTG6VAeK4", "tC_eqVIpTfG", "UwRQZxZKpqF", "51K_76bdFg", "vat1UIJQ9Aq", "_da2Fcnv9_k", "D9mH-dzla5R", "9DGmIHQvZuc" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarification!\n\nRegarding process uncertainty, we would like to point out how we only assume the existence of a constitutive relation as in Eq. (2), without characterizing it explicitly. Since no additional constraint is enforced, IFN does not try to reconstruct flows according to a predetermi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4, 3 ]
[ "X0YODasBdw", "tC_eqVIpTfG", "-XJba-SkLc1", "9DGmIHQvZuc", "D9mH-dzla5R", "_da2Fcnv9_k", "vat1UIJQ9Aq", "vat1UIJQ9Aq", "nips_2022_PPlAVQDeL6", "nips_2022_PPlAVQDeL6", "nips_2022_PPlAVQDeL6", "nips_2022_PPlAVQDeL6", "nips_2022_PPlAVQDeL6" ]
nips_2022_eHePKMLuNmy
Revisiting Optimal Convergence Rate for Smooth and Non-convex Stochastic Decentralized Optimization
While numerous effective decentralized algorithms have been proposed with theoretical guarantees and empirical successes, the performance limits in decentralized optimization, especially the influence of network topology and its associated weight matrix on the optimal convergence rate, have not been fully understood. While Lu and Sa have recently provided an optimal rate for non-convex stochastic decentralized optimization using weight matrices associated with linear graphs, the optimal rate with general weight matrices remains unclear. This paper revisits non-convex stochastic decentralized optimization and establishes an optimal convergence rate with general weight matrices. In addition, we also establish the first optimal rate when non-convex loss functions further satisfy the Polyak-Lojasiewicz (PL) condition. Following existing lines of analysis in literature cannot achieve these results. Instead, we leverage the Ring-Lattice graph to admit general weight matrices while maintaining the optimal relation between the graph diameter and weight matrix connectivity. Lastly, we develop a new decentralized algorithm to attain the above two optimal rates up to logarithm factors.
Accept
This paper presents some new results on near-optimal algorithms for distributed optimization, nearly matching lower bounds. Most of the reviewers are positive about the contributions of this work. However, one issue that came up is the assumption of bounded gradient dissimilarity, which essentially constitutes a gap between the upper and lower bounds. While I recommend accepting this paper, I believe this gap should be discussed more prominently in the abstract and introduction.
train
[ "ct3NQkKsyqI", "RqUjIjHLwR7", "kC7HK9mBMtR", "nRRllVcg79H", "AXc_IC0JC3", "FUcUUpZllD", "JRRtoz60JkP", "KacqG2svdM", "eV06y1UY8WsO", "30URzu2kDIY", "My579Q4nyM0", "rM8PwJsRStr", "psXOyzN8BVp", "TuoH5tq9RT", "Uj3bY9U_26o", "YhxrLcle0Fu", "ZFBtSy4XouZF", "c08_tcObUJU", "j32MifwxW9c...
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " \\\nDear Reviewer o2Rv, \n\n\\\nThanks very much for all the valuable comments, helpful discussions, and useful suggestions during the rebuttal, with which our paper has been significantly improved. We really appreciate your time and efforts! \n\n\\\nBest,\\\nAuthors", " Thanks to the authors for the new versio...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 5 ]
[ "RqUjIjHLwR7", "kC7HK9mBMtR", "FUcUUpZllD", "FUcUUpZllD", "TuoH5tq9RT", "psXOyzN8BVp", "KacqG2svdM", "eV06y1UY8WsO", "Uj3bY9U_26o", "c08_tcObUJU", "ehW3jgtLmsq", "ehW3jgtLmsq", "WSzVRiWkmSz", "WSzVRiWkmSz", "ZFBtSy4XouZF", "j32MifwxW9c", "j32MifwxW9c", "nips_2022_eHePKMLuNmy", "n...
nips_2022_MXX18i8puEk
Nest Your Adaptive Algorithm for Parameter-Agnostic Nonconvex Minimax Optimization
Adaptive algorithms like AdaGrad and AMSGrad are successful in nonconvex optimization owing to their parameter-agnostic ability – requiring no a priori knowledge about problem-specific parameters nor tuning of learning rates. However, when it comes to nonconvex minimax optimization, direct extensions of such adaptive optimizers without proper time-scale separation may fail to work in practice. We provide such an example proving that the simple combination of Gradient Descent Ascent (GDA) with adaptive stepsizes can diverge if the primal-dual stepsize ratio is not carefully chosen; hence, a fortiori, such adaptive extensions are not parameter-agnostic. To address the issue, we formally introduce a Nested Adaptive framework, NeAda for short, that carries an inner loop for adaptively maximizing the dual variable with controllable stopping criteria and an outer loop for adaptively minimizing the primal variable. Such mechanism can be equipped with off-the-shelf adaptive optimizers and automatically balance the progress in the primal and dual variables. Theoretically, for nonconvex-strongly-concave minimax problems, we show that NeAda with AdaGrad stepsizes can achieve the near-optimal $\widetilde{O}(\epsilon^{-2})$ and $\widetilde{O}(\epsilon^{-4})$ gradient complexities respectively in the deterministic and stochastic settings, without prior information on the problem's smoothness and strong concavity parameters. To the best of our knowledge, this is the first algorithm that simultaneously achieves near-optimal convergence rates and parameter-agnostic adaptation in the nonconvex minimax setting. Numerically, we further illustrate the robustness of the NeAda family with experiments on simple test functions and a real-world application.
Accept
During the rebuttal it was agreed that this paper provides good explanations of the machinery that guarantees convergence for nonconvex-strongly-concave minimax problems. The nesting technique and the AdaGrad stepsizes are nicely composed, and the proof technique is non-trivial. I therefore recommend acceptance.
train
[ "AeSaJEfjw-r", "HOunTdM84Lq", "g9Mpub7FJoI", "0g_lsZO2SU", "XvUVzzuPDJ", "BtoHHFydBFl", "tgKsX9OgwCd", "0TJkY7b9Xt", "tFjTtjR8Psw", "F1F5IYMqOuu", "PwkhCkMrkK0", "llA0SPzCEwM", "pqpiKZ5V8gK", "rBBpGWXJcTX", "0n-Qui2cxK", "H2asX4Wq_sG", "NAfeFuw09dj", "m0NzWmQwfxe" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I upgrade the score accordingly.", " **Other related works of adaptive methods do not seem to have such a component.**\n\nTwo works mentioned in our last response have similar components (in the theorems we cited).\n\n* Theorem 7 in [1]: $\\\\left(\\\\frac{1}{\\\\textcolor{red}{\\\\eta}} \\\\\\|x^*\\\\\\|^2_2 +...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2 ]
[ "HOunTdM84Lq", "g9Mpub7FJoI", "BtoHHFydBFl", "XvUVzzuPDJ", "tFjTtjR8Psw", "tgKsX9OgwCd", "pqpiKZ5V8gK", "m0NzWmQwfxe", "H2asX4Wq_sG", "PwkhCkMrkK0", "H2asX4Wq_sG", "pqpiKZ5V8gK", "m0NzWmQwfxe", "NAfeFuw09dj", "nips_2022_MXX18i8puEk", "nips_2022_MXX18i8puEk", "nips_2022_MXX18i8puEk", ...
nips_2022_4n1PS9WvdYv
On Deep Generative Models for Approximation and Estimation of Distributions on Manifolds
Deep generative models have experienced great empirical successes in distribution learning. Many existing experiments have demonstrated that deep generative networks can efficiently generate high-dimensional complex data from a low-dimensional easy-to-sample distribution. However, this phenomenon can not be justified by existing theories. The widely held manifold hypothesis speculates that real-world data sets, such as natural images and signals, exhibit low-dimensional geometric structures. In this paper, we take such low-dimensional data structures into consideration by assuming that data distributions are supported on a low-dimensional manifold. We prove approximation and estimation theories of deep generative networks for estimating distributions on a low-dimensional manifold under the Wasserstein-1 loss. We show that the Wasserstein-1 loss converges to zero at a fast rate depending on the intrinsic dimension instead of the ambient data dimension. Our theory leverages the low-dimensional geometric structures in data sets and justifies the practical power of deep generative models. We require no smoothness assumptions on the data distribution which is desirable in practice.
Accept
This paper presents a theory of generative neural networks under the assumption that the real-world data studied lies on a low-dimensional manifold. In particular, the authors prove that feedforward ReLU networks can approximate probability distributions that are supported on a low-dimensional manifold embedded in a higher-dimensional Euclidean space, a common assumption in machine learning. The clear consensus among reviewers is that this constitutes a valuable (and mathematically impressive) contribution to the theoretical literature on manifold fitting with GAN models, giving a strong foundation to the well-known fact that networks such as GANs can approximate real data extremely well. The authors have also clearly answered the questions raised by the various reviewers.
train
[ "1UjrqiPHHLd", "K2Lj0UGJVeO", "H6jNiA_zkaS", "cOBoM76bjDr", "XXtjDvs5DKa", "gXitS6ZgNq", "hCL8Kp2o_Gk", "PytkezdPhfF", "HMOtT4okiOa", "SCwLoGRMgBL", "mzfD0GyXqRq" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reply. I do agree that the theoretical results stand on their own and do not exactly require experimental justification, but my thinking was simply that providing some basic experiments (such as the one from Weakness #1) could help ground some of the theory and provide more motivation. At any rate...
[ -1, -1, -1, -1, -1, -1, 6, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3, 1, 3 ]
[ "XXtjDvs5DKa", "mzfD0GyXqRq", "SCwLoGRMgBL", "HMOtT4okiOa", "PytkezdPhfF", "hCL8Kp2o_Gk", "nips_2022_4n1PS9WvdYv", "nips_2022_4n1PS9WvdYv", "nips_2022_4n1PS9WvdYv", "nips_2022_4n1PS9WvdYv", "nips_2022_4n1PS9WvdYv" ]
nips_2022_sjaQ2bHpELV
Online Reinforcement Learning for Mixed Policy Scopes
Combination therapy refers to the use of multiple treatments -- such as surgery, medication, and behavioral therapy -- to cure a single disease, and has become a cornerstone for treating various conditions including cancer, HIV, and depression. All possible combinations of treatments lead to a collection of treatment regimens (i.e., policies) with mixed scopes, or what physicians could observe and which actions they should take depending on the context. In this paper, we investigate the online reinforcement learning setting for optimizing the policy space with mixed scopes. In particular, we develop novel online algorithms that achieve sublinear regret compared to an optimal agent deployed in the environment. The regret bound has a dependency on the maximal cardinality of the induced state-action space associated with mixed scopes. We further introduce a canonical representation for an arbitrary subset of interventional distributions given a causal diagram, which leads to a non-trivial, minimal representation of the model parameters.
Accept
The reviewers appreciated the direction of the work, but they all had rather low confidence despite each one being a foremost expert in RL, which may indicate a potential mismatch of interest with NeurIPS as a venue. They also found the potential applications limited, especially given the great amount of formalization. Nonetheless, they judged the developments on this new problem insightful, with the potential to inspire further work in the area, and found no reason to reject. In view of this, I recommend acceptance, as I think the potential benefits outweigh the concerns about fit and practical relevance, and there are no concerns regarding validity.
test
[ "IL38uJhD9-8", "RgF1_U5TH5y", "SHQJB8z_pTZ", "UBarReF57JP", "5cN5rrIH-I", "HBIvNttmGvg0", "PtLPOcIaO8z", "zhJjy0g80im", "chyi235UtBb", "I5R8HZjEWi", "YU2iyaLCiVp", "ycgYSr9r4Y_" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewer’s response and will update the manuscript accordingly to highlight the main contributions of this paper. Here we would like to respectfully provide further contexts for the impact and significance of this work. First, we agree that there are many important challenges in RL, and one of t...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2, 2 ]
[ "RgF1_U5TH5y", "PtLPOcIaO8z", "ycgYSr9r4Y_", "YU2iyaLCiVp", "YU2iyaLCiVp", "I5R8HZjEWi", "I5R8HZjEWi", "chyi235UtBb", "nips_2022_sjaQ2bHpELV", "nips_2022_sjaQ2bHpELV", "nips_2022_sjaQ2bHpELV", "nips_2022_sjaQ2bHpELV" ]
nips_2022_VVsNTPK1FBp
Diversified Recommendations for Agents with Adaptive Preferences
When an Agent visits a platform recommending a menu of content to select from, their choice of item depends not only on immutable preferences, but also on their prior engagements with the platform. The Recommender's primary objective is typically to encourage content consumption which optimizes some reward, such as ad revenue, but they often additionally aim to ensure that a sufficiently wide variety of content is consumed by the Agent over time. We formalize this problem as an adversarial bandit task. At each step, the Recommender presents a menu of $k$ (out of $n$) items to the Agent, who selects one item in the menu according to their unknown {\it preference model}, which maps their history of past items to relative selection probabilities. The Recommender then observes the Agent's selected item and receives bandit feedback of the item's (adversarial) reward. In addition to optimizing reward from the selected items at each step, the Recommender must also ensure that the total distribution of chosen items has sufficiently high entropy. We define a class of preference models which are {\it locally learnable}, i.e.\ behavior over the entire domain can be estimated by only observing behavior in a small region; this includes models representable by bounded-degree polynomials as well as functions with a sparse Fourier basis. For this class, we give an algorithm for the Recommender which obtains $\tilde{O}(T^{3/4})$ regret against all item distributions satisfying two conditions: they are sufficiently diversified, and they are {\it instantaneously realizable} at any history by some distribution over menus. We show that these conditions are closely connected: all sufficiently high-entropy distributions are instantaneously realizable at any history of selected items. We also give a set of negative results justifying our assumptions, in the form of a runtime lower bound for non-local learning and linear regret lower bounds for alternate benchmarks.
Accept
This paper studies a sequential recommendation problem where user preferences change over time based on the items selected by the user. The authors show that no sublinear regret algorithm exists for this setting in general. Under a diversity constraint, they derive an algorithm with $O(T^{3 / 4})$ regret. The original ratings of the paper were 7, 7, 5, and 5, and they did not change after the rebuttal. The reviewers generally like the paper because it studies both adaptive users and diverse recommendations, while most other works try to avoid these topics. This is the strongest point of the paper, and my recommendation for acceptance is based on that. On the other hand, the paper is overly theoretical and lacks experiments. Therefore, it is unlikely to appeal to a general recommender systems audience.
test
[ "Rfnz7Yx7uLX", "PIefUC280-i", "L2hlFLixEx0", "wrt_0ZVpLgk", "hq23O2-wZI", "tHHNpeAqkY9", "8BPrUNLO4e0", "DGq-SAz3Zy0", "E2X15V4N2V", "rKB3NBYIp8t" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rebuttal. I have read the rebuttal and my opinion has not changed. ", " \n### Adversarial Rewards:\nFor no-regret learning, the term \"adversarial rewards\" is in contrast to fixed or stochastic losses, meaning that the loss functions can vary arbitrarily over time (as if chosen by an \"advers...
[ -1, -1, -1, -1, -1, -1, 5, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 1, 4, 1 ]
[ "wrt_0ZVpLgk", "L2hlFLixEx0", "rKB3NBYIp8t", "E2X15V4N2V", "DGq-SAz3Zy0", "8BPrUNLO4e0", "nips_2022_VVsNTPK1FBp", "nips_2022_VVsNTPK1FBp", "nips_2022_VVsNTPK1FBp", "nips_2022_VVsNTPK1FBp" ]
nips_2022_df1g_KeEjQ
VectorAdam for Rotation Equivariant Geometry Optimization
The Adam optimization algorithm has proven remarkably effective for optimization problems across machine learning and even traditional tasks in geometry processing. At the same time, the development of equivariant methods, which preserve their output under the action of rotation or some other transformation, has proven to be important for geometry problems across these domains. In this work, we observe that Adam — when treated as a function that maps initial conditions to optimized results — is not rotation equivariant for vector-valued parameters due to per-coordinate moment updates. This leads to significant artifacts and biases in practice. We propose to resolve this deficiency with VectorAdam, a simple modification which makes Adam rotation-equivariant by accounting for the vector structure of optimization variables. We demonstrate this approach on problems in machine learning and traditional geometric optimization, showing that equivariant VectorAdam resolves the artifacts and biases of traditional Adam when applied to vector-valued data, with equivalent or even improved rates of convergence.
Accept
Ratings: 5/8/4/7. Confidence: 4/3/4/4. Discussion among reviewers: Yes. Summary: this paper introduces a variant of Adam where, instead of keeping EMAs of the squared individual gradients, the algorithm keeps an EMA of the squared L2 norm of each gradient vector. This EMA of the squared L2 norm is then used as a normalizer, ensuring that the L2 norms of the weight updates are normalized. This has the advantage that the algorithm becomes equivariant to rotations in parameter space, which is crucial for certain types of problems. The reviewers noted the clear presentation and the encouraging results. There's one reviewer with a reject rating, whose main reason for rejection is that the proposed algorithm itself is simple, in the sense that it's only a small change from baseline Adam. Other reviewers disagree that this is a reason for rejection, and I agree with the other reviewers here. Simplicity isn't bad if the method is novel, the motivation is clear, and the empirical results are strong. Recommendation: I recommend accepting this paper.
train
[ "yu5N5GHjPip", "cbYGazhg4hJ", "tiRY2CAczvo", "situa_3RJGY", "YjPCFxMB8GnB", "2OObawKmt0H", "kB7TaGMIbUa", "anSb9Ue8epQ", "XnBtli-W7a", "t29RcjwIxEM", "nqAn7qUS960", "fNWy351CIcn", "HVnbQiFOlB2", "zj2EH1o_5v8", "l4XUlIIoG8U", "W6XFxQzkgZn" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their reply. \n\nI understand that simplicity is often a strength, and unnecessary complexity is not a desired feature. However, I am still of the opinion that the novelty of the proposed methodology is too limited, as well as its filed of applicability. Not the topic in itself, so previou...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "kB7TaGMIbUa", "anSb9Ue8epQ", "situa_3RJGY", "2OObawKmt0H", "XnBtli-W7a", "t29RcjwIxEM", "W6XFxQzkgZn", "l4XUlIIoG8U", "zj2EH1o_5v8", "HVnbQiFOlB2", "fNWy351CIcn", "nips_2022_df1g_KeEjQ", "nips_2022_df1g_KeEjQ", "nips_2022_df1g_KeEjQ", "nips_2022_df1g_KeEjQ", "nips_2022_df1g_KeEjQ" ]
nips_2022_WaKGmSI2-8g
Scalable design of Error-Correcting Output Codes using Discrete Optimization with Graph Coloring
We study the problem of scalable design of Error-Correcting Output Codes (ECOC) for multi-class classification. Prior works on ECOC-based classifiers are limited to codebooks with a small number of rows (classes) or columns, and do not provide optimality guarantees for the codebook design problem. We address these limitations by developing a codebook design approach based on a Mixed-Integer Quadratically Constrained Program (MIQCP). This discrete formulation is naturally suited for maximizing the error-correction capability of ECOC-based classifiers and incorporates various design criteria in a flexible manner. Our solution approach is tractable in that it incrementally increases the codebook size by adding columns to maximize the gain in error-correcting capability. In particular, we show that the maximal gain in error-correction can be upper bounded by solving a graph-coloring problem. As a result, we can efficiently generate near-optimal codebooks for very large problem instances. These codebooks provide competitive multi-class classification performance on small-class datasets such as MNIST and CIFAR10. Moreover, by leveraging transfer-learned binary classifiers, we achieve better classification performance over transfer-learned multi-class CNNs on large-class datasets such as CIFAR100, Caltech-101/256. Our results highlight the advantages of simple and modular ECOC-based classifiers in improving classification accuracy without the risk of overfitting.
Accept
The paper furthers the understanding of codebook design in the context of using error-correcting output codes for multi-class problems. As opposed to continuous relaxation, the state of the art, it advocates a graph-coloring approach which ultimately yields near-optimal codebooks. To the best of my understanding, the methodology will be hard to apply for a large number of classes. Though the experimental results only show marginal improvement, the method does improve upon the state of the art.
val
[ "EyEHvheWCT", "SpD_YK1xL9D", "77pJTCA0ifY", "QWOQ5BNYMHP", "Ur443He7RUU", "AxolI78vIL", "0OMNt1oAjQ4", "XIpLjiVZYQ", "2YCUVHYjlF0", "QhCStBTOpcn", "aqY6uBlGZRm", "UX5mhhTa_0a", "yXXeGeg441" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As the author-reviewer discussion period comes to an end, we sincerely thank all the reviewers for their detailed review and follow-up discussions. Thank You.", " We thank the reviewer for their subsequent follow-up comments. In the final manuscript we will carefully include all the comments and revisions which...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "nips_2022_WaKGmSI2-8g", "QWOQ5BNYMHP", "UX5mhhTa_0a", "UX5mhhTa_0a", "UX5mhhTa_0a", "UX5mhhTa_0a", "UX5mhhTa_0a", "aqY6uBlGZRm", "yXXeGeg441", "yXXeGeg441", "nips_2022_WaKGmSI2-8g", "nips_2022_WaKGmSI2-8g", "nips_2022_WaKGmSI2-8g" ]
nips_2022_lhLEGeBC-ru
Polynomial time guarantees for the Burer-Monteiro method
The Burer-Monteiro method is one of the most widely used techniques for solving large-scale semidefinite programs (SDP). The basic idea is to solve a nonconvex program in $Y$, where $Y$ is an $n \times p$ matrix such that $X = Y Y^T$. We show that this method can solve SDPs in polynomial time in a smoothed analysis setting. More precisely, we consider an SDP whose domain satisfies some compactness and smoothness assumptions, and slightly perturb the cost matrix and the constraints. We show that if $p \gtrsim \sqrt{2(1{+}\eta)m}$, where $m$ is the number of constraints and $\eta>0$ is any fixed constant, then the Burer-Monteiro method can solve SDPs to any desired accuracy in polynomial time, in the setting of smoothed analysis. The bound on $p$ approaches the celebrated Barvinok-Pataki bound in the limit as $\eta$ goes to zero, beneath which the nonconvex program can be suboptimal. Our main technical contribution, which is key for our tight bound on $p$, is to connect spurious approximately critical points of the nonconvex program to tubular neighborhoods of certain algebraic varieties, and then estimate the volume of such tubes.
Accept
This paper gives polynomial time smoothed analysis guarantees for the Burer-Monteiro method. The result is new and interesting. Prior works fall short of this goal for various reasons. Boumal et al., which started this line of work, gives non-robust generic guarantees, while Bhojanapalli et al. look at a penalized version that's different (and suffers from ill-conditioning). While the basic approach is to take the arguments in Boumal et al. and make them robust in a natural way, there is effort in formalizing this using volumes of certain tubes and machinery from algebraic geometry. This paper is a solid contribution to the literature on the Burer-Monteiro method.
train
[ "ITMwl_q8sH", "koMngSwFUOb", "Ot8vxg589XM", "UxVYVeWle5v", "SIjTq0i5qD", "R4vQu4Lww9R", "IAtAZD1Kb4_", "8MAuuTEeuUK3", "ipEb8v3Fcde", "WeLGejCRT6I", "4dyPREVxTvY", "W_QZIJSShII" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Please let us know if you still have any doubts about our paper, so we can clarify them before the end of the discussion period.", " Please let us know if you still have any doubts about our paper, so we can clarify them before the end of the discussion period.", " Thanks again for your feedback on the paper....
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2, 4 ]
[ "IAtAZD1Kb4_", "SIjTq0i5qD", "UxVYVeWle5v", "8MAuuTEeuUK3", "ipEb8v3Fcde", "WeLGejCRT6I", "4dyPREVxTvY", "W_QZIJSShII", "nips_2022_lhLEGeBC-ru", "nips_2022_lhLEGeBC-ru", "nips_2022_lhLEGeBC-ru", "nips_2022_lhLEGeBC-ru" ]
nips_2022_PM5gVmG2Jj
Conformal Prediction with Temporal Quantile Adjustments
We develop Temporal Quantile Adjustment (TQA), a general method to construct efficient and valid prediction intervals (PIs) for regression on cross-sectional time series data. Such data is common in many domains, including econometrics and healthcare. A canonical example in healthcare is predicting patient outcomes using physiological time-series data, where a population of patients composes a cross-section. Reliable PI estimators in this setting must address two distinct notions of coverage: cross-sectional coverage across a cross-sectional slice, and longitudinal coverage along the temporal dimension for each time series. Recent works have explored adapting Conformal Prediction (CP) to obtain PIs in the time series context. However, none handles both notions of coverage simultaneously. CP methods typically query a pre-specified quantile from the distribution of nonconformity scores on a calibration set. TQA adjusts the quantile to query in CP at each time $t$, accounting for both cross-sectional and longitudinal coverage in a theoretically-grounded manner. The post-hoc nature of TQA facilitates its use as a general wrapper around any time series regression model. We validate TQA's performance through extensive experimentation: TQA generally obtains efficient PIs and improves longitudinal coverage while preserving cross-sectional coverage.
Accept
In this paper, the authors propose the temporal quantile adjustments (TQA) that can improve longitudinal coverage and preserve cross-sectional coverage for the prediction interval built for regression on cross-sectional time series data. While previous works focus on either of the coverage guarantees, a major contribution of this paper is to achieve both. The research questions addressed in this paper are of critical importance for practitioners in relevant areas. The paper is well written. The presented approach has solid empirical support and decent theoretical guarantees. Including a simulation study to demonstrate the validity of the proposed approach under a specific (and controlled) setup will further improve the paper.
train
[ "MWEc3JRJnnf", "xnj2usBHbK", "emno3RcUInn", "q-nrA58Fh63", "YSpXXPQI_Bh", "RArIxbXL0v1", "r1P-y9zi3fn", "PaIDnePErLu" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the positive and thorough review. \nWe would like to clarify the questions raised below.\n\n> (Eq.2): ... the distribution on the RHS is $Y\\_{N+1, t} | S\\_{N+1,:t-1}$, shouldn't it be $S\\_{N+1, t} | S\\_{N+1,:t-1}$ instead?\n\nHere, we assumed that $\\hat{C}\\_{N+1,t}$ typically takes $X\\_{N+1,t...
[ -1, -1, -1, -1, 8, 5, 6, 7 ]
[ -1, -1, -1, -1, 5, 4, 3, 1 ]
[ "PaIDnePErLu", "r1P-y9zi3fn", "RArIxbXL0v1", "YSpXXPQI_Bh", "nips_2022_PM5gVmG2Jj", "nips_2022_PM5gVmG2Jj", "nips_2022_PM5gVmG2Jj", "nips_2022_PM5gVmG2Jj" ]
nips_2022_giOus054WOy
Neurosymbolic Deep Generative Models for Sequence Data with Relational Constraints
There has been significant recent progress designing deep generative models that generate realistic sequence data such as text or music. Nevertheless, it remains difficult to incorporate high-level structure to guide the generative process, and many such models perform well on local coherence, but less so on global coherence. We propose a novel approach for incorporating global structure in the form of relational constraints between different subcomponents of an example (e.g., lines of a poem or measures of music). Our generative model has two parts: (i) one model to generate a realistic set of relational constraints, and (ii) a second model to generate realistic data satisfying these constraints. For model (i), we propose a constrained optimization algorithm that infers the relational constraints present in the training data, and then learn a generative model based on the resulting constraint data. In our experiments, we show that our approach significantly improves over state-of-the-art in terms of capturing high-level structure in the data, while performing comparably or better in terms of low-level structure. We also show that using constrained optimization for part (ii) as well leads to increased controllability with little decrease in quality compared to pure learning-based models.
Accept
This paper proposes a deep generative model for sequence data by extracting relational constraints from data. The model is an interesting neurosymbolic deep model: it is claimed to capture global coherence better than typical sequence neural models, and users can, to some extent, control the generation by specifying the constraints. Experiments, including a human study, confirm its superiority in generation over standard neural models such as MusicAutobot and GPT-2. All reviewers enjoy the novel neurosymbolic approach and appreciate the user feedback.
train
[ "GLQoEiH0ASd", "8JlzvndhorM", "yA1pYTqRFZ_", "cXxfpwoJns", "Zz8bnurb9xB", "PXnevX4hOVE", "BiAWksgUtG", "4FitfqpZidv", "awRkzcnU2I", "2bCEd1beouv", "jpyJVEIFatg", "w6h3gN-TJBi" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Specifically, I found the description of Z3 helpful for my understanding.", " Thank you for sharing the reference, we will take a look and do our best to incorporate it.", " Thank you for the response which addressed most of my concerns. The update of the paper is also appreciated. I still have one minor comm...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "PXnevX4hOVE", "yA1pYTqRFZ_", "4FitfqpZidv", "nips_2022_giOus054WOy", "w6h3gN-TJBi", "jpyJVEIFatg", "2bCEd1beouv", "awRkzcnU2I", "nips_2022_giOus054WOy", "nips_2022_giOus054WOy", "nips_2022_giOus054WOy", "nips_2022_giOus054WOy" ]
nips_2022_kUOm0Fdtvh
AdaFocal: Calibration-aware Adaptive Focal Loss
Much recent work has been devoted to the problem of ensuring that a neural network's confidence scores match the true probability of being correct, i.e. the calibration problem. Of note, it was found that training with focal loss leads to better calibration than cross-entropy while achieving a similar level of accuracy \cite{mukhoti2020}. This success stems from focal loss regularizing the entropy of the model's prediction (controlled by the parameter $\gamma$), thereby reining in the model's overconfidence. Further improvement is expected if $\gamma$ is selected independently for each training sample (Sample-Dependent Focal Loss (FLSD-53) \cite{mukhoti2020}). However, FLSD-53 is based on heuristics and does not generalize well. In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration properties of focal (and inverse-focal) loss and adaptively modifies $\gamma_t$ for different groups of samples based on $\gamma_{t-1}$ from the previous step and the knowledge of the model's under/over-confidence on the validation set. We evaluate AdaFocal on various image recognition and one NLP task, covering a wide variety of network architectures, to confirm the improvement in calibration while achieving similar levels of accuracy. Additionally, we show that models trained with AdaFocal achieve a significant boost in out-of-distribution detection.
Accept
Prior work has shown that training with focal loss improves the uncertainty calibration of models. In this work, the authors show that an adaptive choice of the hyper-parameter used in the focal loss further improves calibration. Extensive empirical results in the paper suggest that the proposed method, AdaFocal, improves over the baselines in several domains/datasets and architectures in terms of calibration error and OOD detection. The paper is well-written and easy to follow. The proposed method is novel and its relationship with prior work is properly discussed. Furthermore, empirical results show that this method outperforms existing baselines. Therefore, I am recommending acceptance. Given the lack of any theoretical insights or motivation for the proposed method, it would certainly help to improve the empirical results, particularly by adding results for a few other architectures trained on ImageNet. Furthermore, please add discussions and ablations on the sensitivity of hyper-parameter choices for the camera-ready version.
train
[ "3k2A2L0KbDD", "_MSju7QrMS", "y76Ao3IsF8k", "6kp3iEOQhJQ", "L9r3eefZHyr", "M9_9zd3NwcU", "ZpDXNK6_cbn", "AXYKmnIHuE", "lMYceZjwEly", "Y_TicqVEkMg", "WDxZ49Mau_b", "q1Eadx62s3W", "TZmt5njKTRy", "OqP1OfpkrQK", "GIeneJt5_gW", "0i4HmZdNFW" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I've read the author response and the other reviews. I still have some concerns regarding the empirical evaluation. There is still only one model being tested on ImageNet, though I do appreciate the addition of the BERT model for Newsgroup 20 and the additional ResNet for TinyImageNet. I am also still concerned a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "AXYKmnIHuE", "y76Ao3IsF8k", "Y_TicqVEkMg", "M9_9zd3NwcU", "nips_2022_kUOm0Fdtvh", "0i4HmZdNFW", "0i4HmZdNFW", "lMYceZjwEly", "TZmt5njKTRy", "GIeneJt5_gW", "q1Eadx62s3W", "OqP1OfpkrQK", "nips_2022_kUOm0Fdtvh", "nips_2022_kUOm0Fdtvh", "nips_2022_kUOm0Fdtvh", "nips_2022_kUOm0Fdtvh" ]
nips_2022_bDyLgfvZ0qJ
SIXO: Smoothing Inference with Twisted Objectives
Sequential Monte Carlo (SMC) is an inference algorithm for state space models that approximates the posterior by sampling from a sequence of target distributions. The target distributions are often chosen to be the filtering distributions, but these ignore information from future observations, leading to practical and theoretical limitations in inference and model learning. We introduce SIXO, a method that instead learns target distributions that approximate the smoothing distributions, incorporating information from all observations. The key idea is to use density ratio estimation to fit functions that warp the filtering distributions into the smoothing distributions. We then use SMC with these learned targets to define a variational objective for model and proposal learning. SIXO yields provably tighter log marginal lower bounds and offers more accurate posterior inferences and parameter estimates in a variety of domains.
Accept
This paper proposes a new method to adjust the proposal used by sequential Monte Carlo. Existing methods struggle for this problem since the resampling step poses difficulties for reparameterization-based methods. This paper proposes a "twisting" method to learn a density ratio by training a classifier to distinguish between two distributions. Reviewers agreed the method was novel, relevant, and sufficiently supported by experiments.
test
[ "Nk43pV4MZq", "01qOyiuOqx", "s5WHDhhxmXz", "Nkg-PVPReFC", "CLrRBkOw8F-", "Wyye1p58pl-", "routegGfM6L", "Y8d2tv78tyE", "e3MWrO6SwYD", "K2pF0otX0MS", "KMXh20TCZcl" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Yes, that is our understanding. Thank you again for your review.", " Thank you for the thorough response! Your answers are very clear, and I appreciate the new experiments, which demonstrate that SIXO was tested on inference tasks that do require some form of resampling (justifying the learning of twists). I co...
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "01qOyiuOqx", "Wyye1p58pl-", "Nkg-PVPReFC", "KMXh20TCZcl", "K2pF0otX0MS", "routegGfM6L", "e3MWrO6SwYD", "nips_2022_bDyLgfvZ0qJ", "nips_2022_bDyLgfvZ0qJ", "nips_2022_bDyLgfvZ0qJ", "nips_2022_bDyLgfvZ0qJ" ]
nips_2022_maSvlkPHc-k
On the Discrimination Risk of Mean Aggregation Feature Imputation in Graphs
In human networks, nodes belonging to a marginalized group often have a disproportionate rate of unknown or missing features. This, in conjunction with graph structure and known feature biases, can cause graph feature imputation algorithms to predict values for unknown features that make the marginalized group's feature values more distinct from the dominant group's feature values than they are in reality. We call this distinction the discrimination risk. We prove that a higher discrimination risk can amplify the unfairness of a machine learning model applied to the imputed data. We then formalize a general graph feature imputation framework called mean aggregation imputation and theoretically and empirically characterize graphs in which applying this framework can yield feature values with a high discrimination risk. We propose a simple algorithm to ensure mean aggregation-imputed features provably have a low discrimination risk, while minimally sacrificing reconstruction error (with respect to the imputation objective). We evaluate the fairness and accuracy of our solution on synthetic and real-world credit networks.
Accept
The work discusses some of the problems related to fairness that occur when a machine learning model is applied to data with missing values in graphs. The authors propose a methodology to compensate for the discrimination across groups. All the reviewers and the AC agree that the overall idea of the paper is strong and very interesting. The paper is extremely well-written and easy to follow, and contributions and limitations are well described. The main concern with the paper is its practicality, as results on real-world datasets do not provide any promising lift in terms of fairness. This is also acknowledged as a future direction by the authors in the rebuttal. The AC believes that this is a promising and impactful line of work that will ignite interesting discussions in the NeurIPS community. As another reviewer pointed out, there is not much work connecting imputation and fairness in the graph context. Acceptance is recommended.
train
[ "wLYTNmg8x7P", "4eo6sjMYgIR", "euWAo4bLm4", "ahRJZ6j19t", "RSPJX1ylma6", "Qtg0jGTGbhf7", "eZBkA35ASPX", "9ltZvxHma6y", "g-hLl76C6rh", "zWNhWkFjZZW", "rj5ZPemwhQ", "MxkIv6m9oCC", "OWBS0n0ADwq", "IbO3bWwt9Zs" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nWe are truly thankful for your valuable feedback! We have tried to address all of your concerns in our responses. As the author-reviewer discussion period will end soon (until Aug. 9), we would love to hear if you still have any concerns and we are more than happy to discuss them.", " We deep...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 4, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3, 4, 3 ]
[ "nips_2022_maSvlkPHc-k", "nips_2022_maSvlkPHc-k", "IbO3bWwt9Zs", "OWBS0n0ADwq", "MxkIv6m9oCC", "rj5ZPemwhQ", "zWNhWkFjZZW", "g-hLl76C6rh", "nips_2022_maSvlkPHc-k", "nips_2022_maSvlkPHc-k", "nips_2022_maSvlkPHc-k", "nips_2022_maSvlkPHc-k", "nips_2022_maSvlkPHc-k", "nips_2022_maSvlkPHc-k" ]
nips_2022_ytnwPTrpl38
Generalizing Goal-Conditioned Reinforcement Learning with Variational Causal Reasoning
As a pivotal component to attaining generalizable solutions in human intelligence, reasoning provides great potential for reinforcement learning (RL) agents' generalization towards varied goals by summarizing part-to-whole arguments and discovering cause-and-effect relations. However, how to discover and represent causalities remains a huge gap that hinders the development of causal RL. In this paper, we augment Goal-Conditioned RL (GCRL) with a Causal Graph (CG), a structure built upon the relation between objects and events. We give a novel formulation of the GCRL problem as variational likelihood maximization with the CG as a latent variable. To optimize the derived objective, we propose a framework with theoretical performance guarantees that alternates between two steps: using interventional data to estimate the posterior of the CG; using the CG to learn generalizable models and interpretable policies. Due to the lack of public benchmarks that verify generalization capability under reasoning, we design nine tasks and then empirically show the effectiveness of the proposed method against five baselines on these tasks. Further theoretical analysis shows that our performance improvement is attributed to the virtuous cycle of causal discovery, transition modeling, and policy training, which aligns with the experimental evidence in extensive ablation studies.
Accept
A promising direction for incorporating causal reasoning into RL, although the scope of the theory and experiments seems limited.
train
[ "IX1DXv-pmk4", "IrTNJogoDcg", "ydfEDJPXO9U", "KGn9C1VEqQ", "C4u0hbsMqd2", "q1NP9C_0n-p", "KmwE5Z2AJ6", "Bq5UfnAmLkK", "X7XEQYAZvV9", "93-YbUyO4vy", "qwpwlzEp3DT", "WV4oQ4S1q79", "03cupmJ1KF8", "HU_dzb60loQ", "HulbKN77VGr", "e1X3bs0VeGX", "m89YdHwbca5" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are glad that our response addressed the reviewer's concern. We agree that the Chemistry environment improves the quality of the experiment part and we will add more discussion about the experiment results in the revised version.", " We are glad that our response addressed most of your concerns. \nAs for the...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "ydfEDJPXO9U", "KGn9C1VEqQ", "HulbKN77VGr", "93-YbUyO4vy", "m89YdHwbca5", "e1X3bs0VeGX", "HulbKN77VGr", "nips_2022_ytnwPTrpl38", "m89YdHwbca5", "e1X3bs0VeGX", "WV4oQ4S1q79", "HulbKN77VGr", "HU_dzb60loQ", "nips_2022_ytnwPTrpl38", "nips_2022_ytnwPTrpl38", "nips_2022_ytnwPTrpl38", "nips...
nips_2022_Ri3T9dwZ_rG
Free Probability for predicting the performance of feed-forward fully connected neural networks
Gradient descent during the learning process of a neural network can be subject to many instabilities. The spectral density of the Jacobian is a key component for analyzing stability. Following the works of Pennington et al., such Jacobians are modeled using free multiplicative convolutions from Free Probability Theory (FPT). We present a reliable and very fast method for computing the associated spectral densities, for a given architecture and initialization. This method has a controlled and proven convergence. Our technique is based on a homotopy method: an adaptive Newton-Raphson scheme which chains basins of attraction. We find contiguous lilypad-like basins and step from one to the next, heading towards the objective. In order to demonstrate the relevance of our method, we show that the relevant FPT metrics computed before training are highly correlated with final test losses – up to 85%. We also give evidence that a very desirable feature for neural networks is the hyperbolicity of their Jacobian at initialization, while remaining at the edge of chaos.
Accept
The main contribution of this paper is an algorithm to compute the spectral density of the free multiplicative convolution which appears in the expression of the Jacobian of neural networks. Some concerns have been raised regarding the impact on a general ML audience (reviewers PUn2 and 1DWD), and about the presentation of the material (especially abstract and introduction, see review GQUV). I share part of these concerns. At the same time, the proposed "Newton lilypads" algorithm not only provides a speed-up over prior work by Pennington et al., but also appears to be an original and interesting contribution in free probability beyond its ML application. Therefore, I ultimately agree with the reviewers, who have reached a consensus of accepting this paper. The novelty aspect of this paper will make it an interesting addition to the NeurIPS 2022 technical program. As a final note, I would like to strongly encourage the authors to include in the camera ready the additional experiments on FashionMNIST and CIFAR10, as well as the discussions related to the feedback from the reviewers (including, if possible, the suggested title/abstract change).
test
[ "KyNspDGitY3", "OsugbgMERV5", "PEJDHwggZFl", "KkCvrSxZZYb", "ZOHFz9zj7N5", "vZG9khoKVGr", "H89vC3V3oJr", "XT3RGjFR-WE", "ccF26ugB-H", "0Xy5UpVZw97", "bt0vW1is6WF", "9ajl6SpKH8l", "Ibv4rbg394qm", "rK6X4DqhQ21", "JI176pdpoWH", "-kco8550W7j", "vXEpEDXBWn", "QUnmChp4I1x" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_rev...
[ " Thanks for the response! I myself do not think the main theme of this work is focusing on architecture search, but the title/abstract is somewhat misleading. I appreciate the authors' effort to modify the title/abstract to make it more consistent with the main content. Thank you!", " Dear Reviewer PUn2,\n\nthan...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 3, 3 ]
[ "PEJDHwggZFl", "ZOHFz9zj7N5", "XT3RGjFR-WE", "vZG9khoKVGr", "Ibv4rbg394qm", "9ajl6SpKH8l", "ccF26ugB-H", "0Xy5UpVZw97", "QUnmChp4I1x", "vXEpEDXBWn", "-kco8550W7j", "JI176pdpoWH", "rK6X4DqhQ21", "nips_2022_Ri3T9dwZ_rG", "nips_2022_Ri3T9dwZ_rG", "nips_2022_Ri3T9dwZ_rG", "nips_2022_Ri3T...
nips_2022_fJ924S1j5xh
Syndicated Bandits: A Framework for Auto Tuning Hyper-parameters in Contextual Bandit Algorithms
The stochastic contextual bandit problem, which models the trade-off between exploration and exploitation, has many real applications, including recommender systems, online advertising and clinical trials. Like many other machine learning algorithms, contextual bandit algorithms often have one or more hyper-parameters. As an example, in most optimal stochastic contextual bandit algorithms, there is an unknown exploration parameter which controls the trade-off between exploration and exploitation. A proper choice of the hyper-parameters is essential for contextual bandit algorithms to perform well. However, it is infeasible to use offline tuning methods to select hyper-parameters in the contextual bandit environment, since there is no pre-collected dataset and the decisions have to be made in real time. To tackle this problem, we first propose a two-layer bandit structure for auto-tuning the exploration parameter and further generalize it to the Syndicated Bandits framework, which can learn multiple hyper-parameters dynamically in the contextual bandit environment. We derive regret bounds for our proposed Syndicated Bandits framework and show that it avoids regret that depends exponentially on the number of hyper-parameters to be tuned. Moreover, it achieves optimal regret bounds under certain scenarios. The Syndicated Bandits framework is general enough to handle the tuning tasks in many popular contextual bandit algorithms, such as LinUCB, LinTS, UCB-GLM, etc. Experiments on both synthetic and real datasets validate the effectiveness of our proposed framework.
Accept
Thank you for submitting your paper to NeurIPS! This paper studies the important practical challenge of parameter tuning in real-world contextual bandit implementations. The authors propose a bandit-over-bandit framework: the bottom layer is the original bandit algorithm (e.g. LinUCB or LinTS), and the top layer is an adversarial EXP3 algorithm for choosing the best hyperparameter. The resulting regret bound has a linear (and not exponential) dependence on the number of hyper-parameters. There was consensus that the algorithm is intuitive and novel, and the experiments support its promise. I am pleased to recommend acceptance.
train
[ "zZumlDlDaCs", "4TDv9rMrL3K", "J87TnVTGTHW", "SMvlvDy1S9H", "sznm3ouEnkk", "9vUrEWPzK6", "YAQr2vEezz", "9fB_0hHfXPF", "FrmdkJORj5w", "CGsz8gZK3GN" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your insightful suggestions! We will definitely add discussions about the regret dependency on $d$ in the final revision.", " I thank authors for the answering my questions and revising the paper. I am increasing my score accordingly. I suggest the authors clearly discuss the regret dependency on ...
[ -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "4TDv9rMrL3K", "J87TnVTGTHW", "YAQr2vEezz", "9fB_0hHfXPF", "FrmdkJORj5w", "CGsz8gZK3GN", "nips_2022_fJ924S1j5xh", "nips_2022_fJ924S1j5xh", "nips_2022_fJ924S1j5xh", "nips_2022_fJ924S1j5xh" ]
nips_2022_XSNfXG9HBAu
Domain Adaptation meets Individual Fairness. And they get along.
Many instances of algorithmic bias are caused by distributional shifts. For example, machine learning (ML) models often perform worse on demographic groups that are underrepresented in the training data. In this paper, we leverage this connection between algorithmic fairness and distribution shifts to show that algorithmic fairness interventions can help ML models overcome distribution shifts, and that domain adaptation methods (for overcoming distribution shifts) can mitigate algorithmic biases. In particular, we show that (i) enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models under the covariate shift assumption and that (ii) it is possible to adapt representation alignment methods for domain adaptation to enforce individual fairness. The former is unexpected because IF interventions were not developed with distribution shifts in mind. The latter is also unexpected because representation alignment is not a common approach in the individual fairness literature.
Accept
The initial reviews were divergent. During the rebuttal and discussion phase, however, many of the raised concerns were addressed properly, shifting the balance slightly towards accept. While I have not verified whether every remaining issue has been resolved, I believe the authors' responses answer them adequately. Hence I recommend the acceptance of this paper.
val
[ "LNyTngFO0nN", "EEvTiUBZa3d", "qH5Ia2fOLo", "lb1YZVRqEe0", "n6VTsUsuJMi", "Dszwonuexbj", "dzMwWZLrnEp", "7k67ecXPR7n", "PfwhnTJn7K8", "oK729Gn3GLD", "YpSqM25r1HC", "KkWUkEUvJzY", "kTD5uK9w8kY" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Very minor point** - I somewhat agree with the reviewer who flagged the \"wedding dress\" example. The authors are correct that they do not make an association with the economic status of a country and wedding traditions. However, I think that the reviewer correctly identifies that the language is clunky here.\...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "nips_2022_XSNfXG9HBAu", "YpSqM25r1HC", "n6VTsUsuJMi", "nips_2022_XSNfXG9HBAu", "7k67ecXPR7n", "nips_2022_XSNfXG9HBAu", "nips_2022_XSNfXG9HBAu", "kTD5uK9w8kY", "KkWUkEUvJzY", "YpSqM25r1HC", "nips_2022_XSNfXG9HBAu", "nips_2022_XSNfXG9HBAu", "nips_2022_XSNfXG9HBAu" ]