paper_id (string, 19-21 chars) | paper_title (string, 8-170 chars) | paper_abstract (string, 8-5.01k chars) | paper_acceptance (string, 18 distinct values) | meta_review (string, 29-10k chars) | label (string, 3 distinct values) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
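Each row pairs one paper (id, title, abstract, decision, meta-review, and split label) with six parallel lists describing its discussion thread: post ids, writer roles, post contents, ratings, confidences, and reply-to targets. In the rating and confidence lists, -1 appears to act as a sentinel for posts that carry no score (author responses and replies), and a reply-to entry equal to the paper id marks a top-level post. The snippet below is a minimal sketch of how such rows could be consumed; it assumes the table has been exported to a JSON Lines file, and the filename `reviews.jsonl` and the helper `mean_review_rating` are illustrative, not part of the dataset.

```python
import json

def mean_review_rating(row):
    """Average the review scores, skipping the -1 sentinel that marks
    unscored posts (author responses and replies)."""
    scores = [r for r in row["review_ratings"] if r != -1]
    return sum(scores) / len(scores) if scores else None

with open("reviews.jsonl") as f:  # hypothetical JSONL export of this table
    for line in f:
        row = json.loads(line)
        # A reply-to entry equal to the paper id marks a top-level post
        # (an official review or a general author response).
        n_top_level = sum(t == row["paper_id"] for t in row["review_reply_tos"])
        print(row["paper_id"], row["paper_acceptance"],
              mean_review_rating(row), n_top_level)
```

Keeping the -1 sentinels in place, rather than dropping unscored posts, preserves the one-to-one alignment between the six lists.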
nips_2022_htR7ZXXe_TY | On Elimination Strategies for Bandit Fixed-Confidence Identification | Elimination algorithms for bandit identification, which prune the plausible correct answers sequentially until only one remains, are computationally convenient since they reduce the problem size over time. However, existing elimination strategies are often not fully adaptive (they update their sampling rule infrequently) and are not easy to extend to combinatorial settings, where the set of answers is exponentially large in the problem dimension. On the other hand, most existing fully-adaptive strategies to tackle general identification problems are computationally demanding since they repeatedly test the correctness of every answer, without ever reducing the problem size. We show that adaptive methods can be modified to use elimination in both their stopping and sampling rules, hence obtaining the best of these two worlds: the algorithms (1) remain fully adaptive, (2) suffer a sample complexity that is never worse of their non-elimination counterpart, and (3) provably eliminate certain wrong answers early. We confirm these benefits experimentally, where elimination improves significantly the computational complexity of adaptive methods on common tasks like best-arm identification in linear bandits. | Accept | This paper has initially received borderline scores: the reviewers appreciated the general algorithmic framework and the high technical quality, but some of them lamented the relative weakness of the contribution (in particular the lack of hard improvements over existing results) and pointed out that the presentation could use some improvements. Some of these concerns were addressed in a revised version of the paper and a series of well-written author responses, which eventually convinced several reviewers to raise their scores.
Eventually, all reviewers agreed that the paper is acceptable for publication. The authors are encouraged to do another pass of revision when preparing the final version of the paper, and take all the reviewers' comments into account in the process. | train | [
"268QFRgz-m",
"GAUO2ez5_S4",
"7LG07wFaB6Y",
"02qhkCSKlld",
"Udt7aC6vbcT",
"4ik4q5cjJsb",
"fbk0KQ-7spb",
"TqzdwMDwo1jF",
"3mtbuO0OVxj",
"Q4kDJ2BlE5n",
"VE4H2fQ_bSv",
"es8mff7YTwC",
"XRMEIweoL0"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors, \n\nthank you very much for the detailed answer. However, I will still keep my score as it is, as I think the overall theoretical contribution is modest. ",
" thank you for addressing the issue. I have raised the score accordingly.",
" Thanks for the response. I think the new algorithm makes the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"Udt7aC6vbcT",
"TqzdwMDwo1jF",
"4ik4q5cjJsb",
"nips_2022_htR7ZXXe_TY",
"XRMEIweoL0",
"es8mff7YTwC",
"VE4H2fQ_bSv",
"3mtbuO0OVxj",
"Q4kDJ2BlE5n",
"nips_2022_htR7ZXXe_TY",
"nips_2022_htR7ZXXe_TY",
"nips_2022_htR7ZXXe_TY",
"nips_2022_htR7ZXXe_TY"
] |
nips_2022__S9amb2-M-I | Influencing Long-Term Behavior in Multiagent Reinforcement Learning | The main challenge of multiagent reinforcement learning is the difficulty of learning useful policies in the presence of other simultaneously learning agents whose changing behaviors jointly affect the environment's transition and reward dynamics. An effective approach that has recently emerged for addressing this non-stationarity is for each agent to anticipate the learning of other agents and influence the evolution of future policies towards desirable behavior for its own benefit. Unfortunately, previous approaches for achieving this suffer from myopic evaluation, considering only a finite number of policy updates. As such, these methods can only influence transient future policies rather than achieving the promise of scalable equilibrium selection approaches that influence the behavior at convergence. In this paper, we propose a principled framework for considering the limiting policies of other agents as time approaches infinity. Specifically, we develop a new optimization objective that maximizes each agent's average reward by directly accounting for the impact of its behavior on the limiting set of policies that other agents will converge to. Our paper characterizes desirable solution concepts within this problem setting and provides practical approaches for optimizing over possible outcomes. As a result of our farsighted objective, we demonstrate better long-term performance than state-of-the-art baselines across a suite of diverse multiagent benchmark domains. | Accept | Reviewers found the long-term influence problem and proposed solution interesting and novel. During the rebuttal, important additional baselines and clarifying comments were added, addressing the most important reviewer concerns. Although the scores look borderline for this paper, all reviewers who engaged with the authors during the rebuttal are in favor of acceptance, and I agree. | train | [
"J65TvFMkzY5",
"qkXh1USfWHz",
"-7cxbftq8v",
"iAlBnfeDha",
"FlsWmNoedBH",
"hifNdoMHbW-",
"7zgrKeQZtNs",
"kXlKtlzSb2u",
"-3b05o4WZTi",
"o3HD31o18Kt",
"w5hIKSLhZXa",
"w0UUVAKi9K_",
"b2c7mHK33ba",
"C8VT-4pZFrH",
"2u029lqk8Hq",
"CeL-njMpTuu",
"g8OIiceVzqi"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate your positive evaluation of our paper. Yes, following your helpful suggestion, we will make the presentation clearer in the final paper by adding discussions on the equivalence between standard episodic and continual RL settings. ",
" I appreciate the explanations. I remain positive about this pap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5,
4
] | [
"qkXh1USfWHz",
"o3HD31o18Kt",
"iAlBnfeDha",
"7zgrKeQZtNs",
"o3HD31o18Kt",
"w0UUVAKi9K_",
"-3b05o4WZTi",
"b2c7mHK33ba",
"2u029lqk8Hq",
"w5hIKSLhZXa",
"g8OIiceVzqi",
"CeL-njMpTuu",
"C8VT-4pZFrH",
"nips_2022__S9amb2-M-I",
"nips_2022__S9amb2-M-I",
"nips_2022__S9amb2-M-I",
"nips_2022__S9a... |
nips_2022_ripJhpwlA2v | When Does Group Invariant Learning Survive Spurious Correlations? | By inferring latent groups in the training data, recent works introduce invariant learning to the case where environment annotations are unavailable. Typically, learning group invariance under a majority/minority split is empirically shown to be effective in improving out-of-distribution generalization on many datasets. However, theoretical guarantee for these methods on learning invariant mechanisms is lacking. In this paper, we reveal the insufficiency of existing group invariant learning methods in preventing classifiers from depending on spurious correlations in the training set. Specifically, we propose two criteria on judging such sufficiency. Theoretically and empirically, we show that existing methods can violate both criteria and thus fail in generalizing to spurious correlation shifts. Motivated by this, we design a new group invariant learning method, which constructs groups with statistical independence tests, and reweights samples by group label proportion to meet the criteria. Experiments on both synthetic and real data demonstrate that the new method significantly outperforms existing group invariant learning methods in generalizing to spurious correlation shifts. | Accept | This paper presents interesting new theory and algorithms to address group-invariant learning in the presence of spurious correlations between unimportant features and the target. A robust discussion between reviewers and authors lends confidence that the paper should be accepted.
I'd encourage the authors to go over language carefully before the camera-ready deadline; there are many small grammatical issues remaining (eg, plurals). | train | [
"U6MlfIIrTLn",
"fVPdlNA0OSI",
"SGKUNG-K7la",
"MFUHsnrMUgB",
"1wDzjdIGqFK",
"QLaLrFFmf73",
"VqP-gL7Z_BF",
"TX2DykUacv-",
"RAdwF2CEVYs",
"eSQmERupG7w",
"zBbtdOXldE-",
"lgf_364rRQP",
"JMNP0ebrxrm",
"xQ3rRCyX4if",
"8AP_hh-0ekkq",
"jg4_E3JElz7",
"5Oo46bnMadK",
"dUk_A6_qDZZ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are happy to know that our response has addressed the reviewer's concerns. We would like to thank the reviewer again for taking the time and effort to review this paper.",
" We thank the reviewer for the reassessment and further suggestions for this paper. As recommended, in the new revision, we have made ef... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"MFUHsnrMUgB",
"1wDzjdIGqFK",
"TX2DykUacv-",
"8AP_hh-0ekkq",
"QLaLrFFmf73",
"VqP-gL7Z_BF",
"JMNP0ebrxrm",
"zBbtdOXldE-",
"dUk_A6_qDZZ",
"5Oo46bnMadK",
"lgf_364rRQP",
"dUk_A6_qDZZ",
"xQ3rRCyX4if",
"5Oo46bnMadK",
"jg4_E3JElz7",
"nips_2022_ripJhpwlA2v",
"nips_2022_ripJhpwlA2v",
"nips_... |
nips_2022_TQn44YPuOR2 | Maximum Likelihood Training of Implicit Nonlinear Diffusion Model | Whereas diverse variations of diffusion models exist, extending the linear diffusion into a nonlinear diffusion process is investigated by very few works. The nonlinearity effect has been hardly understood, but intuitively, there would be promising diffusion patterns to efficiently train the generative distribution towards the data distribution. This paper introduces a data-adaptive nonlinear diffusion process for score-based diffusion models. The proposed Implicit Nonlinear Diffusion Model (INDM) learns by combining a normalizing flow and a diffusion process. Specifically, INDM implicitly constructs a nonlinear diffusion on the data space by leveraging a linear diffusion on the latent space through a flow network. This flow network is key to forming a nonlinear diffusion, as the nonlinearity depends on the flow network. This flexible nonlinearity improves the learning curve of INDM to nearly Maximum Likelihood Estimation (MLE) against the non-MLE curve of DDPM++, which turns out to be an inflexible version of INDM with the flow fixed as an identity mapping. Also, the discretization of INDM shows the sampling robustness. In experiments, INDM achieves the state-of-the-art FID of 1.75 on CelebA. We release our code at https://github.com/byeonghu-na/INDM. | Accept | The paper presents a novel non-linear diffusion model that transforms a linear diffusion in a latent space using normalizing flows. After the discussion period, three reviewers rated the paper a weak accept and one as accept. Overall, reviewers found the main idea of training flows in this latent space to be novel and interesting, and felt that the method was supported by experiments showing very good performance. Beyond that, there was a lot of discussion between reviewers and authors. Some of this focused on specific details about the training method; while quite a lot focused on other characterizations and discussions presented by the authors in the paper, which the reviewer’s found confusing or distracting (especially about optimal transport). The author response clarified several points, which prompted two reviewers to raise their scores. The authors promised to remove the discussion about optimal transport. Overall, there is support to accept the paper; the authors are encouraged to take the reviewer feedback and discussions into account to clarify the final version of the paper.
| train | [
"Ywmy1hszHYa",
"gBzf5lCl3t7",
"nh7hrVayK16",
"6UC2qkSyz28",
"8bcLe4xSQS3",
"b9Q0byfsNpI",
"-rMdzwXnEri",
"dD_jbcoFIu",
"Dk-tXEx3Uf",
"Bu0S2dRhZu",
"-ZDZc5Q9M3",
"ZZgbk-beepy",
"FZaEfsw6Er",
"xeknvHRI5fO",
"UeDIYv6cu2K",
"FlmJHxhUcsj",
"TWJ5Z-J7aQG",
"xM55uXjH6I",
"K57r20HctZr",
... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We greatly thank the reviewer for the affirmative review. Thank you so much.",
" I think that the extension to non-linear diffusion process is necessary and contributing. \nI agree that the FID improvement in CelebA is impressive enough.\nI keep my original rating for this paper.",
" We hugely thank the revie... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
4
] | [
"gBzf5lCl3t7",
"ERC-84VKRou",
"ZZgbk-beepy",
"8bcLe4xSQS3",
"dD_jbcoFIu",
"-rMdzwXnEri",
"Dk-tXEx3Uf",
"Bu0S2dRhZu",
"-ZDZc5Q9M3",
"UeDIYv6cu2K",
"xM55uXjH6I",
"FZaEfsw6Er",
"xeknvHRI5fO",
"AdQL-xbJKHe",
"KuLpZvzW45h",
"KuLpZvzW45h",
"KuLpZvzW45h",
"KuLpZvzW45h",
"KuLpZvzW45h",
... |
nips_2022_wUctlvhsNWg | Personalized Online Federated Learning with Multiple Kernels | Multi-kernel learning (MKL) exhibits well-documented performance in online non-linear function approximation. Federated learning enables a group of learners (called clients) to train an MKL model on the data distributed among clients to perform online non-linear function approximation. There are some challenges in online federated MKL that need to be addressed: i) Communication efficiency especially when a large number of kernels are considered ii) Heterogeneous data distribution among clients. The present paper develops an algorithmic framework to enable clients to communicate with the server to send their updates with affordable communication cost while clients employ a large dictionary of kernels. Utilizing random feature (RF) approximation, the present paper proposes scalable online federated MKL algorithm. We prove that using the proposed online federated MKL algorithm, each client enjoys sub-linear regret with respect to the RF approximation of its best kernel in hindsight, which indicates that the proposed algorithm can effectively deal with heterogeneity of the data distributed among clients. Experimental results on real datasets showcase the advantages of the proposed algorithm compared with other online federated kernel learning ones. | Accept | The authors carefully designed the algorithm and presented their results.
The reviewers commented on the analysis of running costs, discussion on related works, and experimental comparisons.
The authors, as far as I can see, have addressed these appropriately.
The reviewers did not respond to the updates, but the scores should be increased, and I have included them in my decision. | val | [
"_f0AbcPm_WJ",
"P2IIlTYP46I",
"sCcATe43B-X",
"ppaURy2tgBn",
"K-60W7qp6tb",
"k0o1YHufAo-",
"YEW3hyfuMW5",
"WMuz98Cy0lO",
"sAISuHVbwnB",
"CZGXRX_CQsi",
"ne9jjQGC6a0",
"O2C0-CN1BcZ",
"RIxCghhCVl",
"Hp-dQH4HX9",
"xLTWwL4f4Vj",
"I8IAgreSIi"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your comment and taking time to review our paper. In Table 1 of section 4, we reported the run time of algorithms. The reported run time refers to average total run time of clients to perform online learning task on the entire data samples that they observe. Therefore, reported run time show... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"sCcATe43B-X",
"ppaURy2tgBn",
"O2C0-CN1BcZ",
"K-60W7qp6tb",
"k0o1YHufAo-",
"ne9jjQGC6a0",
"I8IAgreSIi",
"I8IAgreSIi",
"xLTWwL4f4Vj",
"Hp-dQH4HX9",
"Hp-dQH4HX9",
"RIxCghhCVl",
"nips_2022_wUctlvhsNWg",
"nips_2022_wUctlvhsNWg",
"nips_2022_wUctlvhsNWg",
"nips_2022_wUctlvhsNWg"
] |
nips_2022_CZZFRxbOLC | Patching open-vocabulary models by interpolating weights | Open-vocabulary models like CLIP achieve high accuracy across many image classification tasks. However, there are still settings where their zero-shot performance is far from optimal. We study model patching, where the goal is to improve accuracy on specific tasks without degrading accuracy on tasks where performance is already adequate. Towards this goal, we introduce PAINT, a patching method that uses interpolations between the weights of a model before fine-tuning and the weights after fine-tuning on a task to be patched. On nine tasks where zero-shot CLIP performs poorly, PAINT increases accuracy by 15 to 60 percentage points while preserving accuracy on ImageNet within one percentage point of the zero-shot model. PAINT also allows a single model to be patched on multiple tasks and improves with model scale. Furthermore, we identify cases of broad transfer, where patching on one task increases accuracy on other tasks even when the tasks have disjoint classes. Finally, we investigate applications beyond common benchmarks such as counting or reducing the impact of typographic attacks on CLIP. Our findings demonstrate that it is possible to expand the set of tasks on which open-vocabulary models achieve high accuracy without re-training them from scratch. | Accept | The reviewers had some concerns about clarity of motivation and baselines. My own opinion is that this work is valuable for the community because of the simplicity of the method and depth of experiments. | train | [
"nVrNQ80hR0L",
"RwwZ6Oz8FVu",
"7g5rztgTMRK",
"hFd2GVb_-cV",
"6MyLmVKnFjP",
"HNzXbRpOwU",
"a2k7PZS1fZ0",
"t4suBM9c87D",
"jvj-0SF858b",
"dfy-dk93pTF",
"nGiOaCLi5Ut",
"v9WkcswcFIi",
"5cY5n9odFLm",
"qv4I4pSpFMb",
"8XxlGUHyXTO",
"hyfOaL7ado0",
"ds-6JUcAax",
"iyoUBwlOKns"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for engaging with us to improve our work. We offer to revisit the paper organization if the paper is accepted, and thank you again for considering our responses.",
" Thank you for your detailed responses to each of the concerns, and the efforts for all the additional results.\n\nTo me, the most useful... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"RwwZ6Oz8FVu",
"qv4I4pSpFMb",
"ds-6JUcAax",
"hyfOaL7ado0",
"HNzXbRpOwU",
"jvj-0SF858b",
"8XxlGUHyXTO",
"iyoUBwlOKns",
"iyoUBwlOKns",
"ds-6JUcAax",
"hyfOaL7ado0",
"hyfOaL7ado0",
"hyfOaL7ado0",
"hyfOaL7ado0",
"nips_2022_CZZFRxbOLC",
"nips_2022_CZZFRxbOLC",
"nips_2022_CZZFRxbOLC",
"ni... |
nips_2022_e65KZ0ixi0 | Evaluating Graph Generative Models with Contrastively Learned Features | A wide range of models have been proposed for Graph Generative Models, necessitating effective methods to evaluate their quality. So far, most techniques use either traditional metrics based on subgraph counting, or the representations of randomly initialized Graph Neural Networks (GNNs). We propose using representations from constrastively trained GNNs, rather than random GNNs, and show this gives more reliable evaluation metrics. Neither traditional approaches nor GNN-based approaches dominate the other, however: we give examples of graphs that each approach is unable to distinguish. We demonstrate that Graph Substructure Networks (GSNs), which in a way combine both approaches, are better at distinguishing the distances between graph datasets. | Accept | This paper proposed a new way of evaluating the quality of graph generative models, by leveraging the contrastively learned representations. The reviewers generally found the study is interesting and important, and the idea of leveraging contrastive learning for evaluation is also sound and novel. I also believe the work would be impactful for future research in the graph generative model domain, and I highly encourage the authors to properly release easy to use benchmark tools/datasets around it, if possible.
| train | [
"oOCKrGscxe",
"e9qESaFcpOX",
"mRiCwcL7zls",
"DiakkA8lncG",
"4lA505dlfEr",
"8CPJWHpaXOL",
"wtOKndRCk91",
"pUrZokM23xOe",
"Gt4LS7JScYJ",
"gZX9kMyo5cJ",
"v1Rgz-iME_j",
"EyEfQN0kMd",
"kZQNDkpkPLe"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response and for elaborating on the issues. To address these problems, we have made some changes to the previous version as well as the new version just submitted. However, please note that the rebuttal period is limited. It is our intention to continue working on a final version that will addr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"e9qESaFcpOX",
"wtOKndRCk91",
"Gt4LS7JScYJ",
"nips_2022_e65KZ0ixi0",
"8CPJWHpaXOL",
"pUrZokM23xOe",
"kZQNDkpkPLe",
"EyEfQN0kMd",
"gZX9kMyo5cJ",
"v1Rgz-iME_j",
"nips_2022_e65KZ0ixi0",
"nips_2022_e65KZ0ixi0",
"nips_2022_e65KZ0ixi0"
] |
nips_2022_ipAz7H8pPnI | Differentially Private Learning Needs Hidden State (Or Much Faster Convergence) | Prior work on differential privacy analysis of randomized SGD algorithms relies on composition theorems, where the implicit (unrealistic) assumption is that the internal state of the iterative algorithm is revealed to the adversary. As a result, the R\'enyi DP bounds derived by such composition-based analyses linearly grow with the number of training epochs. When the internal state of the algorithm is hidden, we prove a converging privacy bound for noisy stochastic gradient descent (on strongly convex smooth loss functions). We show how to take advantage of privacy amplification by sub-sampling and randomized post-processing, and prove the dynamics of privacy bound for ``shuffle and partition'' and ``sample without replacement'' stochastic mini-batch gradient descent schemes. We prove that, in these settings, our privacy bound converges exponentially fast and is substantially smaller than the composition bounds, notably after a few number of training epochs. Thus, unless the DP algorithm converges fast, our privacy analysis shows that hidden state analysis can significantly amplify differential privacy. | Accept | This work shows that for strongly convex and smooth loss functions, running DPSGD has a privacy cost that stops growing at some point. The answers a question that has been open and is of significant theoretical and practical interest. They show empirically that this new result can allows one to get better privacy-accuracy trade-offs in some cases.
This work is a big step ahead in analysis of DPSGD and I recommend acceptance. | train | [
"SS2QgmuLLQ_",
"Jcp2b51GkFp",
"3N3nUULMvXk",
"krTa4Nz-DT",
"rweo6_l9DY",
"XwpEM-4Pty",
"03gC--3ftw6",
"PLj7mp0brzW",
"9lhIHLO5TcD",
"2w2n98-XjRy",
"H6kZ88xUt4"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks again for your time and comments. We just want to reach out to see if our response addresses your main concerns. We are also happy to discuss any further questions or comments that you may have after our response. ",
" \n> Typos in line 108-109\n\nThanks for pointing out these typos. We have corrected li... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3
] | [
"2w2n98-XjRy",
"H6kZ88xUt4",
"H6kZ88xUt4",
"2w2n98-XjRy",
"2w2n98-XjRy",
"9lhIHLO5TcD",
"PLj7mp0brzW",
"nips_2022_ipAz7H8pPnI",
"nips_2022_ipAz7H8pPnI",
"nips_2022_ipAz7H8pPnI",
"nips_2022_ipAz7H8pPnI"
] |
nips_2022_pHdiaqgh_nf | Mind Reader: Reconstructing complex images from brain activities | Understanding how the brain encodes external stimuli and how these stimuli can be decoded from the measured brain activities are long-standing and challenging questions in neuroscience. In this paper, we focus on reconstructing the complex image stimuli from fMRI (functional magnetic resonance imaging) signals. Unlike previous works that reconstruct images with single objects or simple shapes, our work aims to reconstruct image stimuli that are rich in semantics, closer to everyday scenes, and can reveal more perspectives. However, data scarcity of fMRI datasets is the main obstacle to applying state-of-the-art deep learning models to this problem. We find that incorporating an additional text modality is beneficial for the reconstruction problem compared to directly translating brain signals to images. Therefore, the modalities involved in our method are: (i) voxel-level fMRI signals, (ii) observed images that trigger the brain signals, and (iii) textual description of the images. To further address data scarcity, we leverage an aligned vision-language latent space pre-trained on massive datasets. Instead of training models from scratch to find a latent space shared by the three modalities, we encode fMRI signals into this pre-aligned latent space. Then, conditioned on embeddings in this space, we reconstruct images with a generative model. The reconstructed images from our pipeline balance both naturalness and fidelity: they are photo-realistic and capture the ground truth image contents well. | Accept | In this paper, the authors propose a new method to reconstruct the images from fMRI signals by leveraging the text descriptions of the input images and using that as a training signal for the model. The fMRI data for this task is limited and therefore this paper addresses the data scarcity problem by leveraging an aligned vision-language latent space that is pretrained on large-scale datasets. fMRI signals are then aligned to this latent space, leading to reconstructions that are photo-realistic and capture the semantic content of the ground truth image.
The reviewers agreed that the proposed idea to leverage the latent space of a multimodal model (CLIP) in reconstruction from fMRI signal is new and interesting. At the same time, they voiced concerns about some of the evaluation metrics and baselines. Over the course of the rebuttal, many of the reviewer concerns were addressed (at least mostly) and reviewers increased their scores. In the end, all of the reviewers agreed in favor of acceptance.
| train | [
"cSyoFoBAuo",
"Asj_M2DRQ3X",
"A9RTuV6BJiO",
"tG8eFtCOKdp",
"3ksK5JujQ-H",
"HYFo1wNYW0a",
"T8_rkwreknL1",
"COB3cU-Fg92",
"DogrN4o4-RO",
"mMnp-A6rtzi",
"Iv8lzAIVvxo",
"5HnLld8P0Ct"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We greatly appreciate your positive response to the revision and acknowledging our work.",
" Thank you for including a better discussion of failure points, and for expanding the reconstruction results. The new quantitative evidence also improves the soundness of the paper. As such, I have made the following adj... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
4
] | [
"Asj_M2DRQ3X",
"COB3cU-Fg92",
"tG8eFtCOKdp",
"T8_rkwreknL1",
"Iv8lzAIVvxo",
"mMnp-A6rtzi",
"DogrN4o4-RO",
"5HnLld8P0Ct",
"nips_2022_pHdiaqgh_nf",
"nips_2022_pHdiaqgh_nf",
"nips_2022_pHdiaqgh_nf",
"nips_2022_pHdiaqgh_nf"
] |
nips_2022_dC_Cho7PzT | Adapting to Online Label Shift with Provable Guarantees | The standard supervised learning paradigm works effectively when training data shares the same distribution as the upcoming testing samples. However, this stationary assumption is often violated in real-world applications, especially when testing data appear in an online fashion. In this paper, we formulate and investigate the problem of \emph{online label shift} (OLaS): the learner trains an initial model from the labeled offline data and then deploys it to an unlabeled online environment where the underlying label distribution changes over time but the label-conditional density does not. The non-stationarity nature and the lack of supervision make the problem challenging to be tackled. To address the difficulty, we construct a new unbiased risk estimator that utilizes the unlabeled data, which exhibits many benign properties albeit with potential non-convexity. Building upon that, we propose novel online ensemble algorithms to deal with the non-stationarity of the environments. Our approach enjoys optimal \emph{dynamic regret}, indicating that the performance is competitive with a clairvoyant who knows the online environments in hindsight and then chooses the best decision for each round. The obtained dynamic regret bound scales with the intensity and pattern of label distribution shift, hence exhibiting the adaptivity in the OLaS problem. Extensive experiments are conducted to validate the effectiveness and support our theoretical findings.
| Accept | This paper considers online learning under a label shift scenario: an initial model is trained on labeled data from the first distribution, and then the learner receives unlabeled data from shifting target distributions in rounds. The assumption here is that the underlying label-conditional densities (d(x|y)) do not change, only the "weighting" of the various labels changes between the rounds. This submission introduces the notion of dynamic regret for this setting where the learner is round-wise compared to the best predictor for the round-specific task. The submission provides an algorithm and bounds on this dynamic regret (that, naturally, involves a term measuring the amount of label-weighting-shift) as well as empirical evaluations of the proposed algorithm (both on synthetic and real datasets).
Overall this appears as a well-rounded submission on a problem setup that is clearly relevant to the NeurIPS community.
Since data is provided in batches per learning round, it would be appropriate to also compare to life-long learning setups, algorithms and guarantees, rather than just to online learning. It seems that the studied framework is more commonly referred to as life-long learning, but this connection and the corresponding literature is entirely ignored in this submission. This should be fixed/clarified before publication. | train | [
"Wnut_4ynzYb",
"BsvDki6YfNo",
"ylqSP_BUrj",
"vUsjfLNY-Un",
"sW02FxM_rQL",
"CVlhMaxq1EC",
"ORQB4XGi2UA",
"X6Fw6K1lzgv",
"46vk_79FmWv",
"r9F5z1pUxkI",
"S9qeYfL3hzM"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Once again, we are grateful to the reviewer for the constructive feedback, and for all the time spent during the reviewing and the discussion periods. We will add those clarifications in the revised version to avoid potential misunderstandings. Thanks!",
" I appreciate the clarification of authors. I would like... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
9,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"BsvDki6YfNo",
"ylqSP_BUrj",
"vUsjfLNY-Un",
"sW02FxM_rQL",
"S9qeYfL3hzM",
"r9F5z1pUxkI",
"X6Fw6K1lzgv",
"46vk_79FmWv",
"nips_2022_dC_Cho7PzT",
"nips_2022_dC_Cho7PzT",
"nips_2022_dC_Cho7PzT"
] |
nips_2022_gKe_A-DxzkH | Data-Driven Model-Based Optimization via Invariant Representation Learning | We study the problem of data-driven model-based optimization, where the goal is to find the optimal design, provided access to only a static dataset, with no active data collection. The central challenge in data-driven model-based optimization is distributional shift, where the optimizer is fooled into producing out-of-distribution (OOD) designs that erroneously appear promising under a model trained on the provided data. To address this issue, we formulate model-based optimization as domain adaptation, where the goal is to make accurate predictions for the value of designs during optimization ("target domain"), when training only on the dataset ("source domain"). This perspective leads to invariant objective models (IOM), our approach for addressing distributional shift by enforcing invariance between the learned representations of the training dataset and optimized designs. In IOM, if the optimized designs are too different from the training dataset, the representation will be forced to lose much of the information that distinguishes good designs from bad ones, making all choices seem mediocre. Critically, when the optimizer is aware of this representational tradeoff, it should choose not to stray too far from the training distribution, leading to a natural trade-off between distributional shift and learning performance. | Accept | Reviewers are all positively inclined for the paper. Most of them have stated that the paper
proposes some novel and relevant ideas with theoretical support and empirical evidences
of theirs soundness. As such, we think that the paper can be accepted to the conference, and
expect some revisions that would improve clarity of the work.
| train | [
"iCr5PywYcV",
"ePZq6iJXT51",
"I9J69Q6EiD",
"6udfmOKlmeh",
"xovH4fJr1mx",
"QxkS3YieKoa",
"pThO8n1ks9M",
"B6jsrPIFOb",
"aexa26LVR6u",
"rlKno9D8lJc",
"938wKd9Uku",
"CYVogx_SxvN",
"oj2q9_t1tBK",
"OFCNJf9V_f",
"L-4hFOpf-p",
"T09jDia1Epl",
"nby6S2t8FnA",
"2uDbX0aU-HN",
"STh3pHH64uJ",
... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thanks for the response. I don't have any additional concerns. I would like to keep the rating unchanged.",
" The author's response and revisions have addressed the majority of my concerns/questions and hence I updated my rating. ",
" I am satisfied with your responses and my concerns are mostly addressed. I ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
5,
2
] | [
"OFCNJf9V_f",
"QxkS3YieKoa",
"T09jDia1Epl",
"xovH4fJr1mx",
"pThO8n1ks9M",
"B6jsrPIFOb",
"2JqcBNngV2U",
"A0rWMyJxYGN",
"nips_2022_gKe_A-DxzkH",
"938wKd9Uku",
"A0rWMyJxYGN",
"oj2q9_t1tBK",
"2JqcBNngV2U",
"STh3pHH64uJ",
"2uDbX0aU-HN",
"nby6S2t8FnA",
"nips_2022_gKe_A-DxzkH",
"nips_2022... |
nips_2022_Ih2bG6h1r4S | Atlas: Universal Function Approximator For Memory Retention | Artificial neural networks (ANNs), despite their universal function approximation capability and practical success, are subject to catastrophic forgetting. Catastrophic forgetting refers to the abrupt unlearning of a previous task when a new task is learned. It is an emergent phenomenon that plagues ANNs and hinders continual learning. Existing universal function approximation theorems for ANNs guarantee function approximation ability but seldom touch on the model details and do not predict catastrophic forgetting. This paper presents a novel universal approximation theorem for multi-variable functions using only single-variable functions and exponential functions. Furthermore, we present ATLAS—a novel ANN architecture based on the exponential approximation theorem and B-splines. It is shown that ATLAS is a universal function approximator capable of memory retention and, therefore, continual learning. The memory retention of ATLAS is imperfect, with some off-target effects during continual learning, but it is well-behaved and predictable. An efficient implementation of ATLAS is provided. Experiments were conducted to evaluate both the function approximation and memory retention capabilities of ATLAS. | Reject | The submission proposes a novel type of neural network based on B-splines and exponential functions designed to reduce catastrophic forgetting, proves universal approximation results, and provides some experimental results. The reviewers find the submission interesting, but believe that the submission could be improved significantly in a number of respects, including better practices in use of the test dataset, comparison against baselines, and distinguishing between properties of the network class and properties of the learning method. Accordingly, I cannot recommend the present paper for acceptance. | train | [
"ATd1dMWcAZ",
"FfqzHOOSC7",
"IEJ80Sauxb5",
"D1CyUHnXtJSD",
"xTOkARmR0G7",
"PLM2gSZ30-z",
"vlB8prjo-3w",
"cEtnydgBf_AO",
"KqscnX7TF0_i",
"n4tpz_Hp8uC",
"Zw8sHISLdYm",
"BU6haAAgtlx"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer\n\nWe worked through the two papers you specifically asked for and identified two experiments that would be most insightful: Incremental Domain Learning, and Incremental class learning, from the paper \"Re-evaluating Continual Learning Scenarios: A Categorization and Case for Strong Baselines.\" The... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"FfqzHOOSC7",
"IEJ80Sauxb5",
"D1CyUHnXtJSD",
"PLM2gSZ30-z",
"Zw8sHISLdYm",
"BU6haAAgtlx",
"BU6haAAgtlx",
"Zw8sHISLdYm",
"n4tpz_Hp8uC",
"nips_2022_Ih2bG6h1r4S",
"nips_2022_Ih2bG6h1r4S",
"nips_2022_Ih2bG6h1r4S"
] |
nips_2022_B2rqx0w63U | Provably Feedback-Efficient Reinforcement Learning via Active Reward Learning | An appropriate reward function is of paramount importance in specifying a task in reinforcement learning (RL). Yet, it is known to be extremely challenging in practice to design a correct reward function for even simple tasks. Human-in-the-loop (HiL) RL allows humans to communicate complex goals to the RL agent by providing various types of feedback. However, despite achieving great empirical successes, HiL RL usually requires \emph{too much} feedback from a human teacher and also suffers from insufficient theoretical understanding. In this paper, we focus on addressing this issue from a theoretical perspective, aiming to provide provably feedback-efficient algorithmic frameworks that take human-in-the-loop to specify rewards of given tasks. We provide an \emph{active-learning}-based RL algorithm that first explores the environment without specifying a reward function and then asks a human teacher for only a few queries about the rewards of a task at some state-action pairs. After that, the algorithm guarantees to provide a nearly optimal policy for the task with high probability. We show that, even with the presence of random noise in the feedback, the algorithm only takes $\tilde{O}(H{\dim_{R}^2})$ queries on the reward function to provide an $\epsilon$-optimal policy for any $\epsilon > 0$. Here $H$ is the horizon of the RL environment, and $\dim_{R}$ specifies the complexity of the function class representing the reward function. In contrast, standard RL algorithms require to query the reward function for at least $\Omega(\operatorname{poly}(d, 1/\epsilon))$ state-action pairs where $d$ depends on the complexity of the environmental transition. | Accept | This paper investigates human-in-the-loop RL. The framework and proposed algorithm allows an agent to reconstruct the reward function and produce a near-optimal policy after limited access to a few reward queries. The primary contributions of the work are the problem formulation, algorithm and formal results.
All reviewers agreed on acceptance. Most importantly, there was consensus that problem setting is relevant and interesting and the math is correct. The reviewers noted the techniques used are not new and the results not surprising but the formulation is novel. This is not necessarily a bad thing. There was some discussion on some of the assumptions required (binary feedback and bounded noise). In the end there was clear consensus that the paper adds a much needed theoretical framework to HIL-RL and should inspire further algorithmic work.
Things to address for camera ready:
- all the reviewers thought it was a bad idea to have related work in the appendix. The AC agrees
- the experiments in the appendix are easy to miss; more clearly reference them in the main text
- the text is not great in places; especially in the additions made to the paper in response to the reviewers. | train | [
"v3eNctHtyGe",
"NHt7HMtpa5",
"h4s9pdHPPFi",
"HEIm8-hg9uY",
"iCkC1Fc5gvZ",
"OW2YWGIjzSG",
"pBrJ3Yyci_n",
"UlRvaWYgJXm",
"cC0VLsjIbxu",
"UmBnCCK6OeNF",
"R_HhA7RjbsS",
"2ZccMq5ITVA",
"F7PhcAVEwr7",
"xUwo_2DfUXc",
"2kTRWLrR64o",
"bFwpqBrT-It",
"cIAAvNPosM",
"WY28uaHd8F4",
"DITXGfqyth... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Dear Reviewer oq5m:\n\nThank you again for your suggestion. We have uploaded a revision of our paper. In the revision, we rewrote Remark 2 and added the necessary references for deriving the $\\Omega(\\operatorname{poly}(d,1/\\epsilon))$ bound.",
" I wanted to thank the authors for the clarifications, particula... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ] |
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 2 ] |
[ "UlRvaWYgJXm", "xUwo_2DfUXc", "iCkC1Fc5gvZ", "R_HhA7RjbsS", "pBrJ3Yyci_n", "UlRvaWYgJXm", "bFwpqBrT-It", "2kTRWLrR64o", "WY28uaHd8F4", "nips_2022_B2rqx0w63U", "F7PhcAVEwr7", "-FyVBLG7HfM", "-FyVBLG7HfM", "DITXGfqyth", "WY28uaHd8F4", "cIAAvNPosM", "nips_2022_B2rqx0w63U",
"nips_2022_... |
nips_2022_4FSfANJp8Qx | Sharp Analysis of Stochastic Optimization under Global Kurdyka-Lojasiewicz Inequality | We study the complexity of finding the global solution to stochastic nonconvex optimization when the objective function satisfies global Kurdyka-{\L}ojasiewicz (KL) inequality and the queries from stochastic gradient oracles satisfy mild expected smoothness assumption. We first introduce a general framework to analyze Stochastic Gradient Descent (SGD) and its associated nonlinear dynamics under the setting. As a byproduct of our analysis, we obtain a sample complexity of $\mathcal{O}(\epsilon^{-(4-\alpha)/\alpha})$ for SGD when the objective satisfies the so called $\alpha$-P{\L} condition, where $\alpha$ is the degree of gradient domination. Furthermore, we show that a modified SGD with variance reduction and restarting (PAGER) achieves an improved sample complexity of $\mathcal{O}(\epsilon^{-2/\alpha})$ when the objective satisfies the average smoothness assumption. This leads to the first optimal algorithm for the important case of $\alpha=1$ which appears in applications such as policy optimization in reinforcement learning. | Accept | This paper offers an analysis for SGD, and then a variance reduced method PAGER, under the general KL assumption. This is a very large family of functions, that includes many interesting non-convex objectives, and thus is interesting for the community. And quoting one of the reviewers "The rates provided by the paper (in Corr. 1 and Thm. 3) recover the best prior rates when specialised to the setting of (\alpha = 2), and extend these settings for general (\alpha) and for SGD, under assumption 4 specifically. This means we gain more generality at no additional cost." The reviewers also eventually agreed that the technical novelties introduced to establish the proof are also interesting and new. | test | [
"9d6Q4bePcy",
"Y9w6Q8TKlR",
"S8PZa_0k_3A",
"mcNE3lJU3H",
"mBWRecZXZsq",
"LT6IBVk3Vv_",
"4xSktAK610X",
"EAygiK-UA3y",
"Wd5eYQcMHxt",
"NGY5UeizuQ",
"SMWJAD_27YM",
"zr8shnWV7b",
"5QHNkbcglY6",
"MB988hen082",
"z5236qSedy5",
"3M8rcaisawC",
"bGfKKgHseH4",
"ALAeX2gHFtT",
"G2_Y_T6hMK6",
... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"autho... | [
" We thank the reviewer for reading our responses and the follow-up questions. \n\n1. Note that our analysis on SGD and PAGER under the global KL condition throughout the paper does not require boundedness on the iterates. This side remark we had on the potential connection to stochastic convex optimization says th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
5
] | [
"Y9w6Q8TKlR",
"ALAeX2gHFtT",
"mcNE3lJU3H",
"G2_Y_T6hMK6",
"pvuYudOuKsT",
"EBD57DQqbdF",
"gvuRcFktbZT",
"nips_2022_4FSfANJp8Qx",
"pvuYudOuKsT",
"pvuYudOuKsT",
"pvuYudOuKsT",
"pvuYudOuKsT",
"pvuYudOuKsT",
"m7pMNdjWT9t",
"m7pMNdjWT9t",
"EBD57DQqbdF",
"EBD57DQqbdF",
"gvuRcFktbZT",
"g... |
nips_2022_XSV1T9jMuz9 | GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis | Despite achieving superior performance in human-level control problems, unlike humans, deep reinforcement learning (DRL) lacks high-order intelligence (e.g., logic deduction and reuse), thus it behaves ineffectively than humans regarding learning and generalization in complex problems. Previous works attempt to directly synthesize a white-box logic program as the DRL policy, manifesting logic-driven behaviors. However, most synthesis methods are built on imperative or declarative programming, and each has a distinct limitation, respectively. The former ignores the cause-effect logic during synthesis, resulting in low generalizability across tasks. The latter is strictly proof-based, thus failing to synthesize programs with complex hierarchical logic. In this paper, we combine the above two paradigms together and propose a novel Generalizable Logic Synthesis (GALOIS) framework to synthesize hierarchical and strict cause-effect logic programs. GALOIS leverages the program sketch and defines a new sketch-based hybrid program language for guiding the synthesis. Based on that, GALOIS proposes a sketch-based program synthesis method to automatically generate white-box programs with generalizable and interpretable cause-effect logic. Extensive evaluations on various decision-making tasks with complex logic demonstrate the superiority of GALOIS over mainstream baselines regarding the asymptotic performance, generalizability, and great knowledge reusability across different environments. | Accept | All reviewers liked the presented GALOIS framework to synthesize hierarchical and cause-effect logic programs that represent policy for solving a task. The use of program sketches to represent the search space and using policy gradients to learn the programs is also quite interesting. However, there were some concerns about the generality of the approach, the amount of human knowledge required in the provided sketch, limited evaluation environments, and scalability of the synthesis approach. The author response helped quite a bit towards many of these concerns. It would be great to incorporate the additional discussions and evaluations from the response in the next paper version. | train | [
"gMHB-vtNGs9",
"idw0-au5TFn",
"JXtoiPh7Ig",
"qBnLT99bC6U",
"MKpEZDi36ec",
"tZpAXDz-Rys",
"000aP3sGHxA",
"5FTDFKkFNVV",
"jVUdX49ong",
"KBsBMEFPuV",
"MxPRCQYHRb9",
"sHm2nc2tWxf",
"aK4Y2zKle00",
"Jf7nsceOLKv",
"8TTtHUYYx4C",
"xj2KyjreKU",
"Spj80rqXv5e",
"vsepat1_bU"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the recognition of our work!\n\nThis work aims at synthesizing a non-strict declarative program (to our knowledge) for the first time. For this, the program sketch-based synthesis paradigm is required as it is one of the most effective ways to generate complex programs (with control flow... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
5
] | [
"JXtoiPh7Ig",
"qBnLT99bC6U",
"tZpAXDz-Rys",
"8TTtHUYYx4C",
"vsepat1_bU",
"Spj80rqXv5e",
"xj2KyjreKU",
"jVUdX49ong",
"vsepat1_bU",
"MxPRCQYHRb9",
"sHm2nc2tWxf",
"aK4Y2zKle00",
"Spj80rqXv5e",
"8TTtHUYYx4C",
"xj2KyjreKU",
"nips_2022_XSV1T9jMuz9",
"nips_2022_XSV1T9jMuz9",
"nips_2022_XS... |
nips_2022_flNZJ2eOet | What Can Transformers Learn In-Context? A Case Study of Simple Function Classes | In-context learning is the ability of a model to condition on a prompt sequence consisting of in-context examples (input-output pairs corresponding to some task) along with a new query input, and generate the corresponding output. Crucially, in-context learning happens only at inference time without any parameter updates to the model. While large language models such as GPT-3 exhibit some ability to perform in-context learning, it is unclear what the relationship is between tasks on which this succeeds and what is present in the training data. To investigate this, we consider the problem of training a model to in-context learn a function class (e.g., linear functions): given data derived from some functions in the class, can we train a model (e.g., a Transformer) to in-context learn most functions from that class? We show empirically that standard Transformers can be trained from scratch to perform in-context learning of linear functions---that is, the trained model is able to learn unseen linear functions from in-context examples with performance comparable to the optimal least squares estimator. In fact, in-context learning is possible even under two forms of distribution shift: (i) between the training data of the Transformer and inference-time prompts, and (ii) between the in-context examples and the query input during inference. We also show that we can train Transformers to in-context learn more complex function classes: sparse linear functions where the model outperforms least squares and nearly matches the performance of Lasso, and two-layer neural networks where the model performs comparably to neural networks trained on in-context examples using gradient descent. | Accept | This paper demonstrates compellingly that transformers are able to in-context learn simple function classes (e.g., linear functions), to the extend that they can recover solutions from algorithms like LASSO. The experiments are well designed and executed, which lead to surprising and intriguing results. While the paper does not provide any explanation for why transformers exhibit such capabilities, it will spur both empirical and theoretical work studying how transformers learn algorithms from in-context examples. Congratulations on a nice work! | train | [
"t9soEx5o9DF",
"8Cu6Vr797Fa",
"IS5vp_aC0my",
"Wr5D-lDKuYZ",
"3YG9E7LuwAu",
"-nU8BrNgMa",
"wgL7RcBuvLU",
"p-d50mApUVV",
"EZBy9imkWUY",
"KfYGp2F1872"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the additional experiment added above, in my opinion, it is quite surprising (?) that in-context-learning is working non-trivially with such small training size. ",
" We thank the reviewer for their positive comments and feedback.\n\nThe main weakness pointed out is the connection of our work to th... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"IS5vp_aC0my",
"KfYGp2F1872",
"EZBy9imkWUY",
"3YG9E7LuwAu",
"p-d50mApUVV",
"wgL7RcBuvLU",
"nips_2022_flNZJ2eOet",
"nips_2022_flNZJ2eOet",
"nips_2022_flNZJ2eOet",
"nips_2022_flNZJ2eOet"
] |
nips_2022_-Oh_TKISy89 | A Scalable Deterministic Global Optimization Algorithm for Training Optimal Decision Tree | The training of optimal decision tree via mixed-integer programming (MIP) has attracted much attention in recent literature. However, for large datasets, state-of-the-art approaches struggle to solve the optimal decision tree training problems to a provable global optimal solution within a reasonable time. In this paper, we reformulate the optimal decision tree training problem as a two-stage optimization problem and propose a tailored reduced-space branch and bound algorithm to train optimal decision tree for the classification tasks with continuous features. We present several structure-exploiting lower and upper bounding methods. The computation of bounds can be decomposed into the solution of many small-scale subproblems and can be naturally parallelized. With these bounding methods, we prove that our algorithm can converge by branching only on variables representing the optimal decision tree structure, which is invariant to the size of datasets. Moreover, we propose a novel sample reduction method that can predetermine the cost of part of samples at each BB node. Combining the sample reduction method with the parallelized bounding strategies, our algorithm can be extremely scalable. Our algorithm can find global optimal solutions on dataset with over 245,000 samples (1000 cores, less than 1% optimality gap, within 2 hours). We test 21 real-world datasets from UCI Repository. The results reveal that for datasets with over 7,000 samples, our algorithm can, on average, improve the training accuracy by 3.6% and testing accuracy by 2.8%, compared to the current state-of-the-art. | Accept | This paper proposes a new approach for computing globally optimal decision trees that is more scalable than prior work. Reviewers gshW, fTBj, and 25Mp are all in favor of accepting the paper, with the main strengths being that:
1. The algorithmic approach is novel and improves scalability of learning globally optimal decision trees.
1. Experiments show that learning globally optimal decision trees improves over baseline methods.
1. The proposed algorithm is empirically shown to be more scalable than prior work.
Reviewer v4RW argues that the paper should be rejected. Their main concern is that the authors select a single regularization parameter for the entire study, seemingly arbitrarily. In response, the authors argue that keeping the regularization parameter fixed when comparing between their proposed method and baselines is to demonstrate the increased accuracy of the learned tree due to optimally solving the optimization problem. They authors also provided new experimental results during the discussion period where the regularization parameter is tuned for each method and dataset. Overall I am convinced by the authors response, and reviewer v4RW did not respond during the discussion period.
The other weaknesses discussed by the reviewers are:
1. Reviewer fTBj argued that the experiments were missing important baselines, experiments were only performed with depth-2 trees, and that the experiments would benefit from an ablation study showing how each component of the proposed algorithm helps performance. In the discussion period the authors provided results with one additional baseline, trees of depth 3, larger datasets, and an ablation study. Reviewer fTBj was encouraged by these results and raised their score as a result.
1. Reviewer 25Mp asked for clarification on how the proposed method differs from Cao and Zavala (2019) and Hua et al. (2021). In the discussion period, the authors point out a number of technical differences between their work and the prior work, and committed to including this discussion in the paper. I am generally convinced by their arguments and reviewer 25Mp did not respond during the discussion period.
Overall, I feel that the points in favor of the paper outweigh the points against.
| train | [
"vGCSo2uAWCq",
"qrK4U6WZrHY",
"VgUfiNkc9xm",
"vFpk5l6Ni",
"DEjMXzRSS3A",
"ry_fr2vsdf",
"3soZ_CCd2v",
"1IBE9Z66XdT",
"3dZAkD47aYj",
"-RF0BucWuKR",
"_7lOIsqP8r1",
"Y96XV5ceg1I",
"Odwws8g_fG4"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We performed additional experiments (we just exhausted our HPC resources:)) and hope the new results can resolve the reviewer's concerns. We are glad to receive the valuable comments, and it is always welcome to ask more questions about our response. We hope that through the rebuttal and discussion, we will convi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
3
] | [
"ry_fr2vsdf",
"VgUfiNkc9xm",
"vFpk5l6Ni",
"DEjMXzRSS3A",
"_7lOIsqP8r1",
"3soZ_CCd2v",
"Odwws8g_fG4",
"Y96XV5ceg1I",
"-RF0BucWuKR",
"nips_2022_-Oh_TKISy89",
"nips_2022_-Oh_TKISy89",
"nips_2022_-Oh_TKISy89",
"nips_2022_-Oh_TKISy89"
] |
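
The two numeric arrays that close each record above pad non-review turns (author and meta comments) with -1, so only the positive entries are actual review scores. A minimal plain-Python sketch for averaging the reviewer scores under that convention (the function name is ours, not part of the dataset):

```python
def mean_reviewer_score(scores):
    """Average a per-record score array, skipping the -1 placeholders
    that mark author/meta turns rather than official reviews."""
    valid = [s for s in scores if s != -1]
    return sum(valid) / len(valid) if valid else float("nan")

# The rating and confidence arrays of the record above.
ratings = [-1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 4]
confidences = [-1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 3]
print(mean_reviewer_score(ratings))      # 5.5
print(mean_reviewer_score(confidences))  # 3.25
```
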
nips_2022_q85GV4aSpt | Understanding Square Loss in Training Overparametrized Neural Network Classifiers | Deep learning has achieved many breakthroughs in modern classification tasks. Numerous architectures have been proposed for different data structures, but when it comes to the loss function, the cross-entropy loss is the predominant choice. Recently, several alternative losses have seen revived interest for deep classifiers. In particular, empirical evidence seems to promote square loss, but a theoretical justification is still lacking. In this work, we contribute to the theoretical understanding of square loss in classification by systematically investigating how it performs for overparametrized neural networks in the neural tangent kernel (NTK) regime. Interesting properties regarding the generalization error, robustness, and calibration error are revealed. We consider two cases, according to whether classes are separable or not. In the general non-separable case, a fast convergence rate is established for both the misclassification rate and the calibration error. When classes are separable, the misclassification rate improves to be exponentially fast. Further, the resulting margin is proven to be lower bounded away from zero, providing theoretical guarantees for robustness. We expect our findings to hold beyond the NTK regime and translate to practical settings. To this end, we conduct extensive empirical studies on practical neural networks, demonstrating the effectiveness of square loss in both synthetic low-dimensional data and real image data. Compared to cross-entropy, square loss has comparable generalization error but noticeable advantages in robustness and model calibration. | Accept | This paper provides a theoretical investigation of square loss in the NTK regime, due to a recent observation: empirical evidence seems to promote square loss over cross-entropy loss. The authors provide a generalization error bound (in this work this refers to the population risk), a calibration error bound, and robustness guarantees (in terms of lower bounds on the margin). The theoretical analysis justifies the benefits of square loss. Reviewers all agree that the analysis is novel. Three reviewers think the experimental results might need further improvement. I suggest the authors strengthen the experiments in the final version. Overall, I recommend acceptance.
| train | [
"nziJ_xuoM1",
"H8ijLPDyHC",
"1uYTvzGaOBm",
"Lmcs4eJI9Ft",
"-NZMYjCRP3q",
"TyQv-gig2a",
"DRKrQXg8s5k",
"mb-mpplmsDl",
"7F8-KolBwCP",
"F_Jfyx8mma1",
"eBRSqD_H1hY",
"1VRNSye6Alb"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification!",
" Thank you for the question!\n\nFor the computation, we did not use gradient ascent, but a less efficient grid search method, since we are considering a very simple toy case. \nWe divide $[-1, 1]^2$ into 2000 by 2000 grids of equal size. For each grid (not on the boundary), we c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"H8ijLPDyHC",
"1uYTvzGaOBm",
"Lmcs4eJI9Ft",
"1VRNSye6Alb",
"eBRSqD_H1hY",
"DRKrQXg8s5k",
"F_Jfyx8mma1",
"7F8-KolBwCP",
"nips_2022_q85GV4aSpt",
"nips_2022_q85GV4aSpt",
"nips_2022_q85GV4aSpt",
"nips_2022_q85GV4aSpt"
] |
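
The record above contrasts square loss with cross-entropy for classification. As a concrete reference point (an illustrative sketch only, not the paper's NTK analysis), here is how the two losses differ on a single 3-class example with a one-hot target:

```python
import numpy as np

logits = np.array([2.0, 0.5, -1.0])   # network outputs for one sample
target = np.array([1.0, 0.0, 0.0])    # one-hot label

# Cross-entropy on softmax probabilities (the predominant choice).
probs = np.exp(logits - logits.max())
probs /= probs.sum()
cross_entropy = -np.sum(target * np.log(probs))

# Square loss applied directly to the outputs against the one-hot
# encoding -- one common form of the alternative studied above.
square_loss = np.mean((logits - target) ** 2)

print(cross_entropy, square_loss)
```
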
nips_2022_5dHQyEcYDgA | Additive MIL: Intrinsically Interpretable Multiple Instance Learning for Pathology | Multiple Instance Learning (MIL) has been widely applied in pathology towards solving critical problems such as automating cancer diagnosis and grading, predicting patient prognosis, and therapy response. Deploying these models in a clinical setting requires careful inspection of these black boxes during development and deployment to identify failures and maintain physician trust. In this work, we propose a simple formulation of MIL models, which enables interpretability while maintaining similar predictive performance. Our Additive MIL models enable spatial credit assignment such that the contribution of each region in the image can be exactly computed and visualized. We show that our spatial credit assignment coincides with regions used by pathologists during diagnosis and improves upon classical attention heatmaps from attention MIL models. We show that any existing MIL model can be made additive with a simple change in function composition. We also show how these models can debug model failures, identify spurious features, and highlight class-wise regions of interest, enabling their use in high-stakes environments such as clinical decision-making. | Accept | The paper proposes Additive MIL, a formulation based on generalized additive models, for performing MIL on digital pathology datasets in order to improve model interpretability.
The reviewers find that the paper is well written and describes the problem setting as well as their solution well.
The method is considered a valuable way to provide interpretability, is general and novel.
The empirical evaluation shows the benefit on three different data sets.
The reviewers agree on acceptance of the paper.
| train | [
"MOzLUtiaGnS",
"gVUfduIKUrd",
"Xhwbl6PyYoV",
"Rn6_cLpqmii",
"Qe-UemsD3Te",
"04GSqryF0gg",
"S5UGc66H3vp",
"EWSJ0NkG9Vq"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my comments. The additional experiment in the supplement is helpful, and could be worth highlighting in the main text. All of my other concerns are address, and I am keeping my original rating.",
" I appreciate the author's response. In particular the clarification of the theoretical analy... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Xhwbl6PyYoV",
"S5UGc66H3vp",
"EWSJ0NkG9Vq",
"S5UGc66H3vp",
"04GSqryF0gg",
"nips_2022_5dHQyEcYDgA",
"nips_2022_5dHQyEcYDgA",
"nips_2022_5dHQyEcYDgA"
] |
nips_2022_CT5KJGfX4s- | Undersampling is a Minimax Optimal Robustness Intervention in Nonparametric Classification | While a broad range of techniques have been proposed to tackle distribution shift, the simple baseline of training on an \emph{undersampled} dataset often achieves close to state-of-the-art accuracy across several popular benchmarks. This is rather surprising, since undersampling algorithms discard excess majority group data. To understand this phenomenon, we ask if learning is fundamentally constrained by a lack of minority group samples. We prove that this is indeed the case in the setting of nonparametric binary classification. Our results show that in the worst case, an algorithm cannot outperform undersampling unless there is a high degree of overlap between the train and test distributions (which is unlikely to be the case in real-world datasets), or if the algorithm leverages additional structure about the distribution shift. In particular, in the case of label shift we show that there is always an undersampling algorithm that is minimax optimal. While in the case of group-covariate shift we show that there is an undersampling algorithm that is minimax optimal when the overlap between the group distributions is small. We also perform an experimental case study on a label shift dataset and find that in line with our theory the test accuracy of robust neural network classifiers is constrained by the number of minority samples. | Reject | This paper provides bounds and some empirical results on specific distribution shift scenarios, where there is a majority and a minority group (group identity is known to the learner), and while at training time the data from the two groups is unbalanced, the test distribution is assumed to be a balanced mixture. The paper considers two specific scenarios, one where a type of covariate shift and one where a type of label shift is induced.
This submission considers the non-parametric setting, a one-dimensional feature space (examples are assumed to be from [0,1] x {-1, +1}) and Lipschitz continuity for the conditional label probability function. It then analyzes error rates for the above two scenarios and also provides some empirical confirmation that "undersampling", namely subsampling from the majority group so that the two groups are balanced at training time, is minimax optimal.
Given the large literature on learning bounds for domain adaptation (for parametric, but also for non-parametric learning), this submission appears surprisingly unaware of these existing studies and bounds. This literature should be acknowledged and compared to in detail before publication (if the results in here are not in fact just specific cases of known results). I can therefore not support acceptance, despite the reviewers' positive recommendations.
Steve Hanneke, Samory Kpotufe:
On the Value of Target Data in Transfer Learning. NeurIPS 2019: 9867-9877
Samory Kpotufe, Guillaume Martinet:
Marginal Singularity, and the Benefits of Labels in Covariate-Shift. COLT 2018: 1882-1886
Christopher Berlind, Ruth Urner:
Active Nearest Neighbors in Changing Environments. ICML 2015: 1870-1879
Shai Ben-David, Ruth Urner:
Domain adaptation-can quantity compensate for quality? Ann. Math. Artif. Intell. 70(3): 185-202 (2014)
Shai Ben-David, Ruth Urner:
On the Hardness of Domain Adaptation and the Utility of Unlabeled Target Samples. ALT 2012: 139-153
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, Jennifer Wortman Vaughan:
A theory of learning from different domains. Mach. Learn. 79(1-2): 151-175 (2010)
Shai Ben-David, Tyler Lu, Teresa Luu, Dávid Pál:
Impossibility Theorems for Domain Adaptation. AISTATS 2010: 129-136
Shai Ben-David, John Blitzer, Koby Crammer, Fernando Pereira:
Analysis of Representations for Domain Adaptation. NIPS 2006: 137-144
| train | [
"OZwgmAsYNsF",
"_xM4GeI-3Ff",
"kXA5GxfZr1t",
"9D-qC4Tdkuc",
"7FxNZUYebly7",
"OIrr68_oN2P",
"PQGb3FBV-CW",
"O1-2f-LlFVC",
"rY-RV0qIEF",
"iosBNa0iBnm",
"k8DfxIsyXNr",
"V_3sIJlyKzv"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer again for their thorough review and multiple suggestions!\n\nWe would like happy to make the constants explicit in Theorems 4.1 and 4.2, and shall also correct the typos identified. Also, as promised earlier we are in the process of drafting a proof sketch and adding in the min... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"_xM4GeI-3Ff",
"V_3sIJlyKzv",
"O1-2f-LlFVC",
"7FxNZUYebly7",
"OIrr68_oN2P",
"rY-RV0qIEF",
"iosBNa0iBnm",
"k8DfxIsyXNr",
"V_3sIJlyKzv",
"nips_2022_CT5KJGfX4s-",
"nips_2022_CT5KJGfX4s-",
"nips_2022_CT5KJGfX4s-"
] |
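
The undersampling baseline at the center of the record above simply discards excess majority-group data before training. A minimal numpy sketch under the stated setup (binary group label, majority coded as 1; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def undersample(X, y, g):
    """Subsample the majority group (g == 1) down to the size of the
    minority group (g == 0), yielding a group-balanced training set."""
    n_min = int(np.sum(g == 0))
    majority = np.flatnonzero(g == 1)
    keep = rng.choice(majority, size=n_min, replace=False)
    idx = np.concatenate([np.flatnonzero(g == 0), keep])
    return X[idx], y[idx], g[idx]

# Toy data: 1000 majority samples, 50 minority samples.
X = rng.normal(size=(1050, 5))
y = rng.integers(0, 2, size=1050)
g = np.concatenate([np.ones(1000, dtype=int), np.zeros(50, dtype=int)])
Xb, yb, gb = undersample(X, y, g)
print(np.bincount(gb))  # [50 50]
```
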
nips_2022_PLmNPSKJr8e | Minimax Optimal Fixed-Budget Best Arm Identification in Linear Bandits | We study the problem of best arm identification in linear bandits in the fixed-budget setting. By leveraging properties of the G-optimal design and incorporating it into the arm allocation rule, we design a parameter-free algorithm, Optimal Design-based Linear Best Arm Identification (OD-LinBAI). We provide a theoretical analysis of the failure probability of OD-LinBAI. Instead of all the optimality gaps, the performance of OD-LinBAI depends only on the gaps of the top $d$ arms, where $d$ is the effective dimension of the linear bandit instance. Complementarily, we present a minimax lower bound for this problem. The upper and lower bounds show that OD-LinBAI is minimax optimal up to constant multiplicative factors in the exponent, which is a significant theoretical improvement over existing methods (e.g., BayesGap, Peace, LinearExploration and GSE), and settles the question of ascertaining the difficulty of learning the best arm in the fixed-budget setting. Finally, numerical experiments demonstrate considerable empirical improvements over existing algorithms on a variety of real and synthetic datasets. | Accept | This paper proposed an algorithm and presented a good theoretical evaluation. In particular, the tight lower bound is a nice theoretical contribution.
It is also good that the paper appears to be descriptively and mathematically sound.
The novelty of the algorithm has been questioned by some reviewers, but even so, the contribution is shown to offer a sufficient advantage.
| train | [
"CvAJFPJiDV",
"y7YiIuB_9j0",
"ORk-OwmD1Gm",
"dlS6EYjKVBS",
"y6EjdKQzBm6",
"Bk5REC7t9zw",
"-0qsTEP504V",
"-EB0lZBfJf2",
"_ToLRD1crt6"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your prompt reply! Thanks, in particular, for your response to our query regarding weak/borderline accept. If you have any further comments or questions, we are more than happy to answer them in the author-reviewer discussion period.",
" We thank the reviewer for the meticulous reading and constructi... | [
-1,
-1,
-1,
-1,
-1,
7,
8,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"ORk-OwmD1Gm",
"_ToLRD1crt6",
"-EB0lZBfJf2",
"-0qsTEP504V",
"Bk5REC7t9zw",
"nips_2022_PLmNPSKJr8e",
"nips_2022_PLmNPSKJr8e",
"nips_2022_PLmNPSKJr8e",
"nips_2022_PLmNPSKJr8e"
] |
nips_2022_PDNEqcU-pP | Learning with little mixing | We study square loss in a realizable time-series framework with martingale difference noise. Our main result is a fast rate excess risk bound which shows that whenever a trajectory hypercontractivity condition holds, the risk of the least-squares estimator on dependent data matches the iid rate order-wise after a burn-in time. In comparison, many existing results in learning from dependent data have rates where the effective sample size is deflated by a factor of the mixing-time of the underlying process, even after the burn-in time. Furthermore, our results allow the covariate process to exhibit long range correlations which are substantially weaker than geometric ergodicity. We call this phenomenon learning with little mixing, and present several examples for when it occurs: bounded function classes for which the $L^2$ and $L^{2+\epsilon}$ norms are equivalent, finite state irreducible and aperiodic Markov chains, various parametric models, and a broad family of infinite dimensional $\ell^2(\mathbb{N})$ ellipsoids. By instantiating our main result to system identification of nonlinear dynamics with generalized linear model transitions, we obtain a nearly minimax optimal excess risk bound after only a polynomial burn-in time.
| Accept | This paper studies the problem of learning under dependent data. Existing bounds usually work by deflating the effective sample size by a factor that depends on the mixing time. Essentially, when the samples are far enough away from each other, depending on the mixing time, they can be treated as independent. This paper introduces a new framework that they call the trajectory hypercontractivity condition, which stipulates that there is sublinear growth in the dependency matrix. This is a flexible perspective, and the paper derives both general results and applies them in interesting settings. There are some weaknesses, e.g. they cannot recover results in the marginally stable case or in settings with unbounded noise. For example, as the maximum eigenvalue approaches one, the burn-in blows up. I think reviewer SNkB's perfunctory review should be ignored. The paper is somewhat borderline, but in my opinion it is technically stronger and more interesting than some of the other borderline papers in my batch. I recommend acceptance. | train | [
"2UWjiWKhZcv",
"OUIzcf64pgO",
"aX6rHhznHHU",
"FfTIxNdNkBp",
"bQBff4fQCZ",
"HtyQEUGM89",
"mYnFGydOieV",
"7OX8I0N8XVo",
"DYjTc0s5JLV",
"LINI2ZYDscI"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As the reviewer/author discussion period ends tomorrow, if there are any additional points regarding our response that the reviewers would like clarified, please don't hesitate to reach out with further questions. ",
" We thank reviewer U6WV for their time and effort to review this submission. We note that revi... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
3,
2
] | [
"nips_2022_PDNEqcU-pP",
"LINI2ZYDscI",
"DYjTc0s5JLV",
"7OX8I0N8XVo",
"mYnFGydOieV",
"nips_2022_PDNEqcU-pP",
"nips_2022_PDNEqcU-pP",
"nips_2022_PDNEqcU-pP",
"nips_2022_PDNEqcU-pP",
"nips_2022_PDNEqcU-pP"
] |
nips_2022_Qry8exovcNA | Explaining Graph Neural Networks with Structure-Aware Cooperative Games | Explaining machine learning models is an important and increasingly popular area of research interest. The Shapley value from game theory has been proposed as a prime approach to compute feature importance towards model predictions on images, text, tabular data, and recently graph neural networks (GNNs) on graphs. In this work, we revisit the appropriateness of the Shapley value for GNN explanation, where the task is to identify the most important subgraph and constituent nodes for GNN predictions. We claim that the Shapley value is a non-ideal choice for graph data because it is by definition not structure-aware. We propose a Graph Structure-aware eXplanation (GStarX) method to leverage the critical graph structure information to improve the explanation. Specifically, we define a scoring function based on a new structure-aware value from the cooperative game theory proposed by Hamiache and Navarro (HN). When used to score node importance, the HN value utilizes graph structures to attribute cooperation surplus between neighbor nodes, resembling message passing in GNNs, so that node importance scores reflect not only the node feature importance, but also the node structural roles. We demonstrate that GStarX produces qualitatively more intuitive explanations, and quantitatively improves explanation fidelity over strong baselines on chemical graph property prediction and text graph sentiment classification. Code: https://github.com/ShichangZh/GStarX
| Accept | The paper considered the task of identifying the most important subgraph and constituent nodes for graph-level predictions. It argued that the popular Shapley value is non-ideal since it's not structure-aware. It then proposed a structure-aware method, a scoring function based on the HN value from cooperative game theory. Empirical results show that the method qualitatively and quantitatively improves over strong baselines.
The paper considers an important topic and has made novel and interesting contributions: the discussion that the Shapley value is not ideal for graph explanation, the non-trivial application of the HN value in this context, the well-designed method, and the thorough study showing its effectiveness. The authors have addressed the comments by the reviewers well. | train | [
"jkVNQDZUqT",
"VSnE_eeAjZ",
"Kf1nPMKMov9",
"Pi3r1jTJudm",
"5ZUOVw0r1o",
"IQ-Tt_AAgzr",
"29qlDlRqBNd",
"YINUEKe7PiaH",
"o0nWt3eJc0k",
"s1ak2Hr5_lq",
"hEUVB0RVAjP"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We truly appreciate the constructive comments from all reviewers. We briefly summarize the major changes we made during the rebuttal period below. The paper PDF is also revised accordingly with changes highlighted in red.\n \n* Add a detailed quantitative study of Figure 1(b) in Section 5.2\n* Add entropy-based s... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"nips_2022_Qry8exovcNA",
"IQ-Tt_AAgzr",
"YINUEKe7PiaH",
"hEUVB0RVAjP",
"s1ak2Hr5_lq",
"s1ak2Hr5_lq",
"o0nWt3eJc0k",
"o0nWt3eJc0k",
"nips_2022_Qry8exovcNA",
"nips_2022_Qry8exovcNA",
"nips_2022_Qry8exovcNA"
] |
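
The GStarX record above builds on the Shapley value from cooperative game theory. For readers unfamiliar with it, here is a self-contained sketch computing exact Shapley values by enumerating join orders on a toy 3-player game (this illustrates the classical value only, not the paper's structure-aware HN value):

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orders in which players can join."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            phi[p] += value(coalition) - before
    return {p: v / len(orders) for p, v in phi.items()}

# Toy characteristic function: players 0 and 1 are only valuable
# together; player 2 contributes nothing.
def v(S):
    return 1.0 if {0, 1} <= S else 0.0

print(shapley([0, 1, 2], v))  # {0: 0.5, 1: 0.5, 2: 0.0}
```
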
nips_2022_SrwrRP3yfq8 | Global Optimal K-Medoids Clustering of One Million Samples | We study the deterministic global optimization of the K-Medoids clustering problem. This work proposes a branch and bound (BB) scheme, in which a tailored Lagrangian relaxation method proposed in the 1970s is used to provide a lower bound at each BB node. The lower bounding method already guarantees the maximum gap at the root node. A closed-form solution to the lower bound can be derived analytically without explicitly solving any optimization problems, and its computation can be easily parallelized. Moreover, with this lower bounding method, finite convergence to the global optimal solution can be guaranteed by branching only on the regions of medoids. We also present several tailored bound tightening techniques to reduce the search space and computational cost. Extensive computational studies on 28 machine learning datasets demonstrate that our algorithm can provide a provable global optimal solution with an optimality gap of 0.1\% within 4 hours on datasets with up to one million samples. Besides, our algorithm can obtain better or equal objective values than the heuristic method. A theoretical proof of global convergence for our algorithm is also presented.
| Accept | The authors solve an important theoretical problem via a nice connection to Lagrangian relaxation for k-medoids and k-medoids on samples. The paper also has significant experimental results and is well-written; information from the rebuttal should be incorporated into the final version. I recommend acceptance.
| train | [
"wlsdExF5msG",
"lczhfjgi-Ct",
"D5uKvQ_Y_gR",
"BDwkXGT4qoA",
"d3l4d66T8M",
"RgbhtvZ_0G",
"7wpuaa8c5Ink",
"a1ZGQTETFj_",
"shd3q91LxqY",
"AThYQU2gVm",
"kS1AceCRolSG",
"YqgxTzq3VBhC",
"Cd8-fdQvFKzU",
"sN7nBna7Mmf",
"FqJtFFnhIB",
"7SQskK5FYm",
"1AGf341KU2B"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for providing more numbers. Though the guaranteed optimality could be meaningful in certain cases, I am still questioning the practical usage of the proposed method in the task of clustering, due to the slow speed. As the paper successfully connect Cornuejols' work with BB method, I update my score to bord... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
3
] | [
"BDwkXGT4qoA",
"d3l4d66T8M",
"d3l4d66T8M",
"7wpuaa8c5Ink",
"RgbhtvZ_0G",
"kS1AceCRolSG",
"a1ZGQTETFj_",
"shd3q91LxqY",
"1AGf341KU2B",
"kS1AceCRolSG",
"7SQskK5FYm",
"FqJtFFnhIB",
"sN7nBna7Mmf",
"nips_2022_SrwrRP3yfq8",
"nips_2022_SrwrRP3yfq8",
"nips_2022_SrwrRP3yfq8",
"nips_2022_SrwrR... |
nips_2022_zofwPmKL-DO | Quantum Algorithms for Sampling Log-Concave Distributions and Estimating Normalizing Constants | Given a convex function $f\colon\mathbb{R}^{d}\to\mathbb{R}$, the problem of sampling from a distribution $\propto e^{-f(x)}$ is called log-concave sampling. This task has wide applications in machine learning, physics, statistics, etc. In this work, we develop quantum algorithms for sampling log-concave distributions and for estimating their normalizing constants $\int_{\mathbb{R}^{d}}e^{-f(x)}\,\mathrm{d}x$. First, we use underdamped Langevin diffusion to develop quantum algorithms that match the query complexity (in terms of the condition number $\kappa$ and dimension $d$) of analogous classical algorithms that use gradient (first-order) queries, even though the quantum algorithms use only evaluation (zeroth-order) queries. For estimating normalizing constants, these algorithms also achieve quadratic speedup in the multiplicative error $\epsilon$. Second, we develop quantum Metropolis-adjusted Langevin algorithms with query complexity $\widetilde{O}(\kappa^{1/2}d)$ and $\widetilde{O}(\kappa^{1/2}d^{3/2}/\epsilon)$ for log-concave sampling and normalizing constant estimation, respectively, achieving polynomial speedups in $\kappa,d,\epsilon$ over the best known classical algorithms by exploiting quantum analogs of the Monte Carlo method and quantum walks. We also prove a $1/\epsilon^{1-o(1)}$ quantum lower bound for estimating normalizing constants, implying near-optimality of our quantum algorithms in $\epsilon$. | Accept | This work considers the problem of sampling and normalizing constant estimation for log-concave distributions in the quantum setting.
Starting from state-of-the-art bounds for classical algorithms, the authors show how to achieve quantum speedup in a number of settings.
A quantum lower bound for normalizing constant estimation is also derived (as a function of the desired accuracy $\epsilon$). This submission initiates a natural direction --- using quantum algorithms to speed up gradient-based MCMC algorithms --- and obtains interesting and non-trivial improvements in some base cases. The expert reviewers who read the paper in depth found this work conceptually interesting, well-written, and technically non-trivial. Consequently, I recommend acceptance. | train | [
"_O8edV8KA1V",
"O0IH5-w0ihI",
"uhmA1CeI2pS",
"4ywBDDgvIwL",
"0vjnLIR5AI",
"PiKsE2Ieyva"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their evaluation of our work., However, we strongly disagree with the characterization of our presentation of algorithms and of missing proofs of theorems/lemmas. We included detailed pseudocode for our sampling algorithms and our partition function estimation algorithms. Because we have... | [
-1,
-1,
-1,
6,
5,
3
] | [
-1,
-1,
-1,
3,
4,
5
] | [
"PiKsE2Ieyva",
"0vjnLIR5AI",
"4ywBDDgvIwL",
"nips_2022_zofwPmKL-DO",
"nips_2022_zofwPmKL-DO",
"nips_2022_zofwPmKL-DO"
] |
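
The quantum-sampling record above is about estimating normalizing constants $\int_{\mathbb{R}^{d}} e^{-f(x)}\,\mathrm{d}x$. A purely classical 1-D sanity check (a midpoint Riemann sum, nothing quantum) for $f(x)=x^{2}/2$, whose constant is $\sqrt{2\pi}$:

```python
import math

def normalizing_constant(f, lo=-10.0, hi=10.0, n=100_000):
    """Midpoint-rule approximation of the 1-D normalizing constant
    of exp(-f(x)); [lo, hi] must cover essentially all of the mass."""
    h = (hi - lo) / n
    return h * sum(math.exp(-f(lo + (i + 0.5) * h)) for i in range(n))

Z = normalizing_constant(lambda x: 0.5 * x * x)
print(Z, math.sqrt(2.0 * math.pi))  # both ~2.5066
```
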
nips_2022_rF6zwkyMABn | Nearly Optimal Best-of-Both-Worlds Algorithms for Online Learning with Feedback Graphs | This study considers online learning with general directed feedback graphs. For this problem, we present best-of-both-worlds algorithms that achieve nearly tight regret bounds for adversarial environments as well as poly-logarithmic regret bounds for stochastic environments. As \citet{alon2015online} have shown, tight regret bounds depend on the structure of the feedback graph: \textit{strongly observable} graphs yield minimax regret of $\tilde{\Theta}( \alpha^{1/2} T^{1/2} )$, while \textit{weakly observable} graphs induce minimax regret of $\tilde{\Theta}( \delta^{1/3} T^{2/3} )$, where $\alpha$ and $\delta$, respectively, represent the independence number of the graph and the domination number of a certain portion of the graph. Our proposed algorithm for strongly observable graphs has a regret bound of $\tilde{O}( \alpha^{1/2} T^{1/2} ) $ for adversarial environments, as well as of $ {O} ( \frac{\alpha (\ln T)^3 }{\Delta_{\min}} ) $ for stochastic environments, where $\Delta_{\min}$ expresses the minimum suboptimality gap. This result resolves an open question raised by \citet{erez2021towards}. We also provide an algorithm for weakly observable graphs that achieves a regret bound of $\tilde{O}( \delta^{1/3}T^{2/3} )$ for adversarial environments and poly-logarithmic regret for stochastic environments. The proposed algorithms are based on the follow-the-perturbed-leader approach combined with newly designed update rules for learning rates. | Accept | The paper received four reviews from experts in online learning, who all strongly support acceptance. As summarized very well in the reviews, this is a well-written paper that makes a solid and elegant contribution to the line of work on best-of-both-worlds online learning. The authors have effectively addressed the (relatively minor) concerns indicated in the reviews. I wholeheartedly recommend the paper is accepted.
One issue brought up in the reviews is the non-trivial overlap with a different paper entitled “A Near-Optimal Best-of-Both-Worlds Algorithm for Online Learning with Feedback Graphs”. While this situation did not affect the decision in any way, I urge the authors to properly cite this paper in their final version and discuss how it relates to their contribution.
"VH6dadyXyxA",
"dbBGJ1JumfP",
"spG6dKy-wrr",
"CQqLUwCOlwX",
"whmmicsUDCj",
"0-jiszRdSlK",
"jtqgiBZLDfj",
"rd3IMlm36HE"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your time and helpful suggestions.\nWe hope the following answers address your questions and concerns.\n\n> For weakly observable graphs, the choice of the regularizer is more mysterious to me. Maybe the authors can comment more about this, for example, the intuition to add $(1-x) \\ln (1-x)$ and $\... | [
-1,
-1,
-1,
-1,
8,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"whmmicsUDCj",
"0-jiszRdSlK",
"jtqgiBZLDfj",
"rd3IMlm36HE",
"nips_2022_rF6zwkyMABn",
"nips_2022_rF6zwkyMABn",
"nips_2022_rF6zwkyMABn",
"nips_2022_rF6zwkyMABn"
] |
nips_2022_UpNCpGvD96A | Identification, Amplification and Measurement: A bridge to Gaussian Differential Privacy | Gaussian differential privacy (GDP) is a single-parameter family of privacy notions that provides coherent guarantees to avoid the exposure of sensitive individual information. Despite the extra interpretability and tighter bounds under composition GDP provides, many widely used mechanisms (e.g., the Laplace mechanism) inherently provide GDP guarantees but often fail to take advantage of this new framework because their privacy guarantees were derived under a different background. In this paper, we study the asymptotic properties of privacy profiles and develop a simple criterion to identify algorithms with GDP properties. We propose an efficient method for GDP algorithms to narrow down possible values of an optimal privacy measurement, $\mu$ with an arbitrarily small and quantifiable margin of error. For non GDP algorithms, we provide a post-processing procedure that can amplify existing privacy guarantees to meet the GDP condition. As applications, we compare two single-parameter families of privacy notions, $\epsilon$-DP, and $\mu$-GDP, and show that all $\epsilon$-DP algorithms are intrinsically also GDP. Lastly, we show that the combination of our measurement process and the composition theorem of GDP is a powerful and convenient tool to handle compositions compared to the traditional standard and advanced composition theorems. | Accept | This paper expands our understanding of a recent variant of differential privacy, called Gaussian differential privacy, and studies its relationship with the standard definition. The reviewers all agree that the results are both important and interesting, and support accepting the paper. | train | [
"oytZAu-7-t",
"-_gLu4veX-3",
"Z9jUx0vic-Z",
"dY2XJtqGbgS",
"5BbNRF_F6GT",
"s5KE2QgTNV",
"EBOlptCm4Av",
"pqdhTigdHld",
"BmVh3zs9iQ",
"mQ-Kl-KA0nC"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much. We are grateful for the time and effort you put into improving our work.",
" Apologies to the authors for responding so late.\n\nI thank the authors for clarifying my questions. I understand that they cannot include Algorithm 3 in the first 9 pages, but I would love to see the importance of... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
3
] | [
"-_gLu4veX-3",
"s5KE2QgTNV",
"pqdhTigdHld",
"mQ-Kl-KA0nC",
"BmVh3zs9iQ",
"EBOlptCm4Av",
"nips_2022_UpNCpGvD96A",
"nips_2022_UpNCpGvD96A",
"nips_2022_UpNCpGvD96A",
"nips_2022_UpNCpGvD96A"
] |
nips_2022_akddwRG6EGi | High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation | We study the first gradient descent step on the first-layer parameters $\boldsymbol{W}$ in a two-layer neural network: $f(\boldsymbol{x}) = \frac{1}{\sqrt{N}}\boldsymbol{a}^\top\sigma(\boldsymbol{W}^\top\boldsymbol{x})$, where $\boldsymbol{W}\in\mathbb{R}^{d\times N}, \boldsymbol{a}\in\mathbb{R}^{N}$ are randomly initialized, and the training objective is the empirical MSE loss: $\frac{1}{n}\sum_{i=1}^n (f(\boldsymbol{x}_i)-y_i)^2$. In the proportional asymptotic limit where $n,d,N\to\infty$ at the same rate, and an idealized student-teacher setting where the teacher $f^*$ is a single-index model, we compute the prediction risk of ridge regression on the conjugate kernel after one gradient step on $\boldsymbol{W}$ with learning rate $\eta$. We consider two scalings of the first step learning rate $\eta$. For small $\eta$, we establish a Gaussian equivalence property for the trained feature map, and prove that the learned kernel improves upon the initial random features model, but cannot defeat the best linear model on the input. Whereas for sufficiently large $\eta$, we prove that for certain $f^*$, the same ridge estimator on trained features can go beyond this ``linear regime'' and outperform a wide range of (fixed) kernels. Our results demonstrate that even one gradient step can lead to a considerable advantage over random features, and highlight the role of learning rate scaling in the initial phase of training. | Accept | The paper studies a two-layer neural network in the setting of one step of gradient descent on the first layer (after random init) and then freezing the first layer and training the last layer. The study is in the proportional asymptotic limit of parameters going to infinity. They identify two regimes depending on the learning rate for the single step on the first layer: in the first regime the step can improve over the random features model but stays below the best linear predictor, and in the higher learning rate regime it can improve over the best linear predictor. The latter is established for a class of f* (student-teacher model assumed). The meta takeaway is a neat study of how feature learning can happen. Although the precise setting is somewhat restricted, the results are strong and can lead to further research in this important area.
Overall all the reviewers felt that the paper is a strong contribution to the NeurIPS community. I am happy to recommend acceptance. | train | [
"6Jq_b5NgTSp",
"BJtXgak8Nw",
"VlVJgfjSpY",
"8JLiKxQ6dH",
"VS2CC2lct2B",
"lDkSKrtcn3X",
"80tXsJ7Edbw"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewers and Area Chair, \n\nWe deeply appreciate your continuing time and effort to provide detailed comments on our paper. To best respond to your comments, we revised our paper with additional clarifying content as suggested by the reviewers. Below is a short summary of the updates. \n\n- We included a... | [
-1,
-1,
-1,
-1,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"nips_2022_akddwRG6EGi",
"80tXsJ7Edbw",
"lDkSKrtcn3X",
"VS2CC2lct2B",
"nips_2022_akddwRG6EGi",
"nips_2022_akddwRG6EGi",
"nips_2022_akddwRG6EGi"
] |
nips_2022_vfCd1Vt8BGq | On Leave-One-Out Conditional Mutual Information For Generalization | We derive information theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI). In contrast to other CMI bounds, which may be hard to evaluate in practice, our loo-CMI bounds are easier to compute and can be interpreted in connection to other notions such as classical leave-one-out cross-validation, stability of the optimization algorithm, and the geometry of the loss-landscape. It applies both to the output of training algorithms as well as their predictions. We empirically validate the quality of the bound by evaluating its predicted generalization gap in scenarios for deep learning. In particular, our bounds are non-vacuous on image-classification tasks. | Accept | The paper derives an information theoretic generalization bound based on a mutual information measure that relies on a leave-one-out procedure to reduce the cost of previous estimators. This estimator reduces the complexity from exponential ($2^N$ where $N$ is the sample size) to linear.
Overall, the general feeling among the reviewers and myself is that the paper does make a rather novel contribution, which is backed up by theoretical bounds. I think the reduction in terms of complexity from $2^N$ to $N$ is significant, but one could still argue this is not quite sufficient yet for modern tasks in machine learning, where $N$ can be very large, so evaluations of a large neural network can still be very costly. This aspect seems a bit hidden in the paper, where the experiments are only on very small datasets. The authors do not even seem to explicitly mention the size of the datasets.
Still, I think the theoretical contribution of the paper is significant enough, so I recommend acceptance, but I strongly encourage the authors to further discuss the computational aspect.
I would like to point out two additional relevant work:
1) https://arxiv.org/pdf/2206.14800.pdf
This paper seems to develop a very similar bound, also using a concept of mutual information and LOO.
2) https://arxiv.org/pdf/2203.03443.pdf
This paper reduces the computational aspect of LOO, although it seems to be only applicable to kernels. | train | [
"IWIOTEIrv8ZI",
"7yDhAkG2hcn",
"bfnYhvhnWDo",
"WZvi6jg68Ux",
"hYPc19DwYNU",
"aGyChG7aAU",
"JdbNfSb1vv5",
"GUwbBMJp27L",
"X5ZtgUlCaS6",
"GAwTpJWTq8C"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comment. We are glad to have been able to address your main concern. \n\n> Regarding Lemma 4.1, I suggest that you move the definition of boundedness to the main paper. I see now that relates to the learning rate.\n\nYou are correct in saying it is related to the learning rate. It is also relat... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2,
4
] | [
"7yDhAkG2hcn",
"WZvi6jg68Ux",
"JdbNfSb1vv5",
"GAwTpJWTq8C",
"GUwbBMJp27L",
"X5ZtgUlCaS6",
"nips_2022_vfCd1Vt8BGq",
"nips_2022_vfCd1Vt8BGq",
"nips_2022_vfCd1Vt8BGq",
"nips_2022_vfCd1Vt8BGq"
] |
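
The loo-CMI record above rests on cutting the number of required evaluations from exponential in the sample size to linear. A tiny arithmetic illustration of that gap (counts only; the bound itself is in the paper):

```python
# Evaluation counts: 2^N-style CMI estimators vs the leave-one-out
# variant, for a few sample sizes N.
for N in (10, 20, 50):
    print(f"N={N}: 2^N = {2**N:,} evaluations vs N = {N}")
```
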
nips_2022_H4GmqyYMxFP | Computationally Efficient Horizon-Free Reinforcement Learning for Linear Mixture MDPs | Recent studies have shown that episodic reinforcement learning (RL) is not more difficult than bandits, even with a long planning horizon and unknown state transitions. However, these results are limited to either tabular Markov decision processes (MDPs) or computationally inefficient algorithms for linear mixture MDPs. In this paper, we propose the first computationally efficient horizon-free algorithm for linear mixture MDPs, which achieves the optimal $\tilde O(d\sqrt{K} +d^2)$ regret up to logarithmic factors. Our algorithm adapts a weighted least square estimator for the unknown transitional dynamic, where the weight is both \emph{variance-aware} and \emph{uncertainty-aware}. When applying our weighted least square estimator to heterogeneous linear bandits, we can obtain an $\tilde O(d\sqrt{\sum_{k=1}^K \sigma_k^2} +d)$ regret in the first $K$ rounds, where $d$ is the dimension of the context and $\sigma_k^2$ is the variance of the reward in the $k$-th round. This also improves upon the best known algorithms in this setting when $\sigma_k^2$'s are known. | Accept | This work advances the state-of-the-art for horizon-free regret bounds for linear mixture MDP as well as heterogeneous linear bandits. Clear accept. | train | [
"uBh88rDlIqG",
"_AgT5uczP7",
"DAoiyUbFKfF",
"627bon7iuKy",
"YgpUqQSNsHps",
"rV7lPOqVOgk",
"QZdINLxhI_9",
"vSF0pzbocv",
"yZjVa6YHtyB",
"b7LJSpS3NgV"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your positive feedback!",
" After read the response, I decide to keep the score unchanged.",
" Thanks for your positive comments!\n\n**Q1** The contribution is not very significant. UCRL-VTR itself is not computationally-efficient beyond linear case. \n\n**A1** We respectfully disagree with the ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"_AgT5uczP7",
"rV7lPOqVOgk",
"b7LJSpS3NgV",
"yZjVa6YHtyB",
"vSF0pzbocv",
"QZdINLxhI_9",
"nips_2022_H4GmqyYMxFP",
"nips_2022_H4GmqyYMxFP",
"nips_2022_H4GmqyYMxFP",
"nips_2022_H4GmqyYMxFP"
] |
nips_2022_lmmKGi7zXn | Infinite Recommendation Networks: A Data-Centric Approach | We leverage the Neural Tangent Kernel and its equivalence to training infinitely-wide neural networks to devise $\infty$-AE: an autoencoder with infinitely-wide bottleneck layers. The outcome is a highly expressive yet simplistic recommendation model with a single hyper-parameter and a closed-form solution. Leveraging $\infty$-AE's simplicity, we also develop Distill-CF for synthesizing tiny, high-fidelity data summaries which distill the most important knowledge from the extremely large and sparse user-item interaction matrix for efficient and accurate subsequent data-usage like model training, inference, architecture search, etc. This takes a data-centric approach to recommendation, where we aim to improve the quality of logged user-feedback data for subsequent modeling, independent of the learning algorithm. We particularly utilize the concept of differentiable Gumbel-sampling to handle the inherent data heterogeneity, sparsity, and semi-structuredness, while being scalable to datasets with hundreds of millions of user-item interactions. Both of our proposed approaches significantly outperform their respective state-of-the-art and when used together, we observe $96-105$% of $\infty$-AE's performance on the full dataset with as little as $0.1$% of the original dataset size, leading us to explore the counter-intuitive question: Is more data what you need for better recommendation? | Accept |
This paper proposes an infinite-width autoencoder for recommendation, which is trained within the NTK framework via kernelized ridge regression. The approach struggles with the large size of the data. To this end, the authors propose a method for dataset summarization, called Distill-CF, for synthesizing tiny high-fidelity data summaries.
The paper received a mixed evaluation from the reviewers.
The strengths of the paper mentioned by the reviewers were:
- A simple model with only one hyper-parameter and a closed-form solution
- A thought-provoking and novel framework for recommendation
- Good performance in the experiments, relatively wide experimental study
On the other hand, the identified weaknesses were:
- The authors did not verify the general effectiveness of Distill-CF beyond its coupling with the infinite AE, so it is not clear where the actual gain comes from
- Technical issues with the experiments
- Some issues with the readability | val | [
"6e-JVGVJYc",
"YUX57bFeG8",
"KRNZUe9-JYI",
"zYtWtLMueZT",
"UbZ6ylCBSZo",
"GDc1GFcj8k",
"OFriqHrsX-Lz",
"kdAknoshysl",
"S7jx3tgUQSL",
"hna1LzPld76"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The paper studies a novel recommendation framework which combines two complementary ideas. The infinite-width autoencoder, $\\infty$-AE, models the recommendation data, while DISTILL-CF creates a small set of data \"summaries\" (synthetic examples constructed from the real ones) used further for model training. T... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
5
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2022_lmmKGi7zXn",
"KRNZUe9-JYI",
"zYtWtLMueZT",
"S7jx3tgUQSL",
"S7jx3tgUQSL",
"hna1LzPld76",
"kdAknoshysl",
"nips_2022_lmmKGi7zXn",
"nips_2022_lmmKGi7zXn",
"nips_2022_lmmKGi7zXn"
] |
nips_2022__AsEqoBu3s | Giving Feedback on Interactive Student Programs with Meta-Exploration | Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science. However, teaching and giving feedback on such software is time-consuming — standard approaches require instructors to manually grade student-implemented interactive programs. As a result, online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs, which critically hinders students’ ability to learn. One approach toward automatic grading is to learn an agent that interacts with a student’s program and explores states indicative of errors via reinforcement learning. However, existing work on this approach only provides binary feedback of whether a program is correct or not, while students require finer-grained feedback on the specific errors in their programs to understand their mistakes. In this work, we show that exploring to discover errors can be cast as a meta-exploration problem. This enables us to construct a principled objective for discovering errors and an algorithm for optimizing this objective, which provides fine-grained feedback. We evaluate our approach on a set of over 700K real anonymized student programs from a Code.org interactive assignment. Our approach provides feedback with 94.3% accuracy, improving over existing approaches by 17.7% and coming within 1.5% of human-level accuracy. Project web page: https://ezliu.github.io/dreamgrader. | Accept | This work presents dreamgrader that aims to provide feedback to student-authored interactive programs. Reviewers all agreed that this paper presents a novel and original idea, solid experiments on real-world programs, as well as potential impact on MOOCs. There were some minor concerns and most got resolved during the discussion stage. Thus we recommend acceptance.
| train | [
"Kd-JdVOW9qF",
"Q6I9D7Ddjre",
"LvdRY1FFNLv",
"FSqornNtmTr",
"WP6KbTzGLm_",
"xiOAJwy6vR4",
"_B2s1tOu0J",
"pnpMbw3-4BW",
"-RpdE9QSo3",
"MRRyZf7RXnl",
"lKr_-PUObv",
"VAu4375hLykz",
"A54o4wTjRXjW",
"iXp1J4t0k8",
"QudBvlb4HbA",
"FrCMPds4Xkv",
"q-ahgoby_Ir"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" That sounds very exciting and promising! This error does seem hard to detect. I'd be curious to know about the performance of the extension of Nie et al. in this setting (though, I'd expect it to be abysmal). Kudos to the authors for pushing the results further! ",
" The experiments with breakout are very inter... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"WP6KbTzGLm_",
"LvdRY1FFNLv",
"_B2s1tOu0J",
"xiOAJwy6vR4",
"pnpMbw3-4BW",
"lKr_-PUObv",
"MRRyZf7RXnl",
"A54o4wTjRXjW",
"nips_2022__AsEqoBu3s",
"q-ahgoby_Ir",
"VAu4375hLykz",
"FrCMPds4Xkv",
"iXp1J4t0k8",
"QudBvlb4HbA",
"nips_2022__AsEqoBu3s",
"nips_2022__AsEqoBu3s",
"nips_2022__AsEqoB... |
nips_2022_hGdAzemIK1X | Quantum Speedups of Optimizing Approximately Convex Functions with Applications to Logarithmic Regret Stochastic Convex Bandits | We initiate the study of quantum algorithms for optimizing approximately convex functions. Given a convex set $\mathcal{K}\subseteq\mathbb{R}^{n}$ and a function $F\colon\mathbb{R}^{n}\to\mathbb{R}$ such that there exists a convex function $f\colon\mathcal{K}\to\mathbb{R}$ satisfying $\sup_{x\in\mathcal{K}}|F(x)-f(x)|\leq \epsilon/n$, our quantum algorithm finds an $x^{*}\in\mathcal{K}$ such that $F(x^{*})-\min_{x\in\mathcal{K}} F(x)\leq\epsilon$ using $\tilde{O}(n^{3})$ quantum evaluation queries to $F$. This achieves a polynomial quantum speedup compared to the best-known classical algorithms. As an application, we give a quantum algorithm for zeroth-order stochastic convex bandits with $\tilde{O}(n^{5}\log^{2} T)$ regret, an exponential speedup in $T$ compared to the classical $\Omega(\sqrt{T})$ lower bound. Technically, we achieve quantum speedup in $n$ by exploiting a quantum framework of simulated annealing and adopting a quantum version of the hit-and-run walk. Our speedup in $T$ for zeroth-order stochastic convex bandits is due to a quadratic quantum speedup in multiplicative error of mean estimation. | Accept | Interesting paper - a little bit on the weak side in terms of presentation especially given the community. The consensus is around a weak accept and I concur. | train | [
"j3nmmKqAT2g",
"skBGxku4WQT0",
"zZu7lN8X17",
"1oIWTU-VV5B",
"4u7ojwAIZD5",
"MVE9fa1p27X",
"k_z8r8L99ou",
"sBUscvAYi6i",
"OsFxlNY4WwE"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response.\n\nI still disagree with the term 'regret', and still think 'best-arm' is the appropriate categorization. However, I also think it is counter-productive for the discussion to get stuck on naming/terminology. This disagreement is not reflected in my score.\n\nGiven a query $\\int_x \\sqrt{... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
1
] | [
"4u7ojwAIZD5",
"OsFxlNY4WwE",
"sBUscvAYi6i",
"k_z8r8L99ou",
"MVE9fa1p27X",
"nips_2022_hGdAzemIK1X",
"nips_2022_hGdAzemIK1X",
"nips_2022_hGdAzemIK1X",
"nips_2022_hGdAzemIK1X"
] |
nips_2022_UmDaZksRyk | A Consolidated Cross-Validation Algorithm for Support Vector Machines via Data Reduction | We propose a consolidated cross-validation (CV) algorithm for training and tuning the support vector machines (SVM) on reproducing kernel Hilbert spaces. Our consolidated CV algorithm utilizes a recently proposed exact leave-one-out formula for the SVM and accelerates the SVM computation via a data reduction strategy. In addition, to compute the SVM with the bias term (intercept), which is not handled by the existing data reduction methods, we propose a novel two-stage consolidated CV algorithm. With numerical studies, we demonstrate that our algorithm is about an order of magnitude faster than the two mainstream SVM solvers, kernlab and LIBSVM, with almost the same accuracy. | Accept | The paper proposes a new algorithm for training kernel support vector machines (SVMs) that exploits a leave-one-out lemma to develop a significantly faster algorithm. The key advances are a data reduction approach, a consolidated cross-validation method, and a warm start. Empirical comparisons to magicsvm, kernlab, and LIBSVM demonstrate an order of magnitude speedup. The paper proposes a major advance to kernel SVMs, without resorting to approximations or linearization. The combination of theoretical insight, exploitation of the structure of the method, as well as careful reuse of computation are all excellent contributions.
Four reviewers considered the submission, and three reviewers were very pleased by the novel approach for a classic algorithm. One reviewer found several issues with the submission, and did not further engage during the rebuttal period. A different reviewer actually was positively influenced by the author response to the negative review.
It gives me great pleasure to recommend this paper for acceptance to NeurIPS 2022. Congratulations! | test | [
"YN4nkFNbK7z",
"ZEM3DEdhCQS",
"kGti5SKSgkT",
"LjfmN84B5D",
"VeYSv4FdV69",
"Vx88_QqL6j9",
"W2zs2aB3vJh",
"93zBeDPJTlk",
"iqu_ydHeNKT"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As the author-reviewer discussion period is coming to an end, we would like to thank all the reviewers for their comments and valuable time reviewing our paper, which helped us improve the quality of the paper and discuss the potential extensions. We hope the concerns have been addressed in our response. If there... | [
-1,
-1,
-1,
-1,
-1,
7,
3,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
4
] | [
"nips_2022_UmDaZksRyk",
"Vx88_QqL6j9",
"W2zs2aB3vJh",
"93zBeDPJTlk",
"iqu_ydHeNKT",
"nips_2022_UmDaZksRyk",
"nips_2022_UmDaZksRyk",
"nips_2022_UmDaZksRyk",
"nips_2022_UmDaZksRyk"
] |
nips_2022_2EUJ4e6H4OX | Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing | Self-supervised learning (SSL) for rich speech representations has achieved empirical success in low-resource Automatic Speech Recognition (ASR) and other speech processing tasks, which can mitigate the necessity of a large amount of transcribed speech and thus has driven a growing demand for on-device ASR and other speech processing. However, advanced speech SSL models have become increasingly large, which contradicts the limited on-device resources. This gap could be more severe in multilingual/multitask scenarios requiring simultaneously recognizing multiple languages or executing multiple speech processing tasks. Additionally, strongly overparameterized speech SSL models tend to suffer from overfitting when being finetuned on low-resource speech corpus. This work aims to enhance the practical usage of speech SSL models towards a win-win in both enhanced efficiency and alleviated overfitting via our proposed S$^3$-Router framework, which for the first time discovers that simply discarding no more than 10% of model weights via only finetuning model connections of speech SSL models can achieve better accuracy over standard weight finetuning on downstream speech processing tasks. More importantly, S$^3$-Router can serve as an all-in-one technique to enable (1) a new finetuning scheme, (2) an efficient multilingual/multitask solution, (3) a state-of-the-art pruning technique, and (4) a new tool to quantitatively analyze the learned speech representation. We believe S$^3$-Router has provided a new perspective for practical deployment of speech SSL models. Our codes are available at: https://github.com/GATECH-EIC/S3-Router. | Accept | reviewers like that
* paper is well written and interesting
* experiments are solid
this paper should be accepted. | test | [
"vKbMBj4LjZy",
"iFghAXw-m-i",
"iXyUDeAUHaS",
"56INndBe7CY",
"J1MrPv7RLS",
"eUmkPuVEC2s",
"ltaJnOmQP0D",
"pc3d7ECw76",
"2PsI8dbIUe",
"ddqlplmPdaw",
"fARuzLps-yud",
"oYJ9JgS4ZPs",
"RWUtwS5YM3V",
"aTZngFtGWIu",
"zFQBmLRp5c",
"aKXjY8TOedf",
"aGXjrzkFeHG",
"KqJuUP1VrcI",
"yHT-NhdepOo"... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thank you for recognizing the contributions of our work and for your constructive comments, which can further strengthen the quality of our work. We will incorporate the discussed experiments and analysis to the final version. As we promised, we will open source all the codes upon acceptance.",
" I have read th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"iFghAXw-m-i",
"aKXjY8TOedf",
"56INndBe7CY",
"J1MrPv7RLS",
"eUmkPuVEC2s",
"2PsI8dbIUe",
"pc3d7ECw76",
"zFQBmLRp5c",
"ddqlplmPdaw",
"aTZngFtGWIu",
"oYJ9JgS4ZPs",
"RWUtwS5YM3V",
"aTZngFtGWIu",
"VOuOtHRrxrJ",
"yHT-NhdepOo",
"aGXjrzkFeHG",
"KqJuUP1VrcI",
"nips_2022_2EUJ4e6H4OX",
"nip... |
nips_2022_eUy2ULXQXKs | Understanding the Evolution of Linear Regions in Deep Reinforcement Learning | Policies produced by deep reinforcement learning are typically characterised by their learning curves, but they remain poorly understood in many other respects. ReLU-based policies result in a partitioning of the input space into piecewise linear regions. We seek to understand how observed region counts and their densities evolve during deep reinforcement learning using empirical results that span a range of continuous control tasks and policy network dimensions. Intuitively, we may expect that during training, the region density increases in the areas that are frequently visited by the policy, thereby affording fine-grained control. We use recent theoretical and empirical results for the linear regions induced by neural networks in supervised learning settings for grounding and comparison of our results. Empirically, we find that the region density increases only moderately throughout training, as measured along fixed trajectories coming from the final policy. However, the trajectories themselves also increase in length during training, and thus the region densities decrease as seen from the perspective of the current trajectory. Our findings suggest that the complexity of deep reinforcement learning policies does not principally emerge from a significant growth in the complexity of functions observed on-and-around trajectories of the policy. | Accept | This paper provides an empirical analysis of RL methods via the lens of linear regions induced by NN-based agents. All the reviewers agreed the paper was well written and it was wonderful to see an understanding based paper. The paper is clear about its limitations, very detail oriented & scientific, and borrows & checks the validity of an analysis technique from supervised learning. The authors nicely added an additional environment and agent to the paper at the reviewers request.
This paper is very close. Several reviewers asked in one way or another "where is the evidence that we should care about linear regions" and "how will this inform future research?". The majority of the reviewers found the response and updated paper did not clearly address these questions. Indeed, even the most positive reviewer pointed out that the 2 research questions posed in the introduction were not well motivated. This is concern #1.
Concern #2 regards the generality of the results: what trends do we see across agents and environments? With the added experiments the trends were not clear. This can be OK, especially when testing an idea shown to be important in SL; it can be useful to report such a result. However, in this case we might need a good variety of agents and environments (not necessarily large-scale environments, just different ones).
In the end, for such a paper to have impact we need either: (a) interesting trends to come out of the results and a clearly articulated connection to future algorithmic development, or (b) a clear body of empirical results that say "yep, we tried this and it's not clear it helps". The current paper does a bit of both. | train | [
"H8q5SCo6byh",
"RHpjTQP9xU",
"X9dAUyNHuoU",
"5Gy3S0bZtxG",
"hbepWC2IVRA",
"UAaHc6EgedU",
"WTrsV9Q8cs",
"BjTz73bCULf",
"6OoiNWrwF0R",
"j0Y45OK6p5y",
"qBIMp-2SRdG",
"IMnPkjqBq1U",
"PUTUr5KVlkJ",
"vSQmWl6R6b0",
"mqbwC_AnUnQ",
"pYeIt8m-2BK"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the time you spent reviewing our updates, reading our responses, and giving us valuable feedback. We are glad that you found our overall and individual responses useful.\n\n### The choice of trajectories.\nOne of the key questions we were interested in answering in this work was **whether the densit... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
3
] | [
"hbepWC2IVRA",
"UAaHc6EgedU",
"pYeIt8m-2BK",
"PUTUr5KVlkJ",
"BjTz73bCULf",
"6OoiNWrwF0R",
"pYeIt8m-2BK",
"mqbwC_AnUnQ",
"vSQmWl6R6b0",
"PUTUr5KVlkJ",
"IMnPkjqBq1U",
"nips_2022_eUy2ULXQXKs",
"nips_2022_eUy2ULXQXKs",
"nips_2022_eUy2ULXQXKs",
"nips_2022_eUy2ULXQXKs",
"nips_2022_eUy2ULXQXK... |
nips_2022__Lz540aYDPi | What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment | The fundamental problem of causal inference -- that we never observe counterfactuals -- prevents us from identifying how many might be negatively affected by a proposed intervention. If, in an A/B test, half of users click (or buy, or watch, or renew, etc.), whether exposed to the standard experience A or a new one B, hypothetically it could be because the change affects no one, because the change positively affects half the user population to go from no-click to click while negatively affecting the other half, or something in between. While unknowable, this impact is clearly of material importance to the decision to implement a change or not, whether due to fairness, long-term, systemic, or operational considerations. We therefore derive the tightest-possible (i.e., sharp) bounds on the fraction negatively affected (and other related estimands) given data with only factual observations, whether experimental or observational. Naturally, the more we can stratify individuals by observable covariates, the tighter the sharp bounds. Since these bounds involve unknown functions that must be learned from data, we develop a robust inference algorithm that is efficient almost regardless of how and how fast these functions are learned, remains consistent when some are mislearned, and still gives valid conservative bounds when most are mislearned. Our methodology altogether therefore strongly supports credible conclusions: it avoids spuriously point-identifying this unknowable impact, focusing on the best bounds instead, and it permits exceedingly robust inference on these. We demonstrate our method in simulation studies and in a case study of career counseling for the unemployed. | Accept | This paper motivates and investigates a novel problem in the context of A/B testing -- specifically, it tries to estimate the fraction of negatively affected individuals beyond average treatment effects. The paper is well-written and does a good job of presenting the technical contributions with sufficient rigour as well as discussing their limitations. | val | [
"VVp4s64ECP5",
"_qdFDZoGIsc",
"JJZTPSBcKmf",
"YoDIx3Zr-p",
"qlsxpNm6hI4",
"2h3jAVyI83Y",
"cXZH1CGHYMX",
"rgvuZ_-ZzW",
"oX4rSO4Ceyo",
"tPvlEQ6lmKJ",
"aPk4_9I5h_",
"1wkquLkuLuO",
"pi0lKRhXXDZ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for reading the response. We are very glad you found it helpful! To summarize, we think each of the points you list under Weaknesses and Questions are easily addressable. We hope you agree, given the detailed answers to that effect. Thanks again.",
" Thank you for reading our response. And, thank you ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
3
] | [
"JJZTPSBcKmf",
"YoDIx3Zr-p",
"oX4rSO4Ceyo",
"tPvlEQ6lmKJ",
"cXZH1CGHYMX",
"cXZH1CGHYMX",
"pi0lKRhXXDZ",
"oX4rSO4Ceyo",
"1wkquLkuLuO",
"aPk4_9I5h_",
"nips_2022__Lz540aYDPi",
"nips_2022__Lz540aYDPi",
"nips_2022__Lz540aYDPi"
] |
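
For binary outcomes under randomized treatment, the covariate-free sharp bounds on the fraction negatively affected, $P(Y(1)=0, Y(0)=1)$, are the Frechet-Hoeffding bounds $[\max(0, p_0 - p_1), \min(p_0, 1 - p_1)]$ with $p_a = P(Y(a)=1)$. The small sketch below estimates them from A/B data; the covariate-conditional tightening and debiased inference developed in the paper are omitted, and the data-generating process is made up.

```python
import numpy as np

def harm_bounds(y, t):
    """Sharp Frechet-Hoeffding bounds on the fraction negatively affected,
    P(Y(1)=0, Y(0)=1), from binary outcomes y and randomized treatment t."""
    p1 = y[t == 1].mean()  # P(Y=1 | treated) ~ P(Y(1)=1) under randomization
    p0 = y[t == 0].mean()  # P(Y=1 | control) ~ P(Y(0)=1)
    return max(0.0, p0 - p1), min(p0, 1.0 - p1)

rng = np.random.default_rng(0)
t = rng.integers(0, 2, 10000)
# Hypothetical world: 20% of units flip outcome under treatment, half of them harmed.
y0 = rng.random(10000) < 0.5
y1 = np.where(rng.random(10000) < 0.2, ~y0, y0)
y = np.where(t == 1, y1, y0).astype(float)
print(harm_bounds(y, t))  # true fraction harmed is 0.2 * 0.5 = 0.1, inside the bounds
```
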
nips_2022_0Ww7UVEoNue | Active Learning Helps Pretrained Models Learn the Intended Task | Models can fail in unpredictable ways during deployment due to task ambiguity, when multiple behaviors are consistent with the provided training data. An example is an object classifier trained on red squares and blue circles: when encountering blue squares, the intended behavior is undefined. We investigate whether pretrained models are better active learners, capable of disambiguating between the possible tasks a user may be trying to specify. Intriguingly, we find that better active learning is an emergent property of the pretraining process: pretrained models require up to 5 times fewer labels when using uncertainty-based active learning, while non-pretrained models see no or even negative benefit. We find these gains come from an ability to select examples with attributes that disambiguate the intended behavior, such as rare product categories or atypical backgrounds. These attributes are far more linearly separable in pretrained models' representation spaces than in those of non-pretrained models, suggesting a possible mechanism for this behavior. | Accept | The paper makes a simple but compelling claim, that using large pretrained models within an active learning loop improves the ability to collect labeled data efficiently for improved model training with limited labeling resources. All reviewers agreed that the work is technically correct and that the empirical verification was thorough.
The main point of contention among reviewers was whether this idea is novel or surprising enough to warrant publication at NeurIPS. This is a tricky issue, and I spent considerable time digging into the paper itself after reading through all reviews and author responses. In the end, I believe that it is important for the field to avoid creating a world in which only sensational-sounding ideas are published. Yes, it is relatively intuitive that using a learned representation such as a large pretrained model will help put active learning in a lower dimensional space that will be easier to search through, making AL more efficient. But the field of ML is filled with ideas that sound good in the abstract but fail in practice. I believe that there is clear value to the field in having rigorous evaluation of obviously good ideas, both as a reality check and also to provide clear documentation of the current state of the art so that the field can build from there.
| train | [
"aZUC6OGbsNH",
"_EPjZp6efBi",
"YiJxrKzVmjG",
"3fGJ2kbCy44",
"jNHVBB4RCrLr",
"HAHD2kjhwF1",
"0RlDp3ze2Bo",
"yM9Iff18jnx",
"4InJF-nMrer",
"hTvvsraWjOm"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks so much for reading and engaging with our response! We would definitely incorporate these clarifications into the camera ready version.",
" This is a well-written and helpful response. I understand that the page limit restricts your ability to incorporate some of the clarifications or additional discussi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
3
] | [
"_EPjZp6efBi",
"YiJxrKzVmjG",
"3fGJ2kbCy44",
"jNHVBB4RCrLr",
"hTvvsraWjOm",
"4InJF-nMrer",
"yM9Iff18jnx",
"nips_2022_0Ww7UVEoNue",
"nips_2022_0Ww7UVEoNue",
"nips_2022_0Ww7UVEoNue"
] |
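
A minimal sketch of the uncertainty-based acquisition loop studied in the record above: fit a cheap classifier on frozen features (standing in for a pretrained encoder) and label the highest-entropy pool examples each round. The data, model, and batch size are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def active_round(feats, labels, labeled_idx, pool_idx, batch=8):
    """One round of max-entropy acquisition on frozen (e.g., pretrained) features."""
    clf = LogisticRegression(max_iter=1000).fit(feats[labeled_idx], labels[labeled_idx])
    probs = clf.predict_proba(feats[pool_idx])
    pick = pool_idx[np.argsort(-entropy(probs))[:batch]]  # most ambiguous examples
    return np.concatenate([labeled_idx, pick]), np.setdiff1d(pool_idx, pick)

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 32))  # stand-in for pretrained embeddings
labels = (feats[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)
labeled, pool = np.arange(10), np.arange(10, 500)
for _ in range(5):
    labeled, pool = active_round(feats, labels, labeled, pool)
print(len(labeled), "labels used")
```
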
nips_2022_fY6OzqOiTnu | Improving Certified Robustness via Statistical Learning with Logical Reasoning | Intensive algorithmic efforts have been made recently to enable rapid improvements in certified robustness for complex ML models. However, current robustness certification methods are only able to certify under a limited perturbation radius. Given that existing pure data-driven statistical approaches have reached a bottleneck, in this paper, we propose to integrate statistical ML models with knowledge (expressed as logical rules) as a reasoning component using Markov logic networks (MLN), so as to further improve the overall certified robustness. This opens new research questions about certifying the robustness of such a paradigm, especially the reasoning component (e.g., MLN). As the first step towards understanding these questions, we first prove that the computational complexity of certifying the robustness of MLN is #P-hard. Guided by this hardness result, we then derive the first certified robustness bound for MLN by carefully analyzing different model regimes. Finally, we conduct extensive experiments on five datasets including both high-dimensional images and natural language texts, and we show that the certified robustness with knowledge-based logical reasoning indeed significantly outperforms the state of the art. | Accept | I agree with zVbd that the paper has too many points. A paper should try to make a single point, and then one can write more papers :-). The #P-hardness result seems the least interesting of the points. I am also not very interested in the certification of robustness. The real question is whether Carlini can defeat it. But this paper raises an issue that I have always suspected is important --- are structured labels inherently more robust? I would certainly want to see what Carlini can do in generating adversarial examples for highly structured labels. As I believe that robustness for structured labels deserves more attention, and this paper purports to have positive empirical results in that direction with reasonable review scores, I will recommend acceptance.
| train | [
"_kkmXbM6DC2",
"WCNXyUde7no",
"9l649jsB0S7",
"2mGBFz3RCp",
"jCoB6wwXWMx",
"mUwhnHLotq1",
"WlFvjBOm-5B",
"f0Tz-7tt0d",
"8Ner5TVPBwx",
"7sjk9ajdwVp",
"8woAHK18o50",
"dB4i79wRst6",
"FYueE7Qvfkxr",
"R9TV3VIEYz",
"JfgKZ2ZFWeV",
"PJuISe3xL53",
"0vv3-E3x4fQ",
"40nFD_uEo6X",
"lv-0cku2K4j... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thanks for the follow-up questions and suggestions!\n\n> on MNIST/CIFAR10, the task you are testing on is a simple classification, and that's what the baseline \"Consistency\" number is for, right?\n\nOn MNIST/CIFAR10, we have followed the standard settings for baselines. In particular, we used LeNet for MNIST a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
1
] | [
"WCNXyUde7no",
"JfgKZ2ZFWeV",
"2mGBFz3RCp",
"mUwhnHLotq1",
"WlFvjBOm-5B",
"8Ner5TVPBwx",
"f0Tz-7tt0d",
"FYueE7Qvfkxr",
"dB4i79wRst6",
"nips_2022_fY6OzqOiTnu",
"lv-0cku2K4j",
"40nFD_uEo6X",
"0vv3-E3x4fQ",
"PJuISe3xL53",
"PJuISe3xL53",
"nips_2022_fY6OzqOiTnu",
"nips_2022_fY6OzqOiTnu",
... |
nips_2022_ZPUkqTf6a-P | Truly Deterministic Policy Optimization | In this paper, we present a policy gradient method that avoids exploratory noise injection and performs policy search over the deterministic landscape, with the goal of improving learning with long horizons and non-local rewards. By avoiding noise injection all sources of estimation variance can be eliminated in systems with deterministic dynamics (up to the initial state distribution). Since deterministic policy regularization is impossible using traditional non-metric measures such as the KL divergence, we derive a Wasserstein-based quadratic model for our purposes. We state conditions on the system model under which it is possible to establish a monotonic policy improvement guarantee, propose a surrogate function for policy gradient estimation, and show that it is possible to compute exact advantage estimates if both the state transition model and the policy are deterministic. Finally, we describe two novel robotic control environments---one with non-local rewards in the frequency domain and the other with a long horizon (8000 time-steps)---for which our policy gradient method (TDPO) significantly outperforms existing methods (PPO, TRPO, DDPG, and TD3). Our implementation with all the experimental settings and a video of the physical hardware test is available at https://github.com/ehsansaleh/tdpo . | Accept | The paper addresses the problem of long-horizon policy learning with non-local rewards via PG that avoids noise injection. The reviewers are in consensus, and I concur, that this paper would be a welcome contribution to NeurIPS because it addresses an important problem, makes a solid contribution, and shows impressive results. The authors' rebuttal sufficiently addressed putting more emphasis on the long-horizon policy learning over the deterministic setting and discussion of the limitations.
If possible, it would be a nice addition to include a video of the robot experiments in the final revision. | train | [
"eKsNHVZj-E",
"jnlR8I5ne3J",
"6KF0FIMdFl1",
"hunvBvEmccR",
"MjxpE-wNPDq",
"VwcQBdsCEE1",
"lmD9UmDYuGR",
"xlWTf23m8Og",
"ldEy8LbiP0X",
"B40eiIaStoK"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I maintain that this is a strong paper and so will leave my score unchanged.",
" Thanks for the very detailed response - I really appreciate the clarifications, particularly concerning exploration.\nBecause of the identified limitations, I decided to stick to the current score.",
... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"6KF0FIMdFl1",
"MjxpE-wNPDq",
"B40eiIaStoK",
"ldEy8LbiP0X",
"ldEy8LbiP0X",
"xlWTf23m8Og",
"xlWTf23m8Og",
"nips_2022_ZPUkqTf6a-P",
"nips_2022_ZPUkqTf6a-P",
"nips_2022_ZPUkqTf6a-P"
] |
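
TDPO's central move, per the record above, is to avoid exploratory noise injection and search the deterministic landscape directly. A toy sketch of that idea: with known deterministic point-mass dynamics, the return is an exact, noise-free function of the policy parameters and can be optimized by backpropagation through time. The Wasserstein trust region and advantage machinery of the paper are omitted; the dynamics and reward are made up.

```python
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def rollout_return(horizon=50):
    """Deterministic point mass: state (pos, vel); reward -pos^2 - 0.01 * a^2."""
    pos, vel, ret = torch.tensor(1.0), torch.tensor(0.0), torch.tensor(0.0)
    for _ in range(horizon):
        a = policy(torch.stack([pos, vel]))[0]
        vel = vel + 0.1 * a          # known deterministic dynamics
        pos = pos + 0.1 * vel
        ret = ret - pos ** 2 - 0.01 * a ** 2
    return ret

for step in range(300):
    opt.zero_grad()
    (-rollout_return()).backward()   # exact return gradient: no noise injected anywhere
    opt.step()
print(float(rollout_return()))
```
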
nips_2022_joZ4CuOyKY8 | DiSC: Differential Spectral Clustering of Features | Selecting subsets of features that differentiate between two conditions is a key task in a broad range of scientific domains. In many applications, the features of interest form clusters with similar effects on the data at hand. To recover such clusters we develop DiSC, a data-driven approach for detecting groups of features that differentiate between conditions. For each condition, we construct a graph whose nodes correspond to the features and whose weights are functions of the similarity between them for that condition. We then apply a spectral approach to compute subsets of nodes whose connectivity pattern differs significantly between the condition-specific feature graphs. On the theoretical front, we analyze our approach with a toy example based on the stochastic block model. We evaluate DiSC on a variety of datasets, including MNIST, hyperspectral imaging, simulated scRNA-seq and task fMRI, and demonstrate that DiSC uncovers features that better differentiate between conditions compared to competing methods. | Accept | In the discussion, the area chair and the reviewers reached a consensus that the potential for practical relevance of the proposed method is very interesting and agreed to recommend the acceptance of the paper. The reviews provide valuable feedback to the authors to improve the final version of their paper, which we look forward to seeing at the conference. | train | [
"ZA7pv5btBx",
"1QpOm__xmlR",
"RLeJwpg71x",
"fdx9Rd6F0p7",
"AWwr3n-gSYw",
"dghWWdcQ2Tg",
"yPrTYqEK_g7",
"8k40gBMoIUKn",
"XGiAZ-5BWJ0",
"qRJzOlpI9mN",
"_qt3JZanrLy",
"nnRCpwWwyhj",
"dbA2sIU3xW",
"uIsU8WPM8U",
"iXbQ0RbbVBG",
"GqOPJtIfak",
"V2eRCf5PxoF"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the further explanations. We agree that our previous comments underestimate the contribution of your algorithm. We have raised the score. ",
" This is a bit of surprising feedback, as to the best of our knowledge, our approach is entirely new. We are confident that it doesn't appear in any of the pap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"1QpOm__xmlR",
"fdx9Rd6F0p7",
"yPrTYqEK_g7",
"_qt3JZanrLy",
"nnRCpwWwyhj",
"XGiAZ-5BWJ0",
"qRJzOlpI9mN",
"nips_2022_joZ4CuOyKY8",
"qRJzOlpI9mN",
"V2eRCf5PxoF",
"GqOPJtIfak",
"iXbQ0RbbVBG",
"uIsU8WPM8U",
"nips_2022_joZ4CuOyKY8",
"nips_2022_joZ4CuOyKY8",
"nips_2022_joZ4CuOyKY8",
"nips_... |
nips_2022_jBTQGGy9qA- | DASCO: Dual-Generator Adversarial Support Constrained Offline Reinforcement Learning | In offline RL, constraining the learned policy to remain close to the data is essential to prevent the policy from outputting out-of-distribution (OOD) actions with erroneously overestimated values. In principle, generative adversarial networks (GAN) can provide an elegant solution to do so, with the discriminator directly providing a probability that quantifies distributional shift. However, in practice, GAN-based offline RL methods have not outperformed alternative approaches, perhaps because the generator is trained to both fool the discriminator and maximize return - two objectives that are often at odds with each other. In this paper, we show that the issue of conflicting objectives can be resolved by training two generators: one that maximizes return, with the other capturing the "remainder" of the data distribution in the offline dataset, such that the mixture of the two is close to the behavior policy. We show that not only does having two generators enable an effective GAN-based offline RL method, but also approximates a support constraint, where the policy does not need to match the entire data distribution, but only the slice of the data that leads to high long term performance. We name our method DASCO, for Dual-Generator Adversarial Support Constrained Offline RL. On benchmark tasks that require learning from sub-optimal data, DASCO significantly outperforms prior methods that enforce distribution constraint.
 | Accept | This paper tackles the important problem of learning policies that remain close to the data in offline RL. More specifically, the authors consider GAN-based methods. They provide a theoretical insight as to why prior work in modelling with GANs fails. Based on that, they are able to propose simple yet theoretically convincing modifications to GANs to train them properly. They conduct a thorough ablation study with evidence of the gain in performance brought by their proposed approach. Most questions were answered and clarifications were provided as part of the rebuttal. The artificial setting for the noisy AntMaze seems to be the biggest concern that remains. Overall, the paper was appreciated by reviewers. | train | [
"QwOnGLIoRpK",
"2PKfc4hKO8R",
"fdRt_xv1hxC",
"wUWGIZssjcw",
"EDTq3JOfyLQ",
"X5h3wt-PKPD",
"0rhAEWtQiV",
"nERClYls0LjJ",
"zFOKdUbFlLA",
"apvZzzzTa8pN",
"ilA_NCFxJpx",
"E8255z3sF90",
"x7WFNVtAiuu",
"bU5b4ULnQCC",
"Fh58winHJ_G",
"rW7DmqeC2TS",
"5gDcpGsb9Za",
"Q6Zx7iG4p4q",
"OA1gCpM5... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" Dear Reviewer, \n\nSince there are only 24 hours left in the discussion period, we would love to hear if you have any remaining questions or concerns for us!\n\nThank you very much!",
" We would love to hear if you have more questions or comments for the paper! ",
" We would love to hear if you have any remai... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
4,
4,
4
] | [
"apvZzzzTa8pN",
"E8255z3sF90",
"wUWGIZssjcw",
"rW7DmqeC2TS",
"X5h3wt-PKPD",
"Q6Zx7iG4p4q",
"KlSRhbj47KL",
"X6hs0wcKEGd",
"vVVTzlmxhA9",
"ilA_NCFxJpx",
"Fh58winHJ_G",
"5gDcpGsb9Za",
"bU5b4ULnQCC",
"OA1gCpM5W3Rz",
"2ZJoT2rwS-K",
"K91kYC3EyCeM",
"UcBH8ge_LO",
"Aem_Fbu1xJS",
"Swizv-K... |
nips_2022_RYTGIZxY5rJ | Bag of Tricks for FGSM Adversarial Training | Adversarial training (AT) with samples generated by Fast Gradient Sign Method (FGSM), also known as FGSM-AT, is a computationally simple method to train robust networks. However, during its training procedure, an unstable mode of ``catastrophic overfitting'' has been identified in~\cite{Wong2020FastIB}, where the robust accuracy abruptly drops to zero within a single training step. Existing methods use gradient regularizers or random initialization tricks to attenuate this issue, but they either incur a high computational cost or lead to lower robust accuracy. In this work, we provide the first study which thoroughly examines a collection of tricks from three perspectives: Data Initialization, Network Structure, and Optimization, to overcome the catastrophic overfitting in FGSM-AT. Surprisingly, we find that simple tricks, i.e., masking partial pixels (even without randomness), setting a large convolution stride and smooth activation functions, or regularizing the weights of the first convolutional layer, can effectively tackle the overfitting issue. Extensive results on a range of network architectures validate the effectiveness of each proposed trick, and the combinations of tricks are also investigated. For example, trained with PreActResNet-18 on CIFAR-10, our method attains 51.3\% accuracy against a PGD-10 attacker and 46.4\% accuracy against AutoAttack, demonstrating that pure FGSM-AT is capable of enabling robust learners. We will release our code to encourage future exploration on unleashing the potential of FGSM-AT. | Reject | This paper proposes to solve FGSM catastrophic overfitting by combining different algorithmic methods (i.e., masking pattern to the train data, smooth activations, ViTs, constraints on the first layer convolutional weights). The reviewers have considered the problem studied very relevant but were not convinced by the empirical evaluation, finding that the paper is missing an exhaustive evaluation (and for epsilon larger than 8). In addition, they would have appreciated some understanding of the different tricks considered. We encourage the authors to revise their paper, taking into consideration the reviewers' feedback, and to submit the revised work to a forthcoming conference. | val | [
"HCvdiKN2ctY",
"ZmKORmjuCJ",
"S7OhSU2FtB",
"F81GSmrsGOW",
"ZBBYvkowFMc",
"E8sYx550p0Y",
"Qwon4-R0Eya",
"sgPfjrYr4pg",
"1yz9UswtOZ",
"g98LozCA5H",
"9KTIi1amqZQ",
"QPpGok21ns9",
"W9ywEFjb3y1",
"40TVLQkRoc5",
"NcV543cL0sH",
"t33PO9bNX_7"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the author's response and the clarifications. However, I will keep my current score as I believe that the paper can be significantly improved by including the suggested tricks and lessons learned in the future version (I would also recommend the authors to read https://arxiv.org/abs/2010.03593), impr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
5
] | [
"ZmKORmjuCJ",
"t33PO9bNX_7",
"F81GSmrsGOW",
"Qwon4-R0Eya",
"sgPfjrYr4pg",
"1yz9UswtOZ",
"40TVLQkRoc5",
"g98LozCA5H",
"t33PO9bNX_7",
"NcV543cL0sH",
"40TVLQkRoc5",
"W9ywEFjb3y1",
"nips_2022_RYTGIZxY5rJ",
"nips_2022_RYTGIZxY5rJ",
"nips_2022_RYTGIZxY5rJ",
"nips_2022_RYTGIZxY5rJ"
] |
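
A sketch of one FGSM-AT step combining two tricks named in the abstract above: random masking of input pixels and a large first-convolution stride. The masking ratio, epsilon, and model are placeholder choices, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_at_step(model, opt, x, y, eps=8 / 255, mask_frac=0.4):
    """One FGSM adversarial-training step with random masking of input pixels."""
    mask = (torch.rand_like(x[:, :1]) > mask_frac).float()  # one mask shared across channels
    x = x * mask
    x_adv = x.clone().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()    # single-step FGSM perturbation
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()

# Stride-2 first convolution, echoing the "large convolution stride" trick.
model = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 15 * 15, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
fgsm_at_step(model, opt, torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```
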
nips_2022_Kx1VCs1treH | FourierNets enable the design of highly non-local optical encoders for computational imaging | Differentiable simulations of optical systems can be combined with deep learning-based reconstruction networks to enable high performance computational imaging via end-to-end (E2E) optimization of both the optical encoder and the deep learning decoder. This has enabled imaging applications such as 3D localization microscopy, depth estimation, and lensless photography via the optimization of local optical encoders. More challenging computational imaging applications, such as 3D snapshot microscopy which compresses 3D volumes into single 2D images, require a highly non-local optical encoder. We show that existing deep network decoders have a locality bias which prevents the optimization of such highly non-local optical encoders. We address this with a decoder based on a shallow neural network architecture using global kernel Fourier convolutional neural networks (FourierNets). We show that FourierNets surpass existing deep network based decoders at reconstructing photographs captured by the highly non-local DiffuserCam optical encoder. Further, we show that FourierNets enable E2E optimization of highly non-local optical encoders for 3D snapshot microscopy. By combining FourierNets with a large-scale multi-GPU differentiable optical simulation, we are able to optimize non-local optical encoders 170$\times$ to 7372$\times$ larger than prior state of the art, and demonstrate the potential for ROI-type specific optical encoding with a programmable microscope. | Accept | This paper develops a multi-GPU differentiable simulation of a programmable microscope, wherein a large convolutional kernel used in the first layer is implemented in the Fourier domain. The work outperforms sota approaches for lensless photography, and it allows the end-to-end design of a snapshot 3D microscope which beats state of the art systems in simulation. Overall the work provides a compelling demonstration of the power of recent technical advances in ML to lead to demonstrable improvements (in simulation) of computational imaging methods. | val | [
"DkFIEt98X1J",
"l84hW7lRbxd",
"v1c4lpIyjzO",
"kVypS9ummX",
"EnZnWGpPlb",
"kz5yjZ48dYf",
"GucUGtveUNq",
"TaCuEyY263X",
"lKI-UATzYxn",
"MsnHPN-jswy"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As you might have already noticed, the paper has already been updated with these new results. We hope you will consider updating your score.",
" Thanks for the response. Adding results that use an approximate inverse to start will better contextualize the results and improve the submission.",
" We thank the r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3
] | [
"l84hW7lRbxd",
"GucUGtveUNq",
"nips_2022_Kx1VCs1treH",
"EnZnWGpPlb",
"MsnHPN-jswy",
"lKI-UATzYxn",
"TaCuEyY263X",
"nips_2022_Kx1VCs1treH",
"nips_2022_Kx1VCs1treH",
"nips_2022_Kx1VCs1treH"
] |
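
The FourierNet decoder above relies on convolution kernels as large as the input, computed in the Fourier domain. Below is a minimal sketch of such a global (circular) convolution layer: multiplying spectra with `torch.fft.rfft2` is equivalent to circular convolution with a kernel of global support. Per-channel spectra and the initialization scale are illustrative choices.

```python
import torch
import torch.nn as nn

class FourierConv2d(nn.Module):
    """Global (image-sized) circular convolution, parameterized in the Fourier domain."""
    def __init__(self, channels, height, width):
        super().__init__()
        # One complex spectrum per channel = a kernel with global spatial support.
        self.spec = nn.Parameter(torch.randn(channels, height, width // 2 + 1, 2) * 0.02)

    def forward(self, x):                        # x: (B, C, H, W)
        Xf = torch.fft.rfft2(x)                  # (B, C, H, W//2 + 1), complex
        Kf = torch.view_as_complex(self.spec)
        return torch.fft.irfft2(Xf * Kf, s=x.shape[-2:])

layer = FourierConv2d(3, 64, 64)
out = layer(torch.rand(2, 3, 64, 64))
print(out.shape)  # torch.Size([2, 3, 64, 64])
```
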
nips_2022_gE1zBYKaEWW | Provable Subspace Identification Under Post-Nonlinear Mixtures | Unsupervised mixture learning (UML) aims at identifying linearly or nonlinearly mixed latent components in a blind manner. UML is known to be challenging: Even learning linear mixtures requires highly nontrivial analytical tools, e.g., independent component analysis or nonnegative matrix factorization. In this work, the post-nonlinear (PNL) mixture model---where {\it unknown} element-wise nonlinear functions are imposed onto a linear mixture---is revisited. The PNL model is widely employed in different fields ranging from brain signal classification, speech separation, remote sensing, to causal discovery. To identify and remove the unknown nonlinear functions, existing works often assume different properties on the latent components (e.g., statistical independence or probability-simplex structures). This work shows that under a carefully designed UML criterion, the existence of a nontrivial {\it null space} associated with the underlying mixing system suffices to guarantee identification/removal of the unknown nonlinearity. Compared to prior works, our finding largely relaxes the conditions of attaining PNL identifiability, and thus may benefit applications where no strong structural information on the latent components is known. A finite-sample analysis is offered to characterize the performance of the proposed approach under realistic settings. To implement the proposed learning criterion, a block coordinate descent algorithm is proposed. A series of numerical experiments corroborate our theoretical claims. | Accept | The post-nonlinear (PNL) mixture model has been studied for a while in the BSS community. This paper presents a method to identify the nonlinearity under mild conditions, so that after removing the nonlinear transformations, a method for source separation from a linear mixture can be employed. All of the reviewers agree that the paper is well written and makes a solid contribution to BSS. Two important contributions are: (1) it is proved that the existence of a non-trivial null space associated with the underlying mixing system suffices to guarantee identification/removal of the unknown nonlinearity; (2) a sample complexity analysis is provided to characterize the performance of the proposed approach. A downside is the weak experiments, which were improved during the author rebuttal period. Therefore, I am pleased to recommend that the paper be accepted.
| train | [
"Gqp0ZMaESOs",
"m5vdvD03-Hs",
"oegIv90lju",
"pceJTF1lJYp",
"Ze2RW2Zd1hO",
"lsV0Mi1DSq",
"0Grl8A2tfUb",
"fP1HfVcBYSD",
"oUN2ih_UTI",
"G3tWHtlSyN_",
"lAMBeKv2Khl"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I want to thank the authors for their response to my review and the clarifications they provided. I think that the requirement of Q to be dense should be emphasized and better explained in the manuscript and I encourage the authors to better explain that part in the revised version of the paper. All in all, I bel... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"lsV0Mi1DSq",
"Ze2RW2Zd1hO",
"pceJTF1lJYp",
"lAMBeKv2Khl",
"G3tWHtlSyN_",
"oUN2ih_UTI",
"fP1HfVcBYSD",
"nips_2022_gE1zBYKaEWW",
"nips_2022_gE1zBYKaEWW",
"nips_2022_gE1zBYKaEWW",
"nips_2022_gE1zBYKaEWW"
] |
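
The PNL model referenced above observes $x = g(As)$ for an unknown linear mixture $A$ and unknown elementwise invertible functions $g_i$. A tiny generator for such data follows (the $g_i$ chosen here are arbitrary invertible functions); recovering the $g_i$ and the latent subspace from $X$ alone is the identification problem the paper solves.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 2000, 3, 5                 # samples, latent components, observed channels

S = rng.uniform(-1, 1, size=(n, k))  # latent components
A = rng.normal(size=(m, k))          # unknown mixing; m > k leaves a nontrivial null space
lin = S @ A.T                        # linear mixtures, one row per sample

# Unknown elementwise, invertible post-nonlinearities g_i, one per observed channel.
g = [np.tanh, np.cbrt, lambda v: v + 0.3 * v**3, np.arcsinh, np.tanh]
X = np.stack([g[i](lin[:, i]) for i in range(m)], axis=1)
print(X.shape)  # (2000, 5): only X is observed; A, S and every g_i must be identified
```
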
nips_2022_atb3yifRtX | You Can’t Count on Luck: Why Decision Transformers and RvS Fail in Stochastic Environments | Recently, methods such as Decision Transformer that reduce reinforcement learning to a prediction task and solve it via supervised learning (RvS) have become popular due to their simplicity, robustness to hyperparameters, and strong overall performance on offline RL tasks. However, simply conditioning a probabilistic model on a desired return and taking the predicted action can fail dramatically in stochastic environments, since trajectories that result in a return may have only achieved that return due to luck. In this work, we describe the limitations of RvS approaches in stochastic environments and propose a solution. Rather than simply conditioning on returns, as is standard practice, our proposed method, ESPER, conditions on learned average returns which are independent of environment stochasticity. Doing so allows ESPER to achieve strong alignment between target return and expected performance in real environments. We demonstrate this in several challenging stochastic offline-RL tasks, including the puzzle game 2048 and Connect Four playing against a stochastic opponent. In all tested domains, ESPER achieves significantly better alignment between the target return and achieved return than simply conditioning on returns. ESPER also achieves higher maximum performance than even the value-based baselines.
| Accept | The authors explore a fundamental limitation of Decision Transformers and related RL via supervised learning approaches when applied to stochastic environments. They propose a new and simple approach that clusters experiences to disentangle action quality from environment stochasticity. Their ESPER approach achieves large improvements on a number of simple, stochastic environments including Gambling, Connect Four and 2048.
The reviewers were all satisfied by the novelty, technical soundness and relevance of this work for the NeurIPS community, but were initially of mixed opinions about the selection of challenge domains. Having discussed this point at length with the reviewers, I am satisfied with the authors' choice of environments for two reasons: (1) they allow the specific shortcomings of previous methods to be isolated and addressed directly, and (2) they are consistent with the environments used by previous related work published in top-tier conferences. I am recommending this paper for acceptance accordingly. | train | [
"0T7MANGT9g",
"bPsu6pBms3",
"KuVJ9dqv1Gi",
"eXwEZC9YNn2",
"2FFk74MZiIR",
"5KqpOdmE5Y",
"aepaiVart3k",
"mat2ja031xk",
"cGciBLlZ7-W",
"2dMOknpB18J",
"CJbe4DHBLu",
"NBf_WoN6_IC",
"BuQvmCtehiMh",
"fVqHisjhOsP",
"O5hHIRdBQPm",
"O61D-B7YgFL",
"xCjSGXYKLLP",
"Kh7-lIc0DeH",
"V7M77NsRUcd"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" I see. That was my misreading. I don't have further questions and concerns. Also thanks for updating the manuscript. I will update my rating.",
" I agree with the authors that Crafter is not an appropriate benchmark to request here. It is not a widely adopted benchmark for this area, does not isolate the fundam... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"cGciBLlZ7-W",
"aepaiVart3k",
"eXwEZC9YNn2",
"CJbe4DHBLu",
"2dMOknpB18J",
"aepaiVart3k",
"mat2ja031xk",
"xCjSGXYKLLP",
"2dMOknpB18J",
"fVqHisjhOsP",
"BuQvmCtehiMh",
"BuQvmCtehiMh",
"Kh7-lIc0DeH",
"O5hHIRdBQPm",
"V7M77NsRUcd",
"xCjSGXYKLLP",
"GnLbbwxLR37",
"nips_2022_atb3yifRtX",
... |
nips_2022_4X0q4uJ1fR | Sample Constrained Treatment Effect Estimation | Treatment effect estimation is a fundamental problem in causal inference. We focus on designing efficient randomized controlled trials, to accurately estimate the effect of some treatment on a population of $n$ individuals. In particular, we study \textit{sample-constrained treatment effect estimation}, where we must select a subset of $s \ll n$ individuals from the population to experiment on. This subset must be further partitioned into treatment and control groups. Algorithms for partitioning the entire population into treatment and control groups, or for choosing a single representative subset, have been well-studied. The key challenge in our setting is jointly choosing a representative subset and a partition for that set.
We focus on both individual and average treatment effect estimation, under a linear effects model. We give provably efficient experimental designs and corresponding estimators, by identifying connections to discrepancy minimization and leverage-score-based sampling used in randomized numerical linear algebra. Our theoretical results obtain a smooth transition to known guarantees when $s$ equals the population size. We also empirically demonstrate the performance of our algorithms.
 | Accept | The reviewers came to a consensus that this paper makes good progress on the study of treatment effect estimation. I agree with these opinions; please polish the manuscript so that the concerns raised by the reviewers are addressed clearly in the final version. In particular, I'm also interested in the concern on the constant $\delta$, and I expect this point to be addressed so that the dependence of the results on $\delta$ becomes clear. | train | [
"Vr25nM9Vhrv",
"K3H1-YZfsRm",
"AZbBdRR_dD0",
"hyVAqPYQmXs",
"Vh4uYHOW7GB",
"SUTTIlQZqP",
"5hVkKZBhWP9"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their rebuttal.\nAs they have addressed all my concerns, I have increased my score.",
" 1. *\" In assumption 2.1, ... \"*\n\n The row-normalization is to simplify the analysis and the final theorem statements. All of our results will hold without row normalization and ou... | [
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"K3H1-YZfsRm",
"5hVkKZBhWP9",
"SUTTIlQZqP",
"Vh4uYHOW7GB",
"nips_2022_4X0q4uJ1fR",
"nips_2022_4X0q4uJ1fR",
"nips_2022_4X0q4uJ1fR"
] |
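
The record above connects subset selection to leverage-score sampling from randomized numerical linear algebra. A minimal sketch: compute row leverage scores of the covariate matrix via a thin SVD and sample $s$ individuals proportionally. The subsequent treatment/control split (e.g., by discrepancy minimization) is omitted, and the data are synthetic.

```python
import numpy as np

def leverage_scores(X):
    """Row leverage scores of X: squared row norms of U from the thin SVD."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return (U ** 2).sum(axis=1)

def sample_by_leverage(X, s, rng):
    p = leverage_scores(X)
    return rng.choice(len(X), size=s, replace=False, p=p / p.sum())

rng = np.random.default_rng(0)
n, d, s = 1000, 5, 50
X = rng.normal(size=(n, d))
X[:10] *= 10                      # a few high-leverage individuals
idx = sample_by_leverage(X, s, rng)
print("high-leverage rows picked:", np.intersect1d(idx, np.arange(10)))
```
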
nips_2022_2ZfUNW7SoaS | Online Decision Mediation | Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior: At each time, the algorithm observes an action chosen by a fallible agent, and decides whether to *accept* that agent's decision, *intervene* with an alternative, or *request* the expert's opinion. For instance, in clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances, thus real-world decision support is often limited to monitoring and forecasting. Instead, such an intermediary would strike a prudent balance between the former (purely prescriptive) and latter (purely descriptive) approaches, while providing an efficient interface between human mistakes and expert feedback. In this work, we first formalize the sequential problem of *online decision mediation*---that is, of simultaneously learning and evaluating mediator policies from scratch with *abstentive feedback*: In each round, deferring to the oracle obviates the risk of error, but incurs an upfront penalty, and reveals the otherwise hidden expert action as a new training data point. Second, we motivate and propose a solution that seeks to trade off (immediate) loss terms against (future) improvements in generalization error; in doing so, we identify why conventional bandit algorithms may fail. Finally, through experiments and sensitivities on a variety of datasets, we illustrate consistent gains over applicable benchmarks on performance measures with respect to the mediator policy, the learned model, and the decision-making system as a whole. | Accept | Thank you for submitting your paper to NeurIPS! This paper studies sequential decision-making for mediation -- given actions chosen by an imperfect human, it decides whether to accept the decision, intervene with an alternative, or request a costly expert opinion. The authors identify an exploration-exploitation tradeoff for this problem (costly to obtain labels, but improves future accuracy) and build on the bandit literature to propose the new UMPIRE policy. The model and algorithmic approach seem sound for optimizing overall system accuracy, and the reviewers especially appreciated the extensive experiments on real datasets. I am pleased to recommend acceptance. However, please be sure to add more discussion of the limitations into the main paper (as promised in the response); even if the entire discussion doesn't fit in the main text, please add pointers in the main paper to Appendix E.
There are also some technical choices that were brought up by the senior program committee that would be useful to discuss since they may have usability/impact implications. First, what are the implications of the choice of a scalarized objective that trades off the error metric with the expert cost (Eq. 3), say compared with something that's based on constraints? Does the scalarized objective do a good job approximating how we expect decisions to be triaged in practice? In what applications? Second, the regime $k_{req} = m/(m-1) - \gamma$ is mentioned as interesting (in Remark 1) when combined with a 0/1 loss. Could you please expand on applications where the 0/1 loss is appropriate and the interpretation of the “interesting regime” in that case? | train | [
"agAz5obR81B",
"SdKnjH4X-zq",
"k6KYcLOugLO",
"D72kQBGmbD",
"DhyfZb7VGWn",
"iOqSkBQGau3",
"i04hBA3NQ5R",
"84IVasisDKlf",
"r9bYGjdBKfP",
"kOs7dYMAvuw",
"iMC7U44FwM7",
"qSfZlpamKZ"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Update: Thanks for addressing this reviewer's comments - the paper is now substantially clearer and comprehensive. I have changed my score from 6 to 7.",
" ---\n\nThank you again for your time and expertise during the review process!\n\nIf there were any leftover concerns, we would sincerely appreciate the oppo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"D72kQBGmbD",
"iMC7U44FwM7",
"kOs7dYMAvuw",
"kOs7dYMAvuw",
"qSfZlpamKZ",
"iMC7U44FwM7",
"iMC7U44FwM7",
"iMC7U44FwM7",
"iMC7U44FwM7",
"nips_2022_2ZfUNW7SoaS",
"nips_2022_2ZfUNW7SoaS",
"nips_2022_2ZfUNW7SoaS"
] |
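
A skeleton of the accept/intervene/request decision from the abstract above: defer to the expert when the model is unsure (paying an upfront cost but gaining a label), otherwise accept or override the fallible agent. The thresholds are arbitrary, and the value-of-information term that UMPIRE uses to trade immediate loss against future generalization is omitted.

```python
import numpy as np

def mediate(model_probs, human_action, tau_accept=0.8, tau_defer=0.5):
    """Return ('accept' | 'intervene' | 'request', action) for one round."""
    best = int(np.argmax(model_probs))
    conf = model_probs[best]
    if conf < tau_defer:
        return "request", None              # pay the expert cost, receive a label
    if best == human_action or conf < tau_accept:
        return "accept", human_action       # keep the fallible agent's choice
    return "intervene", best                # confidently override with the model's action

rng = np.random.default_rng(0)
log = []
for _ in range(10):
    probs = rng.dirichlet(np.ones(3))       # stand-in for the mediator's predictive model
    human = int(rng.integers(3))
    log.append(mediate(probs, human))
print(log)
```
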
nips_2022_Hr8475tQGKE | Two-layer neural network on infinite dimensional data: global optimization guarantee in the mean-field regime | Analysis of neural network optimization in the mean-field regime is important as the setting allows for feature learning. Existing theory has been developed mainly for neural networks in finite dimensions, i.e., each neuron has a finite-dimensional parameter. However, the setting of infinite-dimensional input naturally arises in machine learning problems such as nonparametric functional data analysis and graph classification. In this paper, we develop a new mean-field analysis of two-layer neural network in an infinite-dimensional parameter space. We first give a generalization error bound, which shows that the regularized empirical risk minimizer properly generalizes when the data size is sufficiently large, despite the neurons being infinite-dimensional. Next, we present two gradient-based optimization algorithms for infinite-dimensional mean-field networks, by extending the recently developed particle optimization framework to the infinite-dimensional setting. We show that the proposed algorithms converge to the (regularized) global optimal solution, and moreover, their rates of convergence are of polynomial order in the online setting and exponential order in the finite sample setting, respectively. To our knowledge this is the first quantitative global optimization guarantee of neural network on infinite-dimensional input and in the presence of feature learning. | Accept | The paper considers the two-layer neural network in the mean-field regime and proposes an algorithm with complexity independent of the input dimension. Overall, I think the paper is very interesting. I recommend an acceptance. | train | [
"-uO-l-3B1ao",
"aNzqo4aHYi",
"-aB0qaF5xRA",
"D_bHlZFRPH7H",
"WkwrIRpFfiP",
"SOpqUjDkaJg"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your helpful comments. We address the technical comments below.\n\n**1. ``Lack disscussion on the theoretical result. It would be better to discuss the implication of the convergence rate e.g. on $\\lambda_2$.''**\n\nAs commented on line 282 (line 286 of the revised version), the exponential depende... | [
-1,
-1,
-1,
6,
5,
9
] | [
-1,
-1,
-1,
2,
3,
3
] | [
"D_bHlZFRPH7H",
"WkwrIRpFfiP",
"SOpqUjDkaJg",
"nips_2022_Hr8475tQGKE",
"nips_2022_Hr8475tQGKE",
"nips_2022_Hr8475tQGKE"
] |
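
In practice the mean-field limit above is optimized with finitely many particles (neurons), each following a noisy gradient update. A toy finite-particle sketch of such Langevin-type dynamics for a two-layer network follows; the step size, noise level, and the $M\cdot$gradient scaling (which undoes the $1/M$ from averaging over particles) are illustrative choices, not the paper's algorithm for infinite-dimensional inputs.

```python
import math
import torch

M, d = 256, 10                        # particles (= neurons) and input dimension
W = torch.randn(M, d)
lam2, lr = 1e-4, 0.1                  # entropic regularization strength, step size

def f(X, W):                          # mean-field two-layer net: average over particles
    return torch.tanh(X @ W.t()).mean(dim=1)

X = torch.randn(256, d)
y = torch.sin(3 * X[:, 0])
for step in range(1000):
    W.requires_grad_(True)
    loss = ((f(X, W) - y) ** 2).mean()
    grad, = torch.autograd.grad(loss, W)
    with torch.no_grad():             # noisy particle update (Langevin-type dynamics)
        W = W - lr * M * grad + math.sqrt(2 * lr * lam2) * torch.randn_like(W)
print(float(((f(X, W) - y) ** 2).mean()))
```
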
nips_2022_uzqUp0GjKDu | Unsupervised Learning under Latent Label Shift | What sorts of structure might enable a learner to discover classes from unlabeled data? Traditional approaches rely on feature-space similarity and heroic assumptions on the data. In this paper, we introduce unsupervised learning under Latent Label Shift (LLS), where the label marginals $p_d(y)$ shift but the class conditionals $p(x|y)$ do not. This work instantiates a new principle for identifying classes: elements that shift together group together. For finite input spaces, we establish an isomorphism between LLS and topic modeling: inputs correspond to words, domains to documents, and labels to topics. Addressing continuous data, we prove that when each label's support contains a separable region, analogous to an anchor word, oracle access to $p(d|x)$ suffices to identify $p_d(y)$ and $p_d(y|x)$ up to permutation. Thus motivated, we introduce a practical algorithm that leverages domain-discriminative models as follows: (i) push examples through domain discriminator $p(d|x)$; (ii) discretize the data by clustering examples in $p(d|x)$ space; (iii) perform non-negative matrix factorization on the discrete data; (iv) combine the recovered $p(y|d)$ with the discriminator outputs $p(d|x)$ to compute $p_d(y|x) \; \forall d$. With semisynthetic experiments, we show that our algorithm can leverage domain information to improve state of the art unsupervised classification methods. We reveal a failure mode of standard unsupervised classification methods when data-space similarity does not indicate true groupings, and show empirically that our method better handles this case. Our results establish a deep connection between distribution shift and topic modeling, opening promising lines for future work. | Accept | This paper studies the problem of identifying classes in unlabeled data. It considers a little-to-not-studied setting, where data is available exhibiting latent class proportion shift. This means that p(y) changes, but by assumption the conditional feature distributions p(x|y) do not. The paper proposes a principle that features that shift together should group together in order to identify the latent classes. In both the tabular and continuous data settings, the paper shows theoretically and in practice that this principle with the associated assumptions is sufficient to label the data in an unsupervised setting.
The reviewers all agreed that the paper makes a strong contribution to a less studied area, and that the idea of exploiting label shift to identify classes is a significant advancement in this area. In particular, they liked the connection between the proposed setting and topic modeling. This leads to an approach for handling continuous data by clustering. The reviewers also generally felt the paper is clear and well written.
Much of the discussion related to the realism of the proposed setting and the assumptions. The authors are encouraged to include the main points from these discussions in the final version. | train | [
"INmWjJu0wq",
"JVekQWqsZq",
"rg_iDG3Hxw",
"IYWXcx8jINy",
"7CG5aGxTlTl",
"hyq7f4wLe_F",
"MOiBM1FSl9VX",
"InmZ6qPrjpw2",
"aXyoxMRqJI",
"H0yc-WBsGE",
"BIZ4e8I5Q9P",
"J9MKwCPiZXP",
"BbgLFG9f4b",
"p9_HGg2Y8W5"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for your response! I agree that we can always have broader discussion about if label shift is realistic, but this work is nonetheless valuable and theoretically interesting. I appreciate the additional experiments on clustering in a naive representation space and the additional discussion on lim... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"hyq7f4wLe_F",
"7CG5aGxTlTl",
"IYWXcx8jINy",
"p9_HGg2Y8W5",
"hyq7f4wLe_F",
"BbgLFG9f4b",
"InmZ6qPrjpw2",
"J9MKwCPiZXP",
"BIZ4e8I5Q9P",
"nips_2022_uzqUp0GjKDu",
"nips_2022_uzqUp0GjKDu",
"nips_2022_uzqUp0GjKDu",
"nips_2022_uzqUp0GjKDu",
"nips_2022_uzqUp0GjKDu"
] |
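
Steps (i)-(iv) from the abstract above, run end to end on synthetic data: a logistic-regression domain discriminator stands in for $p(d|x)$, examples are clustered in discriminator-output space, and NMF of the cluster-by-domain counts recovers $p(y|d)$ up to permutation. The anchor-word NMF variant and the final combination step (iv) are simplified away; all sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_domains, n_classes, n = 6, 3, 6000
p_y_given_d = rng.dirichlet(np.ones(n_classes), size=n_domains)  # label shift across domains
means = np.array([[0, 0], [4, 0], [0, 4]])                       # p(x|y) fixed across domains

d = rng.integers(n_domains, size=n)
y = np.array([rng.choice(n_classes, p=p_y_given_d[di]) for di in d])
X = means[y] + rng.normal(size=(n, 2))

# (i) domain discriminator p(d|x); (ii) cluster examples in p(d|x) space.
pdx = LogisticRegression(max_iter=2000).fit(X, d).predict_proba(X)
cluster = KMeans(n_clusters=50, n_init=4, random_state=0).fit_predict(pdx)

# (iii) NMF of the cluster-by-domain count matrix ~ p(cluster|y) p(y|d).
counts = np.zeros((50, n_domains))
np.add.at(counts, (cluster, d), 1.0)
nmf = NMF(n_components=n_classes, random_state=0, max_iter=1000)
W = nmf.fit_transform(counts)            # ~ p(cluster | y), up to scaling
H = nmf.components_                      # ~ p(y | d), up to permutation/scaling
print(np.round((H / H.sum(axis=0, keepdims=True)).T, 2))  # compare to p_y_given_d
```
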
nips_2022_syU-XvinTI1 | No Free Lunch from Deep Learning in Neuroscience: A Case Study through Models of the Entorhinal-Hippocampal Circuit | Research in Neuroscience, as in many scientific disciplines, is undergoing a renaissance based on deep learning. Unique to Neuroscience, deep learning models can be used not only as a tool but interpreted as models of the brain. The central claims of recent deep learning-based models of brain circuits are that they make novel predictions about neural phenomena or shed light on the fundamental functions being optimized. We show, through the case-study of grid cells in the entorhinal-hippocampal circuit, that one may get neither. We begin by reviewing the principles of grid cell mechanism and function obtained from first-principles modeling efforts, then rigorously examine the claims of deep learning models of grid cells. Using large-scale hyperparameter sweeps and theory-driven experimentation, we demonstrate that the results of such models may be more strongly driven by particular, non-fundamental, and post-hoc implementation choices than fundamental truths about neural circuits or the loss function(s) they might optimize. We discuss why these models cannot be expected to produce accurate models of the brain without the addition of substantial amounts of inductive bias, an informal No Free Lunch result for Neuroscience. Based on first principles work, we provide hypotheses for what additional loss functions will produce grid cells more robustly. In conclusion, caution and consideration, together with biological knowledge, are warranted in building and interpreting deep learning models in Neuroscience. | Accept | This paper examines previous deep learning models of the entorhinal-hippocampal formation that have been reported to exhibit the emergence of grid cells. The authors show that this phenomenon only occurs under very specific hyperparameter settings and with a variety of post-hoc modelling decisions, suggesting that it is not a robust finding based on the cost functions used, as suggested in the original work. The authors use this as a case study to make the point that inductive biases and consideration of biological knowledge should be considered critical ingredients for models in computational neuroscience, and that more care needs to be taken when reporting emergent phenomena in deep learning models for neuroscience.
The reviewers were generally positive about the paper, and agreed it makes a worthwhile contribution. They had some concerns related to potential over-reach in the paper's statements/claims, but after engaging with the authors, the majority of such concerns were addressed. As such, consensus was reached to accept this paper. | train | [
"bhTR823Qvlv",
"DUr90dCvGFv",
"7JbTjFcBGJ_",
"F5iDwYC984c",
"nrvDFZEc9VJ",
"Q0mPN9ZTS3m",
"wcGL4TtZWXv",
"mnzY4ZOsXC_",
"6X_x_sW7X_G",
"O6sGxm1B-0",
"0gWOmeN2EjB",
"GS2Pa5SXr8z",
"iQ_rNDdrlym",
"MNHrAoDJYVc",
"94CS05Xh5VmR",
"ixKkpjuxNn",
"TrNsGKyryDD",
"PVIZEzYGSGG",
"y15aW50WFH... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" I am glad to see that the authors spent extra efforts to improve the hyperparameter search. Considering the appreciation demonstrated by the other Reviewers, I feel I might have been a bit biased in my initial (negative) judgement, probably due to my background in both neurocognitive modeling and deep learning. I... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
4,
3
] | [
"DUr90dCvGFv",
"mnzY4ZOsXC_",
"Q0mPN9ZTS3m",
"6X_x_sW7X_G",
"0gWOmeN2EjB",
"iQ_rNDdrlym",
"mnzY4ZOsXC_",
"94CS05Xh5VmR",
"TrNsGKyryDD",
"ixKkpjuxNn",
"ixKkpjuxNn",
"PVIZEzYGSGG",
"PVIZEzYGSGG",
"y15aW50WFHi",
"y15aW50WFHi",
"nips_2022_syU-XvinTI1",
"nips_2022_syU-XvinTI1",
"nips_20... |
nips_2022_907ZdmPmmH_ | Are all Frames Equal? Active Sparse Labeling for Video Action Detection | Video action detection requires annotations at every frame, which drastically increases the labeling cost. In this work, we focus on efficient labeling of videos for action detection to minimize this cost. We propose active sparse labeling (ASL), a novel active learning strategy for video action detection. Sparse labeling will reduce the annotation cost but poses two main challenges: 1) how to estimate the utility of annotating a single frame for action detection, given that detection is performed at the video level; and 2) how these sparse labels can be used for action detection, which requires annotations on all the frames. This work attempts to address these challenges within a simple active learning framework. For the first challenge, we propose a novel frame-level scoring mechanism aimed at selecting the most informative frames in a video. Next, we introduce a novel loss formulation which enables training of an action detection model with these sparsely selected frames. We evaluate the proposed approach on two different action detection benchmark datasets, UCF-101-24 and J-HMDB-21, and observe that active sparse labeling can be very effective in saving annotation costs. We demonstrate that the proposed approach performs better than random selection, outperforming all other baselines, with performance comparable to the supervised approach using merely 10% annotations. | Accept | The paper was reviewed by four reviewers and received: 2 x Weak Accept, 1 x Reject and 1 x Accept. Reviewers argued that the task is interesting and the approach is simple and well motivated. Some raised concerns included: (1) lack of additional comparisons and baselines, (2) lack of discussion regarding far-away frames vs. nearby frames for annotation, (3) lack of novelty, and (4) fairness of evaluation. Three out of four reviewers were reasonably convinced by the rebuttal and argued for acceptance. [2s1J] remains concerned about (3) and (4). This was carefully considered by the AC. Because no specific papers were provided by [2s1J] to support the claims of lacking novelty, and given the remaining positive reviews, the AC is inclined to accept the paper. | train | [
"I8M3Z9-mM9",
"66--m7KhXNH",
"pyYSx21pvI",
"s6mIlJN-Wv2",
"MSOxM4KfFYl",
"uEt7pA2mxH",
"Xi8oYv8frot",
"xgvJbU3goa",
"U-BAOOT35qG",
"CQCT0YVdUBL",
"K3dovbtO_Lr",
"hwop7u28ai9",
"ZXjy8r2LIH0",
"6gHoIxZE4e1",
"54ZvaVCjnDq",
"6EUKPhrtsk",
"eEWrbw4P5N"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer mzzJ,\n\nWe are sincerely thankful for the time and work you put in reviewing our paper. We hope our answers clarified your queries and if you have any more queries regarding the paper feel free to ask us any time. We will be glad to answer them.\n\nSincerely,\n\nAuthors of Paper 8487",
" Dear Rev... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
5
] | [
"eEWrbw4P5N",
"54ZvaVCjnDq",
"6gHoIxZE4e1",
"uEt7pA2mxH",
"uEt7pA2mxH",
"K3dovbtO_Lr",
"CQCT0YVdUBL",
"hwop7u28ai9",
"ZXjy8r2LIH0",
"eEWrbw4P5N",
"6EUKPhrtsk",
"54ZvaVCjnDq",
"6gHoIxZE4e1",
"nips_2022_907ZdmPmmH_",
"nips_2022_907ZdmPmmH_",
"nips_2022_907ZdmPmmH_",
"nips_2022_907ZdmPm... |
nips_2022_OQs0pLKGGpS | Tractable Function-Space Variational Inference in Bayesian Neural Networks | Reliable predictive uncertainty estimation plays an important role in enabling the deployment of neural networks to safety-critical settings. A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters, infer an approximate posterior distribution, and use it to make stochastic predictions. However, explicit inference over neural network parameters makes it difficult to incorporate meaningful prior information about the data-generating process into the model. In this paper, we pursue an alternative approach. Recognizing that the primary object of interest in most settings is the distribution over functions induced by the posterior distribution over neural network parameters, we frame Bayesian inference in neural networks explicitly as inferring a posterior distribution over functions and propose a scalable function-space variational inference method that allows incorporating prior information and results in reliable predictive uncertainty estimates. We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks and demonstrate that it performs well on a challenging safety-critical medical diagnosis task in which reliable uncertainty estimation is essential. | Accept | The review process for this manuscript is complex. The reviewers are not in consensus. Most of them have engaged considerably with the original submission as well as the significant updates that the authors have made to the manuscript post submission. In my opinion, new full covariance rank results are what make the paper interesting and these were presented after the original submission. Normally, I would find this not to be fair as the reviewers are not obligated to read such a big revision to a submitted article. But at least two reviewers have engaged with the revision considerably and I feel like the paper is stronger than what the current scores imply. The last holdout reviewer maintains a few outstanding low-confidence concerns about the paper—I do not think these should hold back the manuscript from being presented and discussed at the conference.
I am voting to accept this paper in spite of its low score, but recommend that the authors correct their behavior. Such a large revision to a manuscript puts an enormous tax on the review process; this is basically a "journal level" edit to the submission and normally this would require a second round of review. | val | [
"tsLOjhTgDaQ",
"oYU0fLyvn3d",
"Zp34ub0xyfc",
"vhPte9bNLuK",
"zQnf8q1Joy5",
"V076G7RvmN-",
"sxSNISO69wq",
"g5UOGABScN5",
"wxWn8qHrlQF",
"WIgu1Wlh5Mc",
"hTR4yn0wE5Z",
"6Wvr6m60bYX",
"TrgkmymYmgi",
"EES6Mv4-tZR",
"zYotcvta494",
"evsPnyipCY",
"O69Dq1L7zf",
"6_8qu2jmkKm",
"Pyud7m5s0WE... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" **Thank you for replying to our response!**\n\n### Clarification\n\n> I'll leave it to area chairs to decide whether to admit additional results with changes to the formulation from the original submission.\n\nWe would like to emphasize that **we have not changed the formulation of the original submission**. The ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
4
] | [
"oYU0fLyvn3d",
"O69Dq1L7zf",
"V076G7RvmN-",
"zQnf8q1Joy5",
"WIgu1Wlh5Mc",
"Pyud7m5s0WE",
"g5UOGABScN5",
"evsPnyipCY",
"nips_2022_OQs0pLKGGpS",
"R5oGk16wbKE",
"R5oGk16wbKE",
"R5oGk16wbKE",
"R5oGk16wbKE",
"R5oGk16wbKE",
"FzZB5Oy6d9",
"FzZB5Oy6d9",
"0FhCeTnPxQV",
"0FhCeTnPxQV",
"lfe... |
nips_2022_B2PpZyAAEgV | Transform Once: Efficient Operator Learning in Frequency Domain | Spectral analysis provides one of the most effective paradigms for information-preserving dimensionality reduction, as simple descriptions of naturally occurring signals are often obtained via few terms of periodic basis functions. In this work, we study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time: frequency-domain models (FDMs). Existing FDMs are based on complex-valued transforms, i.e., Fourier Transforms (FT), and layers that perform computation on the spectrum and input data separately. This design introduces considerable computational overhead: for each layer, a forward and inverse FT. Instead, this work introduces a blueprint for frequency domain learning through a single transform: transform once (T1). To enable efficient, direct learning in the frequency domain we derive a variance preserving weight initialization scheme and investigate methods for frequency selection in reduced-order FDMs. Our results noticeably streamline the design process of FDMs, pruning redundant transforms, and leading to speedups of 3x to 10x that increase with data resolution and model size. We perform extensive experiments on learning the solution operator of spatio-temporal dynamics, including incompressible Navier-Stokes, turbulent flows around airfoils and high-resolution video of smoke. T1 models improve on the test performance of FDMs while requiring significantly less computation (5 hours instead of 32 for our large-scale experiment), with over 20% reduction in predictive error across tasks. | Accept | The paper received three positive reviews (including a well motivated strong accept) and a negative one. Overall, the feedback provided by the reviewer seems to be useful. The area chair does not agree that the concerns of the negative reviewer are a good reason for a reject, even though their review may trigger interesting discussions. The majority of the reviewers believe that the paper makes an interesting contribution and the area chair is happy to recommend an accept. | train | [
"VLSJs2lp3st",
"5QUwjnDCTw9",
"nACBIoAdRg8",
"70AjYZsYmuP",
"LR7oR7MUoTE",
"mDgeS0l0SOb",
"n4VkFhKK7u",
"Nw-eQkv7pc",
"85RdUsVHaPM",
"bdS3o93T2g",
"7HbCUPRTKdb",
"S6XxacO5dCa",
"HnsZ2PoQ0SS",
"E-LQqaqu-d",
"I1fmQGMrq0f",
"58cRpJIGCUu",
"exWZzyB9yQ13",
"kdOOon-ecK",
"ogJdslDWvsK",... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_... | [
" Thank you for the discussion and for planning to raise the score (small reminder: the OpenReview score has not been updated). The updated manuscript will include additional details that emerged during the discussion phase. ",
" Thank you for the interesting insights. We have provided several arguments based on ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
4
] | [
"70AjYZsYmuP",
"nACBIoAdRg8",
"Nw-eQkv7pc",
"n4VkFhKK7u",
"bdS3o93T2g",
"85RdUsVHaPM",
"7HbCUPRTKdb",
"E-LQqaqu-d",
"exWZzyB9yQ13",
"58cRpJIGCUu",
"S6XxacO5dCa",
"I1fmQGMrq0f",
"nips_2022_B2PpZyAAEgV",
"NiMUCauqm1f",
"-E68bTjZON",
"8u6NxyMga-4",
"kdOOon-ecK",
"ogJdslDWvsK",
"Tjxj... |
nips_2022__3XVbh6L2c | Reinforcement Learning with Automated Auxiliary Loss Search | A good state representation is crucial to solving complicated reinforcement learning (RL) challenges. Many recent works focus on designing auxiliary losses for learning informative representations. Unfortunately, these handcrafted objectives rely heavily on expert knowledge and may be sub-optimal. In this paper, we propose a principled and universal method for learning better representations with auxiliary loss functions, named Automated Auxiliary Loss Search (A2LS), which automatically searches for top-performing auxiliary loss functions for RL. Specifically, based on the collected trajectory data, we define a general auxiliary loss space of size $7.5 \times 10^{20}$ and explore the space with an efficient evolutionary search strategy. Empirical results show that the discovered auxiliary loss (namely, A2-winner) significantly improves the performance on both high-dimensional (image) and low-dimensional (vector) unseen tasks with much higher efficiency, showing promising generalization ability to different settings and even different benchmark domains. We conduct a statistical analysis to reveal the relations between patterns of auxiliary losses and RL performance. | Accept | The paper introduces an interesting evolutionary scheme for black-box hyper-parameter optimisation for representation learning; it looks like a useful and general tool for optimising RL agents when compute is not a constraint. | train | [
"I86RAnEKUuK",
"egs2QOkgLNh",
"_8GzAQK0mEk",
"EjXFES9a-sI",
"T_SNjbXZYEW",
"xD6GYcfNXLL",
"76e10Lql2lx_",
"6DkoHrE4fuA",
"eVh3pjPgt7z",
"D7DV-rSDLFi",
"3wcA0eZqZFg",
"NsFLe2_h4IC",
"o6Oz3FJReF",
"ojvCGmZYJq",
"KMtz_uD-wn",
"ZxuiJkpYvt5",
"G4TaqQjzN9E",
"euKAixEmLHn"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After pondering on your comments, we agree with you that parallelized random sampling has the potential to achieve even stronger performance. However, we would like to point out that:\n\nOur paper has already made a large number of important contributions: (i) we are the first to search for auxiliary loss functio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"egs2QOkgLNh",
"_8GzAQK0mEk",
"EjXFES9a-sI",
"T_SNjbXZYEW",
"o6Oz3FJReF",
"76e10Lql2lx_",
"eVh3pjPgt7z",
"nips_2022__3XVbh6L2c",
"D7DV-rSDLFi",
"euKAixEmLHn",
"NsFLe2_h4IC",
"G4TaqQjzN9E",
"ojvCGmZYJq",
"KMtz_uD-wn",
"ZxuiJkpYvt5",
"nips_2022__3XVbh6L2c",
"nips_2022__3XVbh6L2c",
"n... |
nips_2022_u8FDFtoMKp2 | Zero-shot Transfer Learning within a Heterogeneous Graph via Knowledge Transfer Networks | Data continuously emitted from industrial ecosystems such as social or e-commerce platforms are commonly represented as heterogeneous graphs (HG) composed of multiple node/edge types. State-of-the-art graph learning methods for HGs known as heterogeneous graph neural networks (HGNNs) are applied to learn deep context-informed node representations. However, many HG datasets from industrial applications suffer from label imbalance between node types. As there is no direct way to learn using labels rooted at different node types, HGNNs have been applied to only a few node types with abundant labels. We propose a zero-shot transfer learning module for HGNNs called a Knowledge Transfer Network (KTN) that transfers knowledge from label-abundant node types to zero-labeled node types through rich relational information given in the HG. KTN is derived from the theoretical relationship, which we introduce in this work, between distinct feature extractors for each node type given in an HGNN model. KTN improves the performance of 6 different types of HGNN models by up to 960% for inference on zero-labeled node types and outperforms state-of-the-art transfer learning baselines by up to 73% across 18 different transfer learning tasks on HGs. | Accept | This paper proposes a transfer learning technique in heterogeneous graphs, which can contain different types of nodes and edges. It identifies a limitation of existing work on graph neural networks for heterogeneous graphs: they tend to implicitly learn separate representation encoders for each type of node. In other words, even though the same network is applied to all nodes, it tends to use separate activation paths to compute the representations. The proposed Knowledge Transfer Networks (KTNs) address this limitation with a novel architecture that explicitly learns a mapping between different domains (in this case, node types). Experiments on benchmark heterogeneous graphs show that KTNs lead to large improvements in classification accuracy on new node types.
The reviewers all liked the contributions of the paper, identifying the problem as important, the KTN architecture as novel, and the experimental results convincing. During the review process, the authors have already updated the paper several times in response to the reviewers' feedback. In addition, they are encouraged to include the additional results with ALDA in the final version.
One remaining area for improvement is precisely scoping, in the introduction and informal problem statements, what exactly is meant by zero-shot learning in this context. As stated in the limitation statement, KTNs require that the tasks share the same label space. The authors have clarified this in section 3.3, but that is still too late for such a key definition. | train | [
"UnahOmnAEO",
"WN3w0RFgvLd",
"KKlSaQ2t741",
"O22O58rP7um",
"rSLJSpjlt_Q",
"QsU1fpW0v28",
"yLMnpllaSMo",
"Gwm0SZMkNLW",
"OqfoOJkVTMC",
"KJx7zSCCsSj",
"r6Qp7IyN75P",
"RY87z6wL30q",
"0zn4_5VfUAz",
"Y4XqWMjv1RC",
"uY2uHcJcWz",
"Cc60WvvRxI",
"1HbmxXCFuyE"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After having a discussion with the authors, all my questions/comments are clearly resolved. Thank you again for responding to my comments, and engaging with me to clarify my questions. I am willing to maintain my score: accept. ",
" We really appreciate all your comments and questions, which have helped us to i... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"WN3w0RFgvLd",
"QsU1fpW0v28",
"yLMnpllaSMo",
"rSLJSpjlt_Q",
"OqfoOJkVTMC",
"Gwm0SZMkNLW",
"RY87z6wL30q",
"1HbmxXCFuyE",
"KJx7zSCCsSj",
"r6Qp7IyN75P",
"uY2uHcJcWz",
"Cc60WvvRxI",
"Y4XqWMjv1RC",
"nips_2022_u8FDFtoMKp2",
"nips_2022_u8FDFtoMKp2",
"nips_2022_u8FDFtoMKp2",
"nips_2022_u8FDF... |
nips_2022_9WJU4Lu2KTX | Uncertainty Estimation for Multi-view Data: The Power of Seeing the Whole Picture | Uncertainty estimation is essential to make neural networks trustworthy in real-world applications. Extensive research efforts have been made to quantify and reduce predictive uncertainty. However, most existing works are designed for unimodal data, whereas multi-view uncertainty estimation has not been sufficiently investigated. Therefore, we propose a new multi-view classification framework for better uncertainty estimation and out-of-domain sample detection, where we associate each view with an uncertainty-aware classifier and combine the predictions of all the views in a principled way. The experimental results with real-world datasets demonstrate that our proposed approach is an accurate, reliable, and well-calibrated classifier, which predominantly outperforms the multi-view baselines tested in terms of expected calibration error, robustness to noise, and accuracy for the in-domain sample classification and the out-of-domain sample detection tasks. | Accept | Two out of three reviewers lean toward acceptance, with one of them championing the paper. The reviewer that’s most critical did not engage with either the authors or with the reviewers / AC during the discussion period. Furthermore, the review itself is pretty sparse in details and justifications for the criticism. Upon looking at the manuscript I agree with the other two positive reviewers that the work is innovative and the idea of combining multiple views under the Product of Experts framework is interesting. Using Sparse GPs as an underlying mechanism does address some of the concerns about the computational complexity of GPs. Additionally, the experimental evaluation is detailed enough and highlights the potential of the proposed model. Given all the above, and including the observation that the manuscript is well-written with a principled inference mechanism, I am recommending the manuscript for acceptance at NeurIPS. | train | [
"ajVTIBmllnU",
"Otx2kkR-J-y",
"XRm2UfkSsiH",
"mEjgeOJVDCL",
"D3AHFaS7tNS",
"1HcX7YWaVLM",
"1xxP_3DlLVQ",
"K3lQ2MIvZU",
"YC0za7O4d-J",
"RqzxuG86_UQ",
"MTma_fiPvRY",
"wFKzo6WqPSF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for a comprehensive rebuttal. Most of my concerns are addressed. I think some part of the rebuttal should be incorporated into the final revision such as answers to point 1, 2, 5, and 8 to better reveal some important technical details and justify the significance of the results. Regarding my p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4
] | [
"mEjgeOJVDCL",
"1xxP_3DlLVQ",
"YC0za7O4d-J",
"D3AHFaS7tNS",
"1HcX7YWaVLM",
"wFKzo6WqPSF",
"K3lQ2MIvZU",
"MTma_fiPvRY",
"RqzxuG86_UQ",
"nips_2022_9WJU4Lu2KTX",
"nips_2022_9WJU4Lu2KTX",
"nips_2022_9WJU4Lu2KTX"
] |
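The meta-review above credits the multi-view paper with combining per-view predictions under the Product of Experts framework. For Gaussian experts the PoE fusion rule has a closed form, shown in the sketch below; the actual paper attaches an uncertainty-aware (sparse-GP-based) classifier to each view, which this toy does not model:

```python
import numpy as np

def poe_fuse(means, variances):
    """Product of independent Gaussian experts: the fused precision is
    the sum of per-view precisions, so confident views dominate and an
    uninformative view (huge variance) barely moves the result."""
    prec = 1.0 / np.asarray(variances)
    var = 1.0 / prec.sum(axis=0)
    mean = var * (prec * np.asarray(means)).sum(axis=0)
    return mean, var

# Two views agree and are confident, a third is noisy and uncertain.
m, v = poe_fuse(means=[1.0, 1.1, 4.0], variances=[0.1, 0.1, 10.0])
print(round(m, 3), round(v, 3))   # fused mean stays near 1.05
```

Precision-weighted fusion is exactly what makes PoE attractive for uncertainty estimation: a view that "cannot see" the label reports high variance and is automatically down-weighted.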
nips_2022_D45iCWZYcff | Sub-exponential time Sum-of-Squares lower bounds for Principal Components Analysis | Principal Components Analysis (PCA) is a dimension-reduction technique widely used in machine learning and statistics. However, due to the dependence of the principal components on all the dimensions, the components are notoriously hard to interpret. Therefore, a variant known as sparse PCA is often preferred. Sparse PCA learns principal components of the data but enforces that such components must be sparse. This has applications in diverse fields such as computational biology and image processing. To learn sparse principal components, it's well known that standard PCA will not work, especially in high dimensions, and therefore algorithms for sparse PCA are often studied as a separate endeavor. Various algorithms have been proposed for Sparse PCA over the years, but given how fundamental it is for applications in science, the limits of efficient algorithms are only partially understood. In this work, we study the limits of the powerful Sum of Squares (SoS) family of algorithms for Sparse PCA. SoS algorithms have recently revolutionized robust statistics, leading to breakthrough algorithms for long-standing open problems in machine learning, such as optimally learning mixtures of gaussians, robust clustering, robust regression, etc. Moreover, it is believed to be the optimal robust algorithm for many statistical problems. Therefore, for sparse PCA, it's plausible that it can beat simpler algorithms such as diagonal thresholding that have been traditionally used. In this work, we show that this is not the case, by exhibiting strong tradeoffs between the number of samples required, the sparsity and the ambient dimension, for which SoS algorithms, even if allowed sub-exponential time, will fail to optimally recover the component. Our results are complemented by known algorithms in literature, thereby painting an almost complete picture of the behavior of efficient algorithms for sparse PCA. Since SoS algorithms encapsulate many algorithmic techniques such as spectral or statistical query algorithms, this solidifies the message that known algorithms are optimal for sparse PCA. Moreover, our techniques are strong enough to obtain similar tradeoffs for Tensor PCA, another important higher order variant of PCA with applications in topic modeling, video processing, etc. | Accept | The reviewers appreciate the solid theoretical results on the hardness of sparse and tensor PCA. The nearly sharp characterization of limitations of sum-of-square methods makes the paper stand out. The proof techniques may potentially be useful for other problems. Based on the above, I recommend acceptance. Meanwhile, please carefully revise the paper to improve presentation. As the paper is technically involved, it would be nice to better elaborate the proof ideas and highlight the key challenges. | train | [
"aKhvhCBCyfEA",
"ybt502bILaK",
"WYCsO-pWRuv",
"qgbMMOrcRmR",
"UP0Ci9QwvA3",
"rE6VD-ztWrS",
"72KjJTIuesw",
"Ob4sf3PZdFz",
"1PgYH5vRHB",
"gaRzYGVE5em",
"5IW0SfoUS8H",
"E-vyYPFKocf",
"ypOfrOlmj1E",
"lASpqvgTYC"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We kindly thank the reviewer for reading our response and updating their review and score. We are happy that the response was able to address their concerns and welcome any additional suggestions to improve the text.",
" Thank you for your response. I stand corrected regarding my comment on the failure of SoS w... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5,
3
] | [
"ybt502bILaK",
"qgbMMOrcRmR",
"rE6VD-ztWrS",
"lASpqvgTYC",
"ypOfrOlmj1E",
"E-vyYPFKocf",
"nips_2022_D45iCWZYcff",
"5IW0SfoUS8H",
"gaRzYGVE5em",
"nips_2022_D45iCWZYcff",
"nips_2022_D45iCWZYcff",
"nips_2022_D45iCWZYcff",
"nips_2022_D45iCWZYcff",
"nips_2022_D45iCWZYcff"
] |
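The sparse-PCA record above repeatedly refers to diagonal thresholding as the simple algorithm that, per the new lower bounds, sub-exponential SoS cannot substantially beat. For concreteness, here is the standard diagonal-thresholding estimator run on a planted spiked model; the spiked-covariance toy data is an assumption made purely for illustration:

```python
import numpy as np

def diagonal_thresholding(X, k):
    """Pick the k coordinates with the largest empirical variance,
    then run plain PCA restricted to that support."""
    n, d = X.shape
    cov = X.T @ X / n
    support = np.argsort(-np.diag(cov))[:k]
    sub = cov[np.ix_(support, support)]
    _, vecs = np.linalg.eigh(sub)
    v = np.zeros(d)
    v[support] = vecs[:, -1]          # top eigenvector of the submatrix
    return v

rng = np.random.default_rng(0)
d, k, n, snr = 200, 10, 2000, 3.0
u = np.zeros(d); u[:k] = 1 / np.sqrt(k)         # planted k-sparse spike
X = rng.standard_normal((n, d)) + np.sqrt(snr) * rng.standard_normal((n, 1)) * u
v = diagonal_thresholding(X, k)
print(abs(v @ u))                                # close to 1: spike recovered
```

On the support, diagonal covariance entries are inflated by snr/k, which is what thresholding detects once enough samples are available.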
nips_2022__h2FKc6E_YV | Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination | Graph contrastive learning (GCL) alleviates the heavy reliance on label information for graph representation learning (GRL) via self-supervised learning schemes. The core idea is to learn by maximising mutual information for similar instances, which requires similarity computation between two node instances. However, GCL is inefficient in both time and memory consumption. In addition, GCL normally requires a large number of training epochs to be well-trained on large-scale datasets. Inspired by an observation of a technical defect (i.e., inappropriate usage of the Sigmoid function) commonly found in two representative GCL works, DGI and MVGRL, we revisit GCL and introduce a new learning paradigm for self-supervised graph representation learning, namely, Group Discrimination (GD), and propose a novel GD-based method called Graph Group Discrimination (GGD). Instead of similarity computation, GGD directly discriminates two groups of node samples with a very simple binary cross-entropy loss. In addition, GGD requires much fewer training epochs to obtain competitive performance compared with GCL methods on large-scale datasets. These two advantages endow GGD with highly efficient properties. Extensive experiments show that GGD outperforms state-of-the-art self-supervised methods on eight datasets. In particular, GGD can be trained in 0.18 seconds (6.44 seconds including data preprocessing) on ogbn-arxiv, which is orders of magnitude (10,000+) faster than GCL baselines while consuming much less memory. Trained for 9 hours on ogbn-papers100M with billions of edges, GGD outperforms its GCL counterparts in both accuracy and efficiency. | Accept | The paper presents a novel contrastive method of graph representation learning motivated by the observation that previous approaches to graph contrastive learning actually do group discrimination. The proposed method learns graph representations such that the representations of the original graph and a corrupted version are easily distinguishable when projected to a scalar space. The reviewers found the method to be simple and comparable in performance with state-of-the-art methods, while more scalable. | train | [
"t8sIYw3vvl",
"6QbEQjzAnAI",
"z6n40ZmCXo_",
"NlD8BrOX7b",
"B496d9-nmb",
"NNinpzY918",
"25hha5W1ABO",
"QjQNHmsLrmA",
"p-kReLA6ojg",
"jaoAXqEqiEu",
"qrYsd7Cjxkp",
"HWGeLE8dNrP",
"AdFHqOo4nsY",
"M5qRsqBsTb",
"LAC7V8IlJDI",
"Z_wd1g8NPQ",
"64S1DDuVye",
"D06dJMnT7aB",
"m1gxEUoWKA6",
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
" Dear authors,\n\nThe reviewer highly appreciates the efforts you made and has updated the score. Thanks!\n\nBest,\n",
" Thanks for the updated feedback. The reviewer highly appreciates the authors' efforts and is deeply moved by the authors' attitude despite the fact that the proof still has some major flaws. ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
5
] | [
"z6n40ZmCXo_",
"NlD8BrOX7b",
"HWGeLE8dNrP",
"25hha5W1ABO",
"64S1DDuVye",
"QjQNHmsLrmA",
"qrYsd7Cjxkp",
"p-kReLA6ojg",
"LAC7V8IlJDI",
"HWGeLE8dNrP",
"HWGeLE8dNrP",
"Z_wd1g8NPQ",
"nips_2022__h2FKc6E_YV",
"D06dJMnT7aB",
"rwXVtyJ_xuM",
"ucz6YtIcps",
"m1gxEUoWKA6",
"nips_2022__h2FKc6E_Y... |
nips_2022_1X5zpwWoHwu | Nearly-Tight Bounds for Testing Histogram Distributions | We investigate the problem of testing whether a discrete probability distribution over an ordered domain is a histogram on a specified number of bins. One of the most common tools for the succinct approximation of data, $k$-histograms over $[n]$, are probability distributions that are piecewise constant over a set of $k$ intervals. Given samples from an unknown distribution $\mathbf p$ on $[n]$, we want to distinguish between the cases that $\mathbf p$ is a $k$-histogram versus far from any $k$-histogram, in total variation distance. Our main result is a sample near-optimal and computationally efficient algorithm for this testing problem, and a nearly-matching (within logarithmic factors) sample complexity lower bound, showing that the testing problem has sample complexity $\widetilde \Theta (\sqrt{nk} / \epsilon + k / \epsilon^2 + \sqrt{n} / \epsilon^2)$. | Accept | Given samples from an unknown discrete distribution, the goal of the paper is to test if it is a histogram over k bins or epsilon-far from all such distributions in the total variation distance. The authors provide a computationally efficient algorithm and further show that the sample complexity is near optimal. The reviewers agree that the results are interesting and novel and I recommend acceptance. As reviewers remark, the paper can benefit from a discussion of the motivation of this problem formulation and the practicality of the proposed approach. I strongly encourage the authors to add a discussion addressing these comments in the final version of the paper. | test | [
"Jkv2Fa825PC",
"b8pG8SCE9I",
"5s6OvaF6Fef",
"-snsh-3X-8",
"EBgXVtTJZYZ",
"G8Ke_HiNxlJ",
"WQnG8z4uJSc",
"G29kin2EpT1",
"eR98SEi4XJx"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. I have no further clarifications. ",
" We appreciate the reviewer’s concerns regarding practicality, and refer to the response to reviewer #VmGT for this aspect. Regarding the second question, i.e., the (related) problem of testing whether a distribution is a k-histogram up to a permut... | [
-1,
-1,
-1,
-1,
-1,
8,
7,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"EBgXVtTJZYZ",
"eR98SEi4XJx",
"G29kin2EpT1",
"WQnG8z4uJSc",
"G8Ke_HiNxlJ",
"nips_2022_1X5zpwWoHwu",
"nips_2022_1X5zpwWoHwu",
"nips_2022_1X5zpwWoHwu",
"nips_2022_1X5zpwWoHwu"
] |
nips_2022_EEcFW47sktI | Conditional Diffusion Process for Inverse Halftoning | Inverse halftoning is a technique used to recover realistic images from ancient prints (\textit{e.g.}, photographs, newspapers, books). The rise of deep learning has led to the gradual incorporation of neural network designs into inverse halftoning methods. Most existing inverse halftoning approaches adopt the U-net architecture, which uses an encoder to encode halftone prints, followed by a decoder for image reconstruction. However, the mainstream supervised learning paradigm with element-wise regression commonly adopted in U-net based methods has poor generalization ability in practical applications. Specifically, when there is a large gap between the dithering patterns of the training and test halftones, the reconstructed continuous-tone images have obvious artifacts. This is an important issue in practical applications, since the algorithms for generating halftones are ever-evolving. Even for the same algorithm, different parameter choices will result in different halftone dithering patterns. In this paper, we propose the first generative halftoning method in the literature, which regards the black pixels in halftones as physically moving particles, and makes the randomly distributed particles move under certain guidance through a reverse diffusion process, so as to obtain the desired halftone patterns. In particular, we propose a Conditional Diffusion model for image Halftoning (CDH), which consists of a halftone dithering process and an inverse halftoning process. By changing the initial state of the diffusion model, our method can generate visually plausible halftones with different dithering patterns under the condition of image gray level and Laplacian prior. To avoid introducing redundant patterns and undesired artifacts, we propose a meta-halftone guided network to incorporate blue noise guidance in the diffusion process. In this way, halftone images subject to more diverse distributions are fed into the inverse halftoning model, which helps the model to learn a more robust mapping from halftone distributions to continuous-tone distributions, thereby improving the generalization ability to unseen samples. Quantitative and qualitative experimental results demonstrate that the proposed method achieves state-of-the-art results. | Accept | Most reviewers are positive about the paper. After rebuttal, a number of issues were addressed, and the additional evaluations provided in the authors' response should be published in the paper or supplemental. The clarity of the technical exposition and details can still be improved, but reviewers found the results better than SOTA and the ideas worth publishing. | train | [
"O-K5l3R9Fbs",
"3HOQSzchFh1",
"nBrN-Dz3va",
"l8u8CsbE1dT",
"5icgOQ3uwZ",
"wOwqsZ4dzap",
"KE10y5OXfCx",
"OC9WCawBWNt",
"kKjRGWWnfl8",
"DaN1WoY16HP",
"w7W40Gqg7tc",
"8gHSpATCYo0",
"Ya6IWPaXJoT",
"6w5Wk0QslX",
"13bJQsMbWah"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" \nDear reviewer:\n\nIn addition to the added comparisons above, we also verify the performance of Palette and its variants (Chitwan et al., 2021) as baselines, and the results are as follows.\n\nMethod | Variants | PSNR | SSIM\n-|-|-|-\n(Chitwan et al., 2021) | Channel 16, Res 1 | 21.12 | 0.693\n(Chitwan et al., ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
1,
2
] | [
"Ya6IWPaXJoT",
"13bJQsMbWah",
"13bJQsMbWah",
"13bJQsMbWah",
"kKjRGWWnfl8",
"13bJQsMbWah",
"13bJQsMbWah",
"13bJQsMbWah",
"6w5Wk0QslX",
"Ya6IWPaXJoT",
"8gHSpATCYo0",
"nips_2022_EEcFW47sktI",
"nips_2022_EEcFW47sktI",
"nips_2022_EEcFW47sktI",
"nips_2022_EEcFW47sktI"
] |
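The halftoning record above builds on a conditional diffusion model: the reverse process is steered by the image gray level and a Laplacian prior. The snippet below is the generic conditional-DDPM training step (noise the target, regress the noise given the conditions); the tiny conv net, the two condition channels, and the omission of a timestep embedding are simplifications for illustration, not the paper's CDH architecture:

```python
import torch, torch.nn as nn

# Toy denoiser: input channels = noisy halftone + two condition maps
# (stand-ins for gray level and Laplacian prior), output = predicted
# noise. A real denoiser would also embed the timestep t.
net = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def ddpm_step(x0, cond, opt):
    """One conditional-DDPM training step: sample a timestep, noise the
    clean target, and regress the injected noise given the conditions."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    a = alphas_bar[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps
    pred = net(torch.cat([x_t, cond], dim=1))
    loss = ((pred - eps) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
x0 = torch.rand(4, 1, 32, 32)          # target halftones
cond = torch.rand(4, 2, 32, 32)        # gray-level + Laplacian conditions
print(ddpm_step(x0, cond, opt))
```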
nips_2022_fbUybomIuE | Analyzing Lottery Ticket Hypothesis from PAC-Bayesian Theory Perspective | The lottery ticket hypothesis (LTH) has attracted attention because it can explain why over-parameterized models often show high generalization ability. It is known that when we use iterative magnitude pruning (IMP), which is an algorithm to find sparse networks with high generalization ability that can be trained from the initial weights independently, called winning tickets, the initial large learning rate does not work well in deep neural networks such as ResNet. However, since the initial large learning rate generally helps the optimizer to converge to flatter minima, we hypothesize that the winning tickets have relatively sharp minima, which is considered a disadvantage in terms of generalization ability. In this paper, we confirm this hypothesis and show that the PAC-Bayesian theory can provide an explicit understanding of the relationship between LTH and generalization behavior. On the basis of our experimental findings that IMP with a small learning rate finds relatively sharp minima and that the distance from the initial weights is deeply involved in winning tickets, we offer the PAC-Bayes bound using a spike-and-slab distribution to analyze winning tickets. Finally, we revisit existing algorithms for finding winning tickets from a PAC-Bayesian perspective and provide new insights into these methods. | Accept | Overall: The paper analyzes the key factors or indicators behind the successful identification of winning tickets in Lottery Ticket Hypothesis (LTH).
Reviews: The paper received four reviews: Strong Accept (absolutely confident), Accept (absolutely confident), Accept (confident), and Reject (less confident). It seems that there are at least three reviewers who will champion the paper for publication. The reviewers
found the paper clear, with a clean presentation. The findings are interesting, as is the PAC-Bayesian perspective. The authors have provided extensive answers to the reviewers' comments, answering most of them successfully.
Main issues raised by reviewers:
- The criterion for finding the winning ticket is unclear
- The connection between IMP for winning tickets and the PAC-Bayesian model is not strong
- The presentation can be improved.
However, the authors have tackled most of the concerns raised.
After rebuttal: The authors have provided extensive clarifications with many additional experimental results, addressing several of the issues raised by the reviewers (some of whom acknowledged this effort and raised their scores). Overall, the engaged reviewers seem happy with the changes and propose acceptance.
Confidence of reviews: Overall, the reviewers are fairly confident. We will put more weight on the reviews from reviewers who engaged in the rebuttal discussion period. | val | [
"dsc20tTRxRd",
"bTzjtbTeC8u",
"pdW7hyu36Aw",
"vg5Aa2oNf26",
"zjWu25tKzQU",
"V1blXa5rgVW",
"d97XVybETck",
"6cKDbDV66r",
"RJ3Gb2DMw5y",
"5LddfYmHPu7",
"8_kJf_ho2JZ",
"hVLWyV0yrE4",
"CD50tOdXIR6",
"zpo74j3k9MZ",
"AKRmmu_TUK5",
"Vawo_iRPHW",
"JO9RejjmewD",
"A6nI281pAEm",
"bolunuUrlAI... | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thanks for the quick reply.\n\nI believe these revisions will be very valuable since the Appendix has interesting experiments which I myself hadn't given enough attention when reading the paper for my original review. Although some of the results in Appendix B.2, B.3 and B.4 are negative (as in, they show differe... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
3,
2
] | [
"bTzjtbTeC8u",
"vg5Aa2oNf26",
"AKRmmu_TUK5",
"5LddfYmHPu7",
"V1blXa5rgVW",
"d97XVybETck",
"6cKDbDV66r",
"hVLWyV0yrE4",
"AKRmmu_TUK5",
"8_kJf_ho2JZ",
"JO9RejjmewD",
"A6nI281pAEm",
"zpo74j3k9MZ",
"bolunuUrlAI",
"Vawo_iRPHW",
"nips_2022_fbUybomIuE",
"nips_2022_fbUybomIuE",
"nips_2022_... |
nips_2022_WIJ2SfPTj8c | ISAAC Newton: Input-based Approximate Curvature for Newton's Method | We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that conditions the gradient using selected second-order information and has an asymptotically vanishing computational overhead, assuming a batch size smaller than the number of neurons. We show that it is possible to compute a good conditioner based on only the input to a respective layer without a substantial computational overhead. The proposed method allows effective training even in small-batch stochastic regimes, which makes it competitive with first-order as well as quasi-Newton methods. | Reject | The paper adds two regularization parameters that create a method interpolating between gradient descent and KFAC. The authors then prove that the method interpolates, and several numerical experiments are presented on MNIST, BERT training, and ResNets. But the reviewers were not convinced by the experimental results, which did not benchmark the method against SOTA, lacked large-scale experiments, and lacked comparisons against other second-order methods, of which there are now many.
In addition to this, though the paper cites broadly, it does not review, cite, or cover the recent efforts on improving KFAC; for example, the missing reference [1]. Though this is a quasi-Newton approach, it also employs a type of regularization on the aa^T term. Finally, I think the introduction of these two regularization parameters needs to be slightly better motivated. For now, the motivation is to interpolate between SGD and KFAC. But this is not enough. One can always generate new methods by interpolating between existing methods by adding a parameter. I recommend exploring the LM (Levenberg-Marquardt) viewpoint of this type of regularization. This may give other viewpoints and motivation for using these two regularization parameters.
[1] Goldfarb, D., Ren, Y., Bahamou, A.: Practical quasi-Newton methods for training deep neural networks. In: Advances in Neural Information Processing Systems. | train | [
"lEMnyYB0Y7E",
"HqtByWNkwI",
"zsE8Qd6HEKu",
"HyM61tM5_jE",
"SSLU4bqkZZ",
"33DCGESqegu",
"HDzCQmXcl-hF",
"p34VOYpjMzZ",
"FAh1URpahS",
"HhPx-tOelmn",
"bgn406aGsFZ",
"n0vqsuxLYZXr",
"QMvoL3al-12",
"gocYINIa7js",
"EhV4Wq2MV8",
"DM9Tzkb3BRv"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read other reviews and the rebuttal. I still have reservations about the significance of the one-parameter update proposed but I have no strong reasons to reject the revised paper.",
" We wish to thank you for helping us improve the paper. Hopefully, you have had a chance to take a look at our responses.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
2
] | [
"HqtByWNkwI",
"FAh1URpahS",
"bgn406aGsFZ",
"n0vqsuxLYZXr",
"33DCGESqegu",
"HDzCQmXcl-hF",
"p34VOYpjMzZ",
"HhPx-tOelmn",
"DM9Tzkb3BRv",
"EhV4Wq2MV8",
"gocYINIa7js",
"QMvoL3al-12",
"nips_2022_WIJ2SfPTj8c",
"nips_2022_WIJ2SfPTj8c",
"nips_2022_WIJ2SfPTj8c",
"nips_2022_WIJ2SfPTj8c"
] |
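The ISAAC record above claims a conditioner computable from a layer's input alone, with overhead vanishing when the batch size is below the layer width. Without asserting the paper's exact update rule, the mechanics can be illustrated: regularize the input second moment a aᵀ/b and invert it via the Woodbury identity, so only a b×b system is ever solved. The function name, shapes, and damping value below are all illustrative assumptions:

```python
import numpy as np

def input_preconditioned_grad(grad_W, a, lam=0.1):
    """Condition a linear layer's gradient with the regularized input
    second moment A = a a^T / b, using the Woodbury identity so the
    cost is dominated by a b x b solve (cheap when batch b < fan-in d)."""
    d, b = a.shape
    U = a / np.sqrt(b)                        # so that A = U U^T
    S = lam * np.eye(b) + U.T @ U             # b x b
    # (A + lam I)^{-1} = (I - U S^{-1} U^T) / lam      (Woodbury)
    return (grad_W - (grad_W @ U) @ np.linalg.solve(S, U.T)) / lam

rng = np.random.default_rng(0)
d_out, d_in, b = 64, 256, 16
a = rng.standard_normal((d_in, b))            # layer inputs, one per column
g = rng.standard_normal((d_out, d_in))        # raw gradient dL/dW
g_cond = input_preconditioned_grad(g, a)
# Sanity check against the dense d_in x d_in inverse:
A = a @ a.T / b
ref = g @ np.linalg.inv(A + 0.1 * np.eye(d_in))
print(np.allclose(g_cond, ref))               # True
```

The key point matching the abstract: only the layer input a is needed, never the output-side curvature factor, and the b×b solve makes the overhead shrink relative to the layer's own matmul as width grows.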
nips_2022_U2bAR6qzF9E | Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations | Many patterns in nature exhibit self-similarity: they can be compactly described via self-referential transformations. Said patterns commonly appear in natural and artificial objects, such as molecules, shorelines, galaxies, and even images. In this work, we investigate the role of learning in the automated discovery of self-similarity and in its utilization for downstream tasks. To this end, we design a novel class of implicit operators, Neural Collages, which (1) represent data as the parameters of a self-referential, structured transformation, and (2) employ hypernetworks to amortize the cost of finding these parameters to a single forward pass. We detail how to leverage the representations produced by Neural Collages in various tasks, including data compression and generation. Neural Collage image compressors are orders of magnitude faster than other self-similarity-based algorithms during encoding and offer compression rates competitive with implicit methods. Finally, we showcase applications of Neural Collages for fractal art and as deep generative models. | Accept | This paper introduces neural collages, which are operators that aim to represent an image via parameters to a self-referential transformation. The parameters for a given image are predicted by a hypernetwork. The approach is mainly evaluated on image compression, but applications to generative modeling are considered. While reviewers had some trouble reviewing the submission because it falls outside their areas of expertise, they generally agreed that the approach was novel, interesting, and had compelling applications. | train | [
"ffc1npD75q",
"LcKL6o3JzUa",
"AOGN-uybhVl",
"HRrHBIHX_NZX",
"R0Qats2wORq",
"-U00h1AQsz",
"BRko-NAzFub",
"472lHqfQNJT",
"-XKn_ly40Pe",
"uC2c3DFWjF",
"ONO1sRAv2Bb"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" As the author-reviewer discussion phase is coming to an end, please let us know if there are further questions or concerns to address.",
" As the author-reviewer discussion phase is coming to an end, please let us know if there are further questions or concerns to address.",
" Thank you for raising your score... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
3
] | [
"472lHqfQNJT",
"R0Qats2wORq",
"-U00h1AQsz",
"nips_2022_U2bAR6qzF9E",
"ONO1sRAv2Bb",
"BRko-NAzFub",
"uC2c3DFWjF",
"-XKn_ly40Pe",
"nips_2022_U2bAR6qzF9E",
"nips_2022_U2bAR6qzF9E",
"nips_2022_U2bAR6qzF9E"
] |
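The Neural Collages record above represents data as parameters of a self-referential transformation. The classical substrate for this is fractal (collage) coding: each block of the signal is an affine image of a pooled block of the signal itself, and contractivity makes decoding a fixed-point iteration. Below is a 1-D toy decoder, with random parameters where the paper would use hypernetwork outputs:

```python
import numpy as np

def collage_decode(params, x0, r=8, iters=100):
    """Decode a 1-D 'collage': each length-r range block is an affine
    image of a pooled domain block of the signal itself. With |scale| < 1
    the collage operator is contractive, so iterating it from *any*
    starting signal converges to the same fixed point (the decoded data)."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    for _ in range(iters):
        pooled = x.reshape(-1, 2).mean(axis=1)        # domain pool, n // 2
        new = np.empty(n)
        for i, (dom, scale, offset) in enumerate(params):
            new[i * r:(i + 1) * r] = scale * pooled[dom:dom + r] + offset
        x = new
    return x

rng = np.random.default_rng(0)
n, r = 64, 8
params = list(zip(rng.integers(0, n // 2 - r, n // r),   # domain positions
                  rng.uniform(-0.8, 0.8, n // r),        # contractive scales
                  rng.uniform(-1.0, 1.0, n // r)))       # offsets
x1 = collage_decode(params, np.zeros(n))
x2 = collage_decode(params, rng.standard_normal(n))
print(np.max(np.abs(x1 - x2)) < 1e-8)                    # same fixed point: True
```

The compactness claim follows from this picture: the signal is stored as a handful of (position, scale, offset) triples rather than raw samples.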
nips_2022__WqHmwoE7Ud | PlasticityNet: Learning to Simulate Metal, Sand, and Snow for Optimization Time Integration | In this paper, we propose a neural network-based approach for learning to represent the behavior of plastic solid materials ranging from rubber and metal to sand and snow. Unlike elastic forces such as spring forces, these plastic forces do not result from the positional gradient of any potential energy, imposing great challenges on the stability and flexibility of their simulation. Our method effectively resolves this issue by learning a generalizable plastic energy whose derivative closely matches the analytical behavior of plastic forces. Our method, for the first time, enables the simulation of a wide range of arbitrary elasticity-plasticity combinations using time step-independent, unconditionally stable optimization-based time integrators. We demonstrate the efficacy of our method by learning and producing challenging 2D and 3D effects of metal, sand, and snow with complex dynamics. | Accept | After rebuttal, all reviewers vote to accept this submission due to its technical novelty, presentation, and wide potential applications. The AC agrees. Congratulations. | train | [
"ewUq0e_JEOQ",
"AgK4Q7aicox",
"SzxvNFueGbh",
"vqd3V-HaXVS",
"MjJO4gWkAF",
"nemohWqU4Ix",
"JV4nwl4Z07B",
"pzaiVup7d32",
"xV3VYN6LaLy",
"rUJ7l63GXyh",
"iOvbiClJup"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have addressed my concerns thoroughly. After reading all comments from other reviewers, I decided to change my score to 6.",
" Thank you for answering my questions. I don't have major concerns about this paper now and have raised my score to 6.",
" Thanks for your detailed and informative reviews.... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"SzxvNFueGbh",
"vqd3V-HaXVS",
"iOvbiClJup",
"rUJ7l63GXyh",
"xV3VYN6LaLy",
"pzaiVup7d32",
"nips_2022__WqHmwoE7Ud",
"nips_2022__WqHmwoE7Ud",
"nips_2022__WqHmwoE7Ud",
"nips_2022__WqHmwoE7Ud",
"nips_2022__WqHmwoE7Ud"
] |
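The PlasticityNet record above learns a "plastic energy whose derivative closely matches the analytical behavior of plastic forces", so that optimization-based time integrators can minimize an energy instead of integrating non-conservative forces directly. The core training signal is a gradient-matching loss, sketched below with a toy force rule standing in for an actual plasticity model:

```python
import torch, torch.nn as nn

# Learn a scalar energy E_theta(x) whose spatial gradient reproduces a
# given force rule: force = -dE/dx. Network size and the force rule are
# illustrative only.
energy = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def target_force(x):                      # hypothetical analytic force
    return -x * torch.clamp(x.norm(dim=-1, keepdim=True) - 1.0, min=0.0)

opt = torch.optim.Adam(energy.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.randn(256, 2, requires_grad=True)
    E = energy(x).sum()
    (grad_x,) = torch.autograd.grad(E, x, create_graph=True)
    loss = ((-grad_x - target_force(x)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```

Once such an energy exists, an unconditionally stable integrator can advance the state by minimizing inertia-plus-energy per step, which is the property the abstract emphasizes.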
nips_2022_-Zzi_ZmlDiy | Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning | Large-scale video-language pre-training has shown significant improvement in video-language understanding tasks. Previous studies of video-language pretraining mainly focus on short-form videos (i.e., within 30 seconds) and sentences, leaving long-form video-language pre-training rarely explored. Directly learning representation from long-form videos and language may benefit many long-form
video-language understanding tasks. However, it is challenging due to the difficulty of modeling long-range relationships and the heavy computational burden caused by more frames. In this paper, we introduce a Long-Form VIdeo-LAnguage pre-training model (LF-VILA) and train it on a large-scale long-form video and paragraph dataset constructed from an existing public dataset. To effectively capture
the rich temporal dynamics and to better align video and language in an efficient end-to-end manner, we introduce two novel designs in our LF-VILA model. We first propose a Multimodal Temporal Contrastive (MTC) loss to learn the temporal relation across different modalities by encouraging fine-grained alignment between long-form videos and paragraphs. Second, we propose a Hierarchical Temporal Window Attention (HTWA) mechanism to effectively capture long-range dependency while reducing computational cost in Transformer. We fine-tune the pre-trained LF-VILA model on seven downstream long-form video-language understanding tasks of paragraph-to-video retrieval and long-form video question-answering, and achieve new state-of-the-art performances. Specifically, our model achieves 16.1% relative improvement on ActivityNet paragraph-to-video retrieval task and 2.4% on How2QA task, respectively. We release our code, dataset, and pre-trained models at https://github.com/microsoft/XPretrain.
 | Accept | After a discussion, the reviewers reach a consensus toward acceptance. The author rebuttal resolved most of the concerns that the reviewers raised and the reviewers find it valuable to the community. Together with some of the reviewers, I also appreciate that the paper explores long-term video modelling, which is an under-explored field. In addition to the long-form video pretraining technique, the paper also introduces a pretraining dataset with long-form videos, and shows that the pretrained model achieves competitive performance on multiple retrieval and VideoQA benchmarks. | train | [
"p_8dhRP6Dbg",
"UaIge7Keeb-",
"hc40zT3MHtD",
"A5xKRfFXa9",
"yDcioKE-yz",
"BX4Eu31Mqs",
"1HMAnVN79Aa",
"g5NZC1hcmL",
"Fnvcj0XcMtE",
"_BYbRncQyWb",
"ZHdlA-geMp",
"_e1Ry00ZhsY",
"ZlOvS-9Phd",
"kBX5v-e8bpn",
"jBIAsvCTohi",
"HvSH-1UIp5",
"Oe8xOFCusC",
"PbiRIEKwPAQ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your recognition of this paper and this discussion. We will carefully add the discussion to the final version. Below is our further response to your concern:\n\n**Unfair to not train the other methods with the same dataset.**\n\nWe conduct ablation studies on sampled 1M video-paragraph pai... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3,
3
] | [
"hc40zT3MHtD",
"A5xKRfFXa9",
"ZHdlA-geMp",
"_e1Ry00ZhsY",
"1HMAnVN79Aa",
"g5NZC1hcmL",
"kBX5v-e8bpn",
"_BYbRncQyWb",
"HvSH-1UIp5",
"jBIAsvCTohi",
"PbiRIEKwPAQ",
"Oe8xOFCusC",
"HvSH-1UIp5",
"jBIAsvCTohi",
"nips_2022_-Zzi_ZmlDiy",
"nips_2022_-Zzi_ZmlDiy",
"nips_2022_-Zzi_ZmlDiy",
"ni... |
nips_2022_vF3WefcoePW | Deep Differentiable Logic Gate Networks | Recently, research has increasingly focused on developing efficient neural network architectures. In this work, we explore logic gate networks for machine learning tasks by learning combinations of logic gates. These networks comprise logic gates such as "AND" and "XOR", which allow for very fast execution. The difficulty in learning logic gate networks is that they are conventionally non-differentiable and therefore do not allow training with gradient descent. Thus, to allow for effective training, we propose differentiable logic gate networks, an architecture that combines real-valued logics and a continuously parameterized relaxation of the network. The resulting discretized logic gate networks achieve fast inference speeds, e.g., beyond a million images of MNIST per second on a single CPU core. | Accept | The paper proposes to train classifiers for binary data based on logic gate networks, also called arithmetic circuits or algebraic circuits in the knowledge representation and complexity literature [1,2]. The main motivation is that, once trained, they can be run fast by leveraging only binary operations. The authors achieve this by using the classical way to relax logic operations, via a first-order approximation with fuzzy norms.
The reviewers found the proposed approach interesting and potentially useful for resource-constrained scenarios (where resources are scarce at inference time rather than at training time). I agree with them that this paper fits the tinyML literature more than the knowledge representation or reasoning (nowadays falling under the neuro-symbolic) one. However, I believe the paper needs to fix certain aspects for the camera ready.
First, it is not clear why the authors are selecting the set of 16 logical operators, whereas, ideally, only 2 are necessary (e.g., AND, or product, and NOT, or negation), as all the others can be retrieved as combinations of these two, as Table 1 illustrates. Furthermore, as using all 16 operators is the major bottleneck that makes the networks very slow during training, an ablation test with an (even non-minimal) subset of the operators is recommended. Clearly there is a trade-off in terms of accuracy w.r.t. the subset of ops used for a fixed-depth network. And this trade-off needs to be made clear in the contribution (The authors can elaborate more than only saying "Omitting subsets of the operators decreased performance in all of our experiments.").
Second, there are a number of claims that need to be fixed. For example, the issue with differentiability with the relaxed functions is that gradients are constant everywhere, and only not defined on a single zero-measure point (between the zero and one regions). This is also one of the issues when trying to learn decision tree structures in a differentiable way. Additionally, the discussion on decision trees being very different from logic gate networks/algebraic circuits is very misleading and needs to be fixed. In fact, a decision node in a tree can be easily represented as a sum unit over some product units or equivalently OR and AND units in propositional logic or SMT [3, 4].
Third, since during training the authors are effectively using a probabilistic mixture model (Eqs 2,3), they are training something similar to a sum-product network (without any relevant structural properties, and with negative weights) [5,1]. Therefore, the effect of the temperature parameter is important in retrieving a binary operator and is not sufficiently investigated. An ablation study is recommended.
Lastly, for the larger experiments, it seems the authors are reporting the results of competitors which, however, use a different preprocessing (e.g. they use more gray-scale values for CIFAR). This makes the space and time consumption not comparable across models and requires a clarification.
I advise the authors to take care of these aspects to make the paper stronger.
[1] Darwiche and Marquis, "A Knowledge Compilation Map", 2001
[2] Arora and Barak, "Computational complexity: a modern approach" 2009.
[3] Khosravi, et al. "Handling missing data in decision trees: A probabilistic approach." 2020
[4] Devos, et al. "Verifying tree ensembles by reasoning about potential instances." 2021
[5] Choi, et al. "Probabilistic Circuits: A Unifying Framework for Tractable Probabilistic Models" 2020 | train | [
"ZMmIAIKloSS",
"wszael4iJilI",
"96zb-1KHFUOs",
"6nQZI8NjnSv",
"Dq9IdMctg4GC",
"Jv_WsbCHZnk",
"MPhmtGfb0aJ",
"w3dKhKOkb-_",
"daO6Wa5yWgG",
"VshK7uUZrE",
"dKuWsVxdR21",
"S4Cshy2yF-",
"1uzSZP5OJfD",
"By-RPt_OTsR",
"Yb1DQd0pho",
"ivfTmjq4giQ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for addressing my questions. Upgraded score to 8.",
" Thank you for following up.\n\n> I do find the methodology applied here is very similar to that in the previous work.\n\nCould you please point us to the work that you find most similar in methodology compared to our work? \nFor this, we ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
3,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"VshK7uUZrE",
"96zb-1KHFUOs",
"dKuWsVxdR21",
"Dq9IdMctg4GC",
"Jv_WsbCHZnk",
"S4Cshy2yF-",
"daO6Wa5yWgG",
"nips_2022_vF3WefcoePW",
"ivfTmjq4giQ",
"Yb1DQd0pho",
"By-RPt_OTsR",
"1uzSZP5OJfD",
"nips_2022_vF3WefcoePW",
"nips_2022_vF3WefcoePW",
"nips_2022_vF3WefcoePW",
"nips_2022_vF3WefcoePW... |
nips_2022_1GAjC_FauE | SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training | Federated Learning allows the training of machine learning models by using the computation and private data resources of many distributed clients. Most existing results on Federated Learning (FL) assume the clients have ground-truth labels. However, in many practical scenarios, clients may be unable to label task-specific data due to a lack of expertise or resources. We propose SemiFL to address the problem of combining communication-efficient FL such as FedAvg with Semi-Supervised Learning (SSL). In SemiFL, clients have completely unlabeled data and can train multiple local epochs to reduce communication costs, while the server has a small amount of labeled data. We provide a theoretical understanding of the success of data augmentation-based SSL methods to illustrate the bottleneck of a vanilla combination of communication-efficient FL with SSL. To address this issue, we propose alternate training to 'fine-tune global model with labeled data' and 'generate pseudo-labels with the global model.' We conduct extensive experiments and demonstrate that our approach significantly improves the performance of a labeled server with unlabeled clients training with multiple local epochs. Moreover, our method outperforms many existing SSFL baselines and performs competitively with the state-of-the-art FL and SSL results. | Accept | There was some disagreement in reviewer scores for this paper, which proposes an algorithm for semi-supervised federated learning. The recurring concerns centered on a few points: (i) the lack of theoretical analysis of the proposed algorithm, (ii) the apparent disconnect of the presented analysis from the setting under consideration, (iii) limited technical novelty, (iv) potentially missing baselines, and (v) lack of clarity in the text. Of these, the responses to (i), (iii), and (v) seem reasonable: certainly a systematic analysis of a new algorithm is welcome, but hardly straightforward; simple ideas are to be commended if they definitively solve a challenging problem; and the proposed edits do go some way to addressing the reviewer concerns.
Points (ii) and (iv) remain less clear. For the former, it is a bit confusing to present a generic analysis of SSL techniques, which does not appear to directly influence the proposed algorithm design. For the latter, there are some missing references at a minimum, e.g., Zhang et al., "Benchmarking Semi-supervised Federated Learning"; Itahara et al., "Distillation-Based Semi-Supervised Federated Learning for Communication-Efficient Collaborative Training with Non-IID Private Data". Further, some of the results in Table 1 do not seem to align with what has been previously reported in the literature; e.g., Table 1 of the FedMatch paper. However, it appears that the authors have at least compared against the most recent Fed-SSL techniques, with unsupervised and self-supervised techniques being not directly in scope.
Overall the paper is a borderline case. The recurring critique of limited depth suggests that a compelling empirical analysis would be appropriate. The explicit evaluation of Fed-SSL methods against a labelled-only baseline, and showing existing methods cannot outperform this, is interesting. The paper could be strengthened by compressing Sec 3.2 and focussing instead on a better understanding of why existing Fed-SSL methods underperform. Such insights would be more broadly insightful to the community, and be a useful contribution. | train | [
"BwXWa2Jbw6T",
"UV4OBoN3L4",
"LSiytYf9ul",
"nSi2BvNH08E",
"zCvETfYJul4",
"mmw3sDhZeY",
"84iq9x_k8Tv",
"UopTphS6C5-",
"WvZnM89NpkBY",
"bFH73SuFITk",
"mLtCvu-7iJk",
"oJEDFid0l2",
"0S9uuqE2Rkn",
"jS2dcNDKxpP",
"r-tTUmYiiG",
"0gFxe_wfke",
"pplCF8wl4yH",
"7OwY7yyhGGt",
"7E1JgMiGRqm",
... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for providing detailed responses. My curiosity has been addressed. Thank you.",
" Dear Reviewer Y1ff,\n\nWe apologize for any inconvenience that our message may cause in advance. Again, we would like to thank you for the time you dedicated to reviewing our paper and your valuable comments. W... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
5,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"0S9uuqE2Rkn",
"BZdn8whOo12",
"Wu9AxShUlld",
"7OwY7yyhGGt",
"mmw3sDhZeY",
"jS2dcNDKxpP",
"BZdn8whOo12",
"Wu9AxShUlld",
"7E1JgMiGRqm",
"7OwY7yyhGGt",
"oJEDFid0l2",
"BZdn8whOo12",
"Wu9AxShUlld",
"r-tTUmYiiG",
"7E1JgMiGRqm",
"pplCF8wl4yH",
"7OwY7yyhGGt",
"nips_2022_1GAjC_FauE",
"nip... |
nips_2022_4JYq_Kw4zw | Non-Stationary Bandits under Recharging Payoffs: Improved Planning with Sublinear Regret | The stochastic multi-armed bandit setting has been recently studied in the non-stationary regime, where the mean payoff of each action is a non-decreasing function of the number of rounds passed since it was last played. This model captures natural behavioral aspects of the users which crucially determine the performance of recommendation platforms, ad placement systems, and more. Even assuming prior knowledge of the mean payoff functions, computing an optimal plan in the above model is NP-hard, while the state-of-the-art is a $1/4$-approximation algorithm for the case where at most one arm can be played per round. We first focus on the setting where the mean payoff functions are known. In this setting, we significantly improve the best-known guarantees for the planning problem by developing a polynomial-time $(1-{1}/{e})$-approximation algorithm (asymptotically and in expectation), based on a novel combination of randomized LP rounding and a time-correlated (interleaved) scheduling method. Furthermore, our algorithm achieves improved guarantees -- compared to prior work -- for the case where more than one arm can be played at each round. Moving to the bandit setting, when the mean payoff functions are initially unknown, we show how our algorithm can be transformed into a bandit algorithm with sublinear regret. | Accept | All reviewers agree on the merit of the work. | val | [
"65je8A8L0Pl",
"7xG8fk1lPbZ",
"VjhSr8Mvj1b",
"N7CRw8Gq0K8",
"7h6bCDNYkst",
"JliyvVbSxN0"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank the reviewer for the valuable feedback. \n\nThe main technical difficulties we faced in improving the planning stage can be summarized as follows:\n\n* **Proving that any extreme point solution of our LP relaxation is sparse and, in particular, almost-delay-feasible** (see Lemma 3.5). This ... | [
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
2,
3,
5
] | [
"JliyvVbSxN0",
"7h6bCDNYkst",
"N7CRw8Gq0K8",
"nips_2022_4JYq_Kw4zw",
"nips_2022_4JYq_Kw4zw",
"nips_2022_4JYq_Kw4zw"
] |
nips_2022_xNeAhc2CNAl | Extreme Compression for Pre-trained Transformers Made Simple and Efficient | Extreme compression, particularly ultra-low bit precision (binary/ternary) quantization, has been proposed to fit large NLP models on resource-constrained devices.
However, to preserve the accuracy for such aggressive compression schemes, cutting-edge methods usually introduce complicated compression pipelines, e.g., multi-stage expensive knowledge distillation with extensive hyperparameter tuning.
Also, they oftentimes focus less on smaller transformer models that have already been heavily compressed via knowledge distillation and lack a systematic study to show the effectiveness of their methods.
In this paper, we perform a comprehensive and systematic study to measure the impact of many key hyperparameters and training strategies from previous works.
As a result, we find that previous baselines for ultra-low bit precision quantization are significantly under-trained.
Based on our study, we propose a simple yet effective compression pipeline for extreme compression.
Our simplified pipeline demonstrates that
(1) we can skip the pre-training knowledge distillation to obtain a 5-layer BERT while achieving better performance than previous state-of-the-art methods, like TinyBERT;
(2) extreme quantization plus layer reduction is able to reduce the model size by 50x, resulting in new state-of-the-art results on GLUE tasks. | Accept | Reviewers agree that this paper presents a systematic study on the impact of hyper-parameters and training strategies of previous works. Based on those empirical observations, they propose a simplified model with layer reduction and single-stage distillation, which does not rely on a complicated and ad-hoc training strategy. Extensive experiments are conducted with thorough comparisons with existing works. The authors also clearly point out their current limitations.
The major concern is that this paper focuses more on discussing the effectiveness of training strategies in previous methods, while the theoretical contribution is somewhat limited. It would be much better if the authors could explain their observations (more training epochs are needed while additional distillation stages can be discarded) from a theoretical perspective, although this may be far out of the scope of this work. Nonetheless, this paper presents a valuable empirical study of existing Transformer compression methods and may inspire follow-up research; therefore, the AC recommends acceptance.
| test | [
"GkBY2eztZT",
"gIHJ70u5hM-",
"Ph7AwzNz0qg",
"93G2KH_wmMS",
"dxIKRPwKOuEc",
"g1F6F6wsFD5",
"x3lEIdu_xbD",
"C5Q4UGeS_1F",
"dqwOlAd52MF",
"fFeBm34Vnfv",
"hiYK9pL87jS"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your answers. However, for the question of \"why increased training time helps\", I was looking for some theoretical reasoning or even speculation. The answer you gave was just more empirical support for the claim. If one knew the reason for this effect, it could potentially be applied to other prob... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"dxIKRPwKOuEc",
"93G2KH_wmMS",
"hiYK9pL87jS",
"fFeBm34Vnfv",
"g1F6F6wsFD5",
"dqwOlAd52MF",
"C5Q4UGeS_1F",
"nips_2022_xNeAhc2CNAl",
"nips_2022_xNeAhc2CNAl",
"nips_2022_xNeAhc2CNAl",
"nips_2022_xNeAhc2CNAl"
] |
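The single-stage distillation that this record's simplified pipeline relies on reduces to a temperature-scaled KL term between teacher and student logits; a minimal sketch follows, with the temperature value chosen only for illustration.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=2.0):
    # Soft-label knowledge distillation (Hinton et al.): KL divergence
    # between temperature-scaled distributions, rescaled by T^2.
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    p_t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
```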
nips_2022_PwlW5Jri1Xt | Meta-Auto-Decoder for Solving Parametric Partial Differential Equations | Many important problems in science and engineering require solving the so-called parametric partial differential equations (PDEs), i.e., PDEs with different physical parameters, boundary conditions, shapes of computation domains, etc. Recently, building learning-based numerical solvers for parametric PDEs has become an emerging new field. One category of methods, such as the Deep Galerkin Method (DGM) and Physics-Informed Neural Networks (PINNs), aims to approximate the solution of the PDEs. They are typically unsupervised and mesh-free, but require going through the time-consuming network training process from scratch for each set of parameters of the PDE. Another category of methods, such as the Fourier Neural Operator (FNO) and Deep Operator Network (DeepONet), tries to approximate the solution mapping directly. Being fast with only one forward inference for each PDE parameter without retraining, they often require a large corpus of paired input-output observations drawn from numerical simulations, and most of them need a predefined mesh as well. In this paper, we propose Meta-Auto-Decoder (MAD), a mesh-free and unsupervised deep learning method that enables the pre-trained model to be quickly adapted to equation instances by implicitly encoding (possibly heterogeneous) PDE parameters as latent vectors. The proposed method MAD can be interpreted by manifold learning in infinite-dimensional spaces, granting it a geometric insight. Extensive numerical experiments show that the MAD method exhibits faster convergence than other deep learning-based methods without losing accuracy. | Accept | The paper considers a meta-learning approach to solve families of PDEs (i.e. learning a solution operator). The main idea of the paper is to parametrize the solution of a member in the PDE family as being a function of a learned latent representation $z$ for that PDE. (Precisely, at a point $x$, they parametrize the solution as $f_{\theta}(x, z)$ where $\theta$ is globally learned, and $z$ is specific to a particular PDE.)
Beyond this, the parameters are fit as usual by minimizing some variational loss (e.g. $l_2$ error), and adapted to a new instance of a PDE as usual (by GD on an appropriately regularized $l_2$ loss).
The idea is fairly straightforward. It seems to perform well on small-scale data (e.g. families of Burgers' equation), compared to training from scratch and DeepONets, but comparisons on more challenging datasets are lacking. The authors also pointed out that fine-tuning (i.e. specializing to a new equation) is where the method shines --- which is reasonable given the parametric form that is fit.
| train | [
"mODkZXYMBR0",
"En1xuLg0aFW",
"wm75XlLTJ6t",
"cjC_U3kIzjK",
"WN_oEHlyjjS",
"mmESqqpCOSL",
"-UGuxEwf9h",
"3a1nGhtWKXE",
"F7j3xaqTLsL",
"UMg4CHeD79o",
"LGSkGnNPGjj",
"KD1qaevGdI_",
"_Q1KoxJPPiZ",
"7XiIq-zmH_7"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks and we will update the paper accordingly. ",
" Hi, thanks for your reponse for resolving my concerns and I have raised my score. I suggest these clarifications will be incorporated into the final version to make it more clear. ",
" Thank you for your careful review of our work and constructive suggesti... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"En1xuLg0aFW",
"-UGuxEwf9h",
"WN_oEHlyjjS",
"mmESqqpCOSL",
"UMg4CHeD79o",
"LGSkGnNPGjj",
"7XiIq-zmH_7",
"_Q1KoxJPPiZ",
"_Q1KoxJPPiZ",
"_Q1KoxJPPiZ",
"KD1qaevGdI_",
"nips_2022_PwlW5Jri1Xt",
"nips_2022_PwlW5Jri1Xt",
"nips_2022_PwlW5Jri1Xt"
] |
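The meta-review above describes MAD's adaptation step as gradient descent on a regularized loss over a per-instance latent only. A hedged sketch of that auto-decoding loop is below; `f_theta` and `residual_loss` are hypothetical stand-ins for the shared network and the PDE residual objective, and the regularization weight is an illustrative assumption.

```python
import torch

def adapt_latent(f_theta, residual_loss, z_dim=64, steps=100, lr=1e-2):
    # Auto-decoding: freeze the shared weights theta and fit a fresh
    # per-instance latent z on the new equation's (regularized) loss.
    z = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = residual_loss(lambda x: f_theta(x, z)) + 1e-4 * z.pow(2).sum()
        loss.backward()
        opt.step()
    return z.detach()
```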
nips_2022_zbt3VmTsRIj | Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees | Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy that best fits observed sequences of states and actions implemented by an expert. Many algorithms for IRL have an inherent nested structure: the inner loop finds the optimal policy given parametrized rewards while the outer loop updates the estimates towards optimizing a measure of fit. For high dimensional environments such nested-loop structure entails a significant computational burden. To reduce the computational burden of a nested loop, novel methods such as SQIL \cite{reddy2019sqil} and IQ-Learn \cite{garg2021iq} emphasize policy estimation at the expense of reward estimation accuracy. However, without accurate estimated rewards, it is not possible to do counterfactual analysis such as predicting the optimal policy under different environment dynamics and/or learning new tasks. In this paper we develop a novel {\em single-loop} algorithm for IRL that does not compromise reward estimation accuracy. In the proposed algorithm, each policy improvement step is followed by a stochastic gradient step for likelihood maximization. We show that the proposed algorithm provably converges to a stationary solution with a finite-time guarantee. If the reward is parameterized linearly we show the identified solution corresponds to the solution of the maximum entropy IRL problem. Finally, by using robotics control problems in Mujoco and their transfer settings, we show that the proposed algorithm achieves superior performance compared with other IRL and imitation learning benchmarks. | Accept | This paper presents a new single loop IRL algorithm that avoids the typical policy/reward optimization loop in IRL algorithms, without sacrificing the accuracy of the learned reward function. This is achieved through the use of stochastic gradients of the likelihood function. The proposed algorithm is proved to converge to a stationary solution with a finite-time guarantee. Experiments on some problems in MuJoCo show that the proposed algorithm can outperform existing solutions. The reviewers all agree that the paper is well-written, the algorithm is sufficiently new, and the experiments are compelling. There are some concerns that experimental evaluation is limited to MuJoCo locomotion tasks which can sometimes be too simple to draw strong conclusions. | train | [
"EgNgwf-puoF",
"XtOGSKLCCIL",
"sADQykI2aZR",
"OfhbvX5XGlt",
"ngC-6a_O_r",
"YUFDuJCgoOU",
"p1NNsA9mYqH",
"5Vs6m1MKYbc",
"D1G5vkwAEx0w",
"vvwScRJdSP1",
"UjtKXEx2f-",
"VXTpCNXP9kW",
"sCN7NbElibQ_",
"CpMhOylg3J",
"tQXSNBBvmKX",
"rQOPAk4vTou",
"xTROTs6DQ1",
"zm5Q7uKdXvU",
"KCS-jU1EG8G... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_... | [
" We sincerely appreciate you for taking time to review our paper and thank you for recognizing the contributions of this work.",
" Thank you for your clarification and detailed explanation and for running an additional experiment. I agree that the open-loop vs closed-loop distinction is important.",
" Thank yo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"vvwScRJdSP1",
"OfhbvX5XGlt",
"ngC-6a_O_r",
"YUFDuJCgoOU",
"UjtKXEx2f-",
"CpMhOylg3J",
"EmashNvo1_X",
"1LSosV6dtK",
"KCS-jU1EG8G",
"xTROTs6DQ1",
"VXTpCNXP9kW",
"sCN7NbElibQ_",
"EmashNvo1_X",
"tQXSNBBvmKX",
"rQOPAk4vTou",
"KCS-jU1EG8G",
"h_dtOBufpk",
"1LSosV6dtK",
"nips_2022_zbt3V... |
nips_2022_hk8v6BoKs-w | CoNSoLe: Convex Neural Symbolic Learning | Learning the underlying equation from data is a fundamental problem in many disciplines. Recent advances rely on Neural Networks (NNs) but do not provide theoretical guarantees in obtaining the exact equations owing to the non-convexity of NNs. In this paper, we propose Convex Neural Symbolic Learning (CoNSoLe) to seek convexity under mild conditions. The main idea is to decompose the recovering process into two steps and convexify each step. In the first step of searching for the right symbols, we convexify the deep Q-learning. The key is to maintain double convexity for both the negative Q-function and the negative reward function in each iteration, leading to provable convexity of the negative optimal Q function to learn the true symbol connections. Conditioned on the exact searching result, we construct a Locally Convex equation Learning (LoCaL) neural network to convexify the estimation of symbol coefficients. With such a design, we quantify a large region with strict convexity in the loss surface of LoCaL for commonly used physical functions. Finally, we demonstrate the superior performance of the CoNSoLe framework over the state-of-the-art on a diverse set of datasets. | Accept | The authors propose to learn coefficients and symbol connections in the problem of Symbolic Regression (SR) by combining Q-learning and neural networks. They rely on the convexity properties of ICNNs to ensure that the Q function and the rewards R() are convex wrt the state S and the action A of the MDP they define. They use the convexity properties and the determinism of the MDP to derive several guarantees on their method - namely, that the exact symbols can be recovered for noiseless datasets because the problem is locally convex in a neighborhood of the true minimum.
There were a number of questions and concerns regarding the theoretical results raised by the reviewers, which were addressed during the discussion period. I'd like to thank the authors for the important work they put in during the rebuttal. The quality of the paper has improved a lot since the initial submission and the experimental results have improved. | train | [
"ghw8MCTdHT2",
"TvrBeTTwvL",
"WgwF3GAUhd",
"telc5R3kNF",
"kyZXHmxEOK0",
"XqhXHnwygg5",
"rZgNXnU2QGJ",
"ac0-S4INKtc",
"PJvEVttN06o",
"jfaOUvxqnxR",
"OtfBsq0zsaq",
"Fy0X-akEazh",
"mmJIA27FYvQ",
"rqys-aPl03sw",
"S8i1xS_lTyj",
"aUlLmcURhk",
"kAST6V3fW8",
"gdMbQ8X5xd",
"BJn_FNyo0gI",
... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official... | [
" We are really grateful for your constructive comments that largely increase the effectiveness and soundness of our work. We really enjoy the discussion with you these days. Thanks a lot for your consideration and appreciation. ",
" $\\textbf{Q3: A thorough re-evaluation to make core ideas more digestible.}$\n\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2,
3
] | [
"kyZXHmxEOK0",
"WgwF3GAUhd",
"telc5R3kNF",
"kAST6V3fW8",
"nips_2022_hk8v6BoKs-w",
"rZgNXnU2QGJ",
"ac0-S4INKtc",
"PJvEVttN06o",
"jfaOUvxqnxR",
"OtfBsq0zsaq",
"Fy0X-akEazh",
"mmJIA27FYvQ",
"rqys-aPl03sw",
"gdMbQ8X5xd",
"eB7FyhlcH7m",
"wVJhoEv3zw2",
"Gz6dQcjXcws",
"BJn_FNyo0gI",
"Wh... |
nips_2022_U138nQxHh3 | Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation | We investigate the robustness of vision transformers (ViTs) through the lens of their special patch-based architectural structure, i.e., they process an image as a sequence of image patches. We find that ViTs are surprisingly insensitive to patch-based transformations, even when the transformation largely destroys the original semantics and makes the image unrecognizable by humans. This indicates that ViTs heavily use features that survived such transformations but are generally not indicative of the semantic class to humans. Further investigations show that these features are useful but non-robust, as ViTs trained on them can achieve high in-distribution accuracy, but break down under distribution shifts. From this understanding, we ask: can training the model to rely less on these features improve ViT robustness and out-of-distribution performance? We use the images transformed with our patch-based operations as negatively augmented views and offer losses to regularize the training away from using non-robust features. This is a complementary view to existing research that mostly focuses on augmenting inputs with semantic-preserving transformations to enforce models' invariance. We show that patch-based negative augmentation consistently improves robustness of ViTs on ImageNet-based robustness benchmarks across 20+ different experimental settings. Furthermore, we find our patch-based negative augmentation is complementary to traditional (positive) data augmentation techniques and batch-based negative examples in contrastive learning. | Accept | The reviewers agree that this is an interesting work. Some concerns were expressed, especially regarding the limited gain of the method and the generality to other types of ViTs. But overall the reviewers recognize the technical contribution from this paper and the AC agrees with their decisions. | train | [
"NgQ-uBpEKwx",
"3TooYXGVfiI",
"zdU3zJg4R5c",
"NGAza26LUB",
"Cknh5pwISR-",
"07YCZyKjo_",
"66IiWXJSa5-",
"vh8d1_1ebNj",
"yCOzmR5M_n8",
"O8HCqUbkjIK",
"GMGFhCnP1a5",
"FeT0A_dDohk",
"V5YLegKmexW"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks very much for your suggestion and we will add a section displaying specific images that can be recognized by our method while others fail. To briefly give you an impression, our negative augmentation can effectively mitigate models' reliance on biases, e.g., texture bias, color bias, local bias, etc. For e... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
4
] | [
"NGAza26LUB",
"zdU3zJg4R5c",
"yCOzmR5M_n8",
"07YCZyKjo_",
"vh8d1_1ebNj",
"O8HCqUbkjIK",
"GMGFhCnP1a5",
"V5YLegKmexW",
"FeT0A_dDohk",
"nips_2022_U138nQxHh3",
"nips_2022_U138nQxHh3",
"nips_2022_U138nQxHh3",
"nips_2022_U138nQxHh3"
] |
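One concrete instance of the patch-based transformations studied in the record above is patch shuffling, which destroys global semantics while preserving patch-level statistics. The sketch below is a generic implementation, with the patch size as an illustrative assumption, not the paper's exact augmentation code.

```python
import torch

def shuffle_patches(img, patch=16):
    # Split a batch of images into non-overlapping patches, permute the
    # patches randomly, and stitch the images back together.
    B, C, H, W = img.shape
    h, w = H // patch, W // patch
    x = img.reshape(B, C, h, patch, w, patch)
    x = x.permute(0, 2, 4, 1, 3, 5).reshape(B, h * w, C, patch, patch)
    x = x[:, torch.randperm(h * w)]
    x = x.reshape(B, h, w, C, patch, patch).permute(0, 3, 1, 4, 2, 5)
    return x.reshape(B, C, H, W)
```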
nips_2022_62GLWUoOLb5 | Scalable Distributional Robustness in a Class of Non-Convex Optimization with Guarantees | Distributionally robust optimization (DRO) has shown a lot of promise in providing robustness in learning as well as sample-based optimization problems. We endeavor to provide DRO solutions for a class of sum-of-fractionals, non-convex optimization problems used for decision making in prominent areas such as facility location and security games. In contrast to previous work, we find it more tractable to optimize the equivalent variance regularized form of DRO rather than the minimax form. We transform the variance regularized form to a mixed-integer second-order cone program (MISOCP), which, while guaranteeing global optimality, does not scale enough to solve problems with real-world datasets. We further propose two abstraction approaches based on clustering and stratified sampling to increase scalability, which we then use for real-world datasets. Importantly, we provide global optimality guarantees for our approach and show experimentally that our solution quality is better than the locally optimal ones achieved by state-of-the-art gradient-based methods. We experimentally compare our different approaches and baselines and reveal nuanced properties of a DRO solution. | Accept | This paper studies distributionally robust optimization for a class of non-convex problems motivated by applications to decision problems such as security games and facility location. The reviewers agree that the paper is novel, studies an interesting topic, and proposes non-trivial algorithms with sound analysis. Reviewer bkf3 writes that the clustering and stratified sampling scaling approaches are simple yet elegant solutions. The main common concern among all reviewers is that some sections of the paper are compactly written and somewhat difficult to read. Based on the reviewer feedback, this paper is above the bar for NeurIPS.
Below are a number of typos and clarifications that should be made in the final version of the paper:
- Lines 37, 43: It looks like some citations here need \citep instead of \citet.
- Line 51: Should “location facility” be “facility location”?
- Line 91: Missing 1/n factor (though obviously this does not affect the optimal decision variables z).
- Line 93, 96: I think \mathcal{P}_{\zeta, n} should be \mathcal{P}_{\zeta, N}. I realize this notation is borrowed from the Duchi and Namkoong paper, but in their setting the number of samples was little n.
- Paragraph starting on line 86: I think it would be worth clarifying the various distributions at play. My understanding is that there is an (currently unmentioned) distribution D over feature vectors. The distribution P is the distribution over x* corresponding to the following sampling procedure: Sample a feature vector b from D, then set x* = f*(b), where f is the Bayes optimal classifier. Next, fix a sample b_1, …, b_N from D. hat P is the empirical distribution of f(b_i), where f is a learned approximation to f*, and P* is the empirical distribution of f*(b_i).
- Theorem 1: I think it should be explicitly stated that this theorem applies when the function f is an empirical risk minimizer.
- Line 107: I think \mathcal{H} should be the function class \mathcal{F} from the previous paragraph.
- Equation (1) and line 132: variable i is used twice for different purposes (i.e., two different sum indices in equation (1) and as the index of l_i and the sum index in line 132).
- Line 141: Should this be a piecewise *constant* approximation? I.e., if we use the components of the v variable as indicators for which piece of the approximation I am in, an expression of the form dot(a,v) + b can only represent piecewise constant functions instead of piecewise linear functions. I might not be seeing the intended transformation, but nevertheless it would be good to clarify this.
- Line 174: Are the functions n and d assumed to be Lipschitz with respect to both arguments, or just x?
- Theorem 2: It might be good to clarify how the bound depends on the Lipschitz constant of the functions n and d. I assume this contributes to the number of pieces needed in the piecewise linear approximation.
| train | [
"-B6s7duxhCP",
"SdSBnB6lx22",
"ITVy_J0YKyR",
"falNm4_neA4",
"Nbd0yBa8uw7",
"2REeinKVExf",
"YLVZdX_28Wr",
"cBQUzgGoWgq",
"HPzszYSqowt",
"u71tQdMHiur",
"03_4d6IZNI2",
"pzSvHKX8nkp"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the kind words and for raising the scores.",
" Thank you for your response. The response addressed several concerns I had and I have raised the score.",
" We thank the reviewer for the kind words and for raising the scores.",
" Thank you for your response.\nAs an outsider to the fi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
2,
1
] | [
"SdSBnB6lx22",
"YLVZdX_28Wr",
"falNm4_neA4",
"Nbd0yBa8uw7",
"pzSvHKX8nkp",
"03_4d6IZNI2",
"u71tQdMHiur",
"HPzszYSqowt",
"nips_2022_62GLWUoOLb5",
"nips_2022_62GLWUoOLb5",
"nips_2022_62GLWUoOLb5",
"nips_2022_62GLWUoOLb5"
] |
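The abstract above prefers the variance-regularized form of DRO over the minimax form; a common surrogate of that form (cf. Duchi & Namkoong) is sketched below. The constant in front of the variance term depends on the ambiguity-set radius, so `rho` is only an illustrative parameter.

```python
import numpy as np

def variance_regularized_loss(losses, rho=1.0):
    # Mean loss plus a penalty of order sqrt(rho * Var / n), the tractable
    # stand-in for the chi-square-ball worst-case expectation.
    n = len(losses)
    return losses.mean() + np.sqrt(rho * losses.var(ddof=1) / n)
```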
nips_2022_b8fgqTCBJe | Finding and Listing Front-door Adjustment Sets | Identifying the effects of new interventions from data is a significant challenge found across a wide range of the empirical sciences. A well-known strategy for identifying such effects is Pearl's front-door (FD) criterion. The definition of the FD criterion is declarative, only allowing one to decide whether a specific set satisfies the criterion. In this paper, we present algorithms for finding and enumerating possible sets satisfying the FD criterion in a given causal diagram. These results are useful in facilitating the practical applications of the FD criterion for causal effects estimation and helping scientists to select estimands with desired properties, e.g., based on cost, feasibility of measurement, or statistical power. | Accept | The paper presents new algorithms for finding and enumerating sets satisfying Pearl's front-door (FD) criterion given a causal diagram. While the reviews have made some helpful suggestions which the authors should implement, they all agree that the paper makes a solid contribution to our understanding of this topic.
| train | [
"LcFHRGsYKD",
"J1auZVg3APw",
"vYlWoZsAIu2",
"_D4lqhTWUZ",
"aSoDbx7Q4jQ",
"PB7TqAHmQhx",
"R-PqTXh4-rn",
"rYRWoWLyAPh",
"zPRwIESM79a",
"lNn2AI7i7rd",
"ZCl0Enq0sGn",
"UstovGiQwrb",
"ORlh_KV8gjF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your clarification. I have understood the meaning of ``completeness'' used in this work. \n\nIt would be better to add a couple of sentences to make this point clearly in the main text. \n\nAs a result, I am happy to increase my score.",
" I see. That makes sense.",
" Thank you for the note. We cer... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"PB7TqAHmQhx",
"rYRWoWLyAPh",
"_D4lqhTWUZ",
"R-PqTXh4-rn",
"ORlh_KV8gjF",
"ORlh_KV8gjF",
"UstovGiQwrb",
"ZCl0Enq0sGn",
"lNn2AI7i7rd",
"nips_2022_b8fgqTCBJe",
"nips_2022_b8fgqTCBJe",
"nips_2022_b8fgqTCBJe",
"nips_2022_b8fgqTCBJe"
] |
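For readers unfamiliar with the criterion discussed in the record above, the front-door adjustment itself is a short computation once the relevant conditionals are known. The sketch below evaluates it for discrete variables; the array shapes and names are assumptions for illustration, not part of the paper's algorithms.

```python
import numpy as np

def front_door_effect(p_x, p_m_given_x, p_y_given_mx, x):
    # P(y | do(X=x)) = sum_m P(m|x) * sum_x' P(y|m,x') P(x').
    # Shapes: p_x (nx,), p_m_given_x (nm, nx), p_y_given_mx (ny, nm, nx).
    inner = np.einsum("ymx,x->ym", p_y_given_mx, p_x)  # marginalize x'
    return inner @ p_m_given_x[:, x]                   # marginalize m
```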
nips_2022_9PQ13zJ1HME | Retaining Knowledge for Learning with Dynamic Definition | Machine learning models are often deployed in settings where they must be constantly updated in response to the changes
in class definitions while retaining high accuracy on previously learned definitions. A classical use case is
fraud detection, where new fraud schemes come one after another. While such an update can be accomplished by re-training
on the complete data, the process is inefficient and prevents real-time and on-device learning. On the other hand,
efficient methods that incrementally learn from new data often result in the forgetting of previously-learned knowledge.
We define this problem as Learning with Dynamic Definition (LDD) and demonstrate that popular models, such as
the Vision Transformer and Roberta, exhibit substantial forgetting of past definitions. We present the first practical
and provable solution to LDD. Our proposal is a hash-based sparsity model
\textit{RIDDLE} that solves evolving definitions by associating samples only with relevant parameters. We prove that our model is a universal function approximator and theoretically bound the knowledge lost during the update process.
On practical tasks with evolving class definition in vision and natural language processing, \textit{RIDDLE} outperforms baselines by up to
30\% on the original dataset while providing competitive accuracy on the update dataset. | Accept | This paper focuses on the problem of learning from new data without forgetting the knowledge from the previously learned tasks. The forgetting happens due to the parameter changes when training on new data. The authors propose RIDDLE, a model with a novel parameter-grouping strategy, to effectively preserve the parameters that are most responsible for the original tasks. The key idea is to leverage a locality-sensitive hash function to hash the input and use the bucketing to associate the input with the corresponding set of parameters. The authors conduct both theoretical and empirical analyses on the proposed method, demonstrating that RIDDLE effectively improves the performance on the original tasks, thus showing RIDDLE retains the learned knowledge. All reviewers recognize the novelty and significance of the proposed method. During the discussion, the authors also successfully addressed the reviewers' concerns on the performance comparisons with other baselines, hyperparameter settings, choice of hash functions, etc. Based on the reviews and thorough discussions, we recommend the acceptance of the paper.
| train | [
"aVXJGfs5lK",
"p1fjmxc34z",
"C5ta2fP8Ofx",
"Jlpp78aZnGF",
"HhxcFQMJBNer",
"5bm3p18RXt",
"H9IZsEe2hlL",
"4h-F_Yd8TNw",
"Agc5vc7v1R",
"rn3ytEmODYo",
"ybJuvdI1Ms",
"pYo3sgJnm6z",
"zAPaTuQLbcU",
"TQ6obTeT64"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the support. We are glad our response addresses your concerns.\n\nWe updated the previous response to include the accuracy on update dataset. In each cell, left number is test accuracy on original test set, right number is test accuracy on update test set. We will definitely include a more comprehensiv... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
3
] | [
"p1fjmxc34z",
"H9IZsEe2hlL",
"5bm3p18RXt",
"4h-F_Yd8TNw",
"nips_2022_9PQ13zJ1HME",
"rn3ytEmODYo",
"TQ6obTeT64",
"zAPaTuQLbcU",
"pYo3sgJnm6z",
"ybJuvdI1Ms",
"nips_2022_9PQ13zJ1HME",
"nips_2022_9PQ13zJ1HME",
"nips_2022_9PQ13zJ1HME",
"nips_2022_9PQ13zJ1HME"
] |
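The meta-review above attributes RIDDLE's parameter grouping to locality-sensitive hashing. A generic sign-of-random-projection (SimHash) bucketing scheme is sketched below; the hyperplane count and dimensions are illustrative, not the paper's configuration.

```python
import numpy as np

def simhash_bucket(x, planes):
    # Nearby inputs tend to fall in the same bucket, so they can be routed
    # to the same group of model parameters.
    bits = (planes @ x > 0).astype(int)
    return int("".join(map(str, bits)), 2)

rng = np.random.default_rng(0)
planes = rng.standard_normal((4, 32))  # 4 hyperplanes -> 2**4 buckets
bucket = simhash_bucket(rng.standard_normal(32), planes)
```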
nips_2022_j2Vtg_jhKZ | Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching | Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitive as it requires a huge amount of data and computing resources. In this paper, we propose a method (BeamCLIP) that can effectively transfer the representations of a large pre-trained multimodal model (CLIP-ViT) into a small target model (e.g., ResNet-18). For unsupervised transfer, we introduce cross-modal similarity matching (CSM) that enables a student model to learn the representations of a teacher model by matching the relative similarity distribution across text prompt embeddings. To better encode the text prompts, we design context-based prompt augmentation (CPA) that can alleviate the lexical ambiguity of input text prompts. Our experiments show that unsupervised representation transfer of a pre-trained vision-language model enables a small ResNet-18 to achieve a better ImageNet-1K top-1 linear probe accuracy (66.2%) than vision-only self-supervised learning (SSL) methods (e.g., SimCLR: 51.8%, SwAV: 63.7%), while closing the gap with supervised learning (69.8%). | Accept | This paper presents a method, BeamCLIP, to transfer multimodal representations of CLIP to smaller student networks for downstream tasks. It combines both a representation similarity loss and a cross-modal similarity matching loss between the teacher and the student.
The method is tested on six benchmark datasets and shows convincing performance improvements. Reviewers' concerns on fairness and experimental comparison are properly handled in the author feedback. | train | [
"uTKQizbUnf",
"CxRfKVHXweY",
"88j5bEy3EsL",
"kFg5-Ruvg2u",
"EHSkyN2Qu2cj",
"J47PeGIn7P8D",
"FOavlc5nrD0",
"EEkbDJFoMP2",
"ihc_3MfA3k",
"Rvks2i7qf6W",
"VeyCii4SPH5",
"zYSud-O3msI",
"puMPM5w3HEF",
"meuaV5puLua"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The article does not raise any red flags per se. It would be interesting, however, to read a section that explicitly addresses the ethical implication of a study such as this. How do they relate to ethical discussions in relation to datasets such as ImageNet and the discriminatory biases in Wikipedia? And which p... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4,
4
] | [
"nips_2022_j2Vtg_jhKZ",
"88j5bEy3EsL",
"EHSkyN2Qu2cj",
"nips_2022_j2Vtg_jhKZ",
"meuaV5puLua",
"puMPM5w3HEF",
"zYSud-O3msI",
"VeyCii4SPH5",
"Rvks2i7qf6W",
"nips_2022_j2Vtg_jhKZ",
"nips_2022_j2Vtg_jhKZ",
"nips_2022_j2Vtg_jhKZ",
"nips_2022_j2Vtg_jhKZ",
"nips_2022_j2Vtg_jhKZ"
] |
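The cross-modal similarity matching described in the BeamCLIP abstract amounts to aligning teacher and student similarity distributions over text-prompt embeddings. A hedged sketch follows; the KL form, the temperature, and the assumption of L2-normalized embeddings are illustrative choices, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def csm_loss(student_emb, teacher_emb, text_emb, tau=0.07):
    # Match the student's relative similarity distribution across text
    # prompts to the teacher's (embeddings assumed L2-normalized).
    log_p_s = F.log_softmax(student_emb @ text_emb.T / tau, dim=-1)
    p_t = F.softmax(teacher_emb @ text_emb.T / tau, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")
```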
nips_2022_Fh9l_pVsBfv | Gaussian Copula Embeddings | Learning latent vector representations via embedding models has been shown promising in machine learning. However, most of the embedding models are still limited to a single type of observation data. We propose a Gaussian copula embedding model to learn latent vector representations of items in a heterogeneous data setting. The proposed model can effectively incorporate different types of observed data and, at the same time, yield robust embeddings. We demonstrate the proposed model can effectively learn in many different scenarios, outperforming competing models in modeling quality and task performance. | Accept |
This paper was a difficult case, where numerical review scores might be in disagreement with several statements during the discussion phase, and also with some of my comments below.
Let me explain this in more detail.
All reviewers mentioned several strong points, such as:
- the general novelty of the concept of using copulas for learning embeddings
- the overall soundness and clarity of writing
- interesting conceptual parts, for instance the proposed ordering-preserving likelihood
- good experimental validation
On the other hand, several negative points have also been raised by the reviewers, and some of them remained valid after the rebuttal. To me, the most important of these negative points is the statement by one of the reviewers that "the technical and theoretical contributions of the paper are not significant".
During the discussion phase, we discussed several points that were listed as "strengths" in the original reviews, such as the statement in one of the reviews that "the Gaussian copula could help to naturally capture the underlining correlations with one model." My counter-argument was that a Gaussian copula model can capture the underlying dependencies if and only if the true underlying "pure" dependency after removing the marginal effects (i.e. the copula) was indeed Gaussian. But this might be rare in practice, and an essential part of the copula literature deals with non-Gaussian dependency models that can explain practically relevant phenomena such as tail dependency, etc.
In the end, we somewhat agreed that the usefulness of Gaussian copulas for this purpose might not be so clear, and that some ablation studies might be needed in order to better understand the contribution of the Gaussian copula to the overall method.
A second question I raised during the discussions concerned the efficiency of the proposed inference method, which had an important role in the paper, since the authors explicitly motivated their work by efficiency problems in the "traditional" Gaussian copula estimation procedure. If that really is a main contribution of this paper, I would argue that there should have been a comparison with the Hamiltonian MCMC method for Gaussian copula inference in (Alfredo Kalaitzis & Ricardo Silva, NIPS 2013), which - in my experience - is indeed quite efficient in practice, and this argument was also supported by one of the reviewers.
In the end, to me as an area chair, this is one of the classical "borderline" cases, where a paper does certainly contain some interesting aspects, but on the other hand, there are also many potential problems and limitations. Since all reviewers finally saw this paper above the threshold, I also recommend accepting this paper, but I still would like to mention that I am not fully convinced about this recommendation. | train | [
"xk0XPw2_P5y",
"tzsO7iHO_JJ",
"xJWjDBA4VlG",
"v4y-lqe6pI2",
"9G4iBipgFue",
"83kywO1Xpa",
"KF5dmybX7Zo",
"7xNXqtNmlk8"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Concerns are addressed. The score will be updated.",
" We thank all reviewers for the comments, encouragement, appreciation, and suggestions for improving our paper. We briefly highlight the key notions and potentials of our work below. We also respond to each reviewer specifically in separate responses.\n\nKey... | [
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"xJWjDBA4VlG",
"nips_2022_Fh9l_pVsBfv",
"83kywO1Xpa",
"KF5dmybX7Zo",
"7xNXqtNmlk8",
"nips_2022_Fh9l_pVsBfv",
"nips_2022_Fh9l_pVsBfv",
"nips_2022_Fh9l_pVsBfv"
] |
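As background for the estimation-efficiency debate in the meta-review above, the classical two-step Gaussian copula correlation estimate is only a few lines; the sketch below is that textbook baseline, not the paper's embedding model.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_corr(X):
    # Ranks -> pseudo-uniform marginals -> Gaussian scores, then the
    # Pearson correlation of the scores (X has samples in rows).
    n = X.shape[0]
    U = rankdata(X, axis=0) / (n + 1)
    Z = norm.ppf(U)
    return np.corrcoef(Z, rowvar=False)
```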
nips_2022_pgBpQYss2ba | On the Complexity of Adversarial Decision Making | A central problem in online learning and decision making---from bandits to reinforcement learning---is to understand what modeling assumptions lead to sample-efficient learning guarantees. We consider a general adversarial decision making framework that encompasses (structured) bandit problems with adversarial rewards and reinforcement learning problems with adversarial dynamics. Our main result is to show---via new upper and lower bounds---that the Decision-Estimation Coefficient, a complexity measure introduced by Foster et al. in the stochastic counterpart to our setting, is necessary and sufficient to obtain low regret for adversarial decision making. However, compared to the stochastic setting, one must apply the Decision-Estimation Coefficient to the convex hull of the class of models (or, hypotheses) under consideration. This establishes that the price of accommodating adversarial rewards or dynamics is governed by the behavior of the model class under convexification, and recovers a number of existing results --both positive and negative. En route to obtaining these guarantees, we provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures, including the Information Ratio of Russo and Van Roy and the Exploration-by-Optimization objective of Lattimore and György. | Accept | The submission studies the important problem of quantifying the complexity of learning in adversarial sequential decision problems with partial feedback. Although the problem is well-studied in the full-information setting, the same problems in the bandit and reinforcement learning settings are largely open. It is still not clear how the optimal regret depends on the shape of the action and parameter spaces.
This paper makes a significant contribution in this area. It shows that the Decision-Estimation Coefficient (introduced by Foster et al. 2021) quantifies the complexity of learning in many adversarial sequential decision problems. This result would be of interest to the online learning community at NeurIPS. Although the paper does not really resolve any open questions or show substantial improvements that wouldn't be possible with other techniques, it provides important new insights, and I believe the tools and techniques introduced in this paper will ultimately lead to such improvements. | train | [
"j7zyv8T-tel",
"_MuEoJAHyA7",
"WuAXb__RRrxm",
"l8YWv3F8tza",
"e3SD-RtP7OW",
"uUVYHhs92i",
"CsYRIhtVy87"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your answer, it improved my understanding of your work, and I confirm my score.\nI think the noiseless MAB example is also worth including in the final version.",
" > How practical is the ExO+ algorithm once instantiated to the bandit and MDP examples? \n\nWe will expand the discussion of computat... | [
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"_MuEoJAHyA7",
"e3SD-RtP7OW",
"uUVYHhs92i",
"CsYRIhtVy87",
"nips_2022_pgBpQYss2ba",
"nips_2022_pgBpQYss2ba",
"nips_2022_pgBpQYss2ba"
] |
nips_2022_QPg5TTAdizy | Exploiting the Relationship Between Kendall's Rank Correlation and Cosine Similarity for Attribution Protection | Model attributions are important in deep neural networks as they aid practitioners in understanding the models, but recent studies reveal that attributions can be easily perturbed by adding imperceptible noise to the input. The non-differentiable Kendall's rank correlation is a key performance index for attribution protection. In this paper, we first show that the expected Kendall's rank correlation is positively correlated with cosine similarity and then indicate that the direction of attribution is the key to attribution robustness. Based on these findings, we explore the vector space of attribution to explain the shortcomings of attribution defense methods using $\ell_p$ norm and propose the integrated gradient regularizer (IGR), which maximizes the cosine similarity between natural and perturbed attributions. Our analysis further exposes that IGR encourages neurons with the same activation states for natural samples and the corresponding perturbed samples. Our experiments on different models and datasets confirm our analysis on attribution protection and demonstrate a decent improvement in adversarial robustness. | Accept | Reviewers all find that this paper presents both good theoretical findings and empirical results for an important problem (feature attribution robustness). The approaches the authors used in connecting the relationship between Kendall's rank correlation and cosine similarity, as well as the geometric perspectives, are well received. The presentation is well written, with a few places for quick improvement.
Reviewers raised various weaknesses, but we agreed most of them are minor, do not affect the contribution of this paper, and/or can be fixed without much effort.
Overall, we recommend acceptance and would like to encourage the authors to further improve the paper's presentation following the reviewers' suggestions in the next version. | val | [
"vu8khQc-ve_",
"dHhHuQyCRdX",
"AODU0tmjpDP",
"FekPLdPfLcN",
"HCzohOiURkZ",
"i4_tagKTibY",
"ZQ5MHHCQ25",
"9AGJe8oaqJX",
"08VOZYqHtK7",
"_8c-fi9YDPW"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing the comments !",
" Thank you very much for your reply. We will follow your suggestions and include the tables and discussions on choosing $\\lambda$ in the final version.\n\n",
" Thanks for your reply!\n\nThe revised version is clearer to me. Why not put the tables of choices of $\\lambd... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"i4_tagKTibY",
"AODU0tmjpDP",
"ZQ5MHHCQ25",
"nips_2022_QPg5TTAdizy",
"_8c-fi9YDPW",
"08VOZYqHtK7",
"9AGJe8oaqJX",
"nips_2022_QPg5TTAdizy",
"nips_2022_QPg5TTAdizy",
"nips_2022_QPg5TTAdizy"
] |
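The positive relationship between Kendall's rank correlation and cosine similarity claimed in the abstract above is easy to probe empirically; the snippet below compares the two measures on a synthetic attribution pair (the noise level is an arbitrary illustration).

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
a = rng.standard_normal(1000)            # "natural" attribution map
b = a + 0.3 * rng.standard_normal(1000)  # "perturbed" attribution map

tau, _ = kendalltau(a, b)
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"Kendall tau = {tau:.3f}, cosine = {cos:.3f}")
```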
nips_2022_UEhzUupXbL2 | M2N: Mesh Movement Networks for PDE Solvers | Numerical Partial Differential Equation (PDE) solvers often require discretizing the physical domain by using a mesh. Mesh movement methods provide the capability to improve the accuracy of the numerical solution without introducing extra computational burden to the PDE solver, by increasing mesh resolution where the solution is not well-resolved, whilst reducing unnecessary resolution elsewhere. However, sophisticated mesh movement methods, such as the Monge-Ampère method, generally require the solution of auxiliary equations. These solutions can be extremely expensive to compute when the mesh needs to be adapted frequently. In this paper, we propose, to the best of our knowledge, the first learning-based end-to-end mesh movement framework for PDE solvers. Key requirements of learning-based mesh movement methods are: alleviating mesh tangling, boundary consistency, and generalization to meshes with different resolutions. To achieve these goals, we introduce the neural spline model and the graph attention network (GAT) into our models respectively. While the Neural-Spline-based model provides more flexibility for large mesh deformation, the GAT-based model can handle domains with more complicated shapes and is better at performing delicate local deformation. We validate our methods on stationary and time-dependent, linear and non-linear equations, as well as regularly and irregularly shaped domains. Compared to the traditional Monge-Ampère method, our approach can greatly accelerate the mesh adaptation process by three to four orders of magnitude, whilst achieving comparable numerical error reduction. | Accept | Reviews for this paper were somewhat mixed, but overall the AC agrees with reviewer h6GN and others that the idea in this work is creative and could inspire future work. In particular, while the experiments do not display diversity in terms of mesh density/shape, the overall approach is compelling and fairly well verified by the experiments. It seems reasonable for NeurIPS to take a risk on publishing this work in hopes that it will be valuable for future PDE solution machinery that can address some of the missing finer points.
In the revised camera-ready version of the paper, please acknowledge the shortcomings/limitations of the work (e.g., use of Monge-Ampère in contrast to other methods and use of simple 2D problems) and say something about how you might address these limitations. | train | [
"i_s-5MMEJ9",
"FOS2Fy4DDHK",
"wh6zuq6sbeh",
"qX1SLbazurn",
"X4wovIn1Ey5",
"7B716t7QtXy",
"aUZtdJVlRzY",
"RwS-GJEZ13-",
"VLaClXnZcgS"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to sincerely thank all the reviewers for your professional and constructive comments. We have replied to your questions and comments in detail respectively. We have also revised the manuscript accordingly to address the proposed issues, including the experimental settings, cited literature, etc.",
... | [
-1,
-1,
-1,
-1,
-1,
6,
4,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
2,
4,
3,
5
] | [
"nips_2022_UEhzUupXbL2",
"VLaClXnZcgS",
"RwS-GJEZ13-",
"aUZtdJVlRzY",
"7B716t7QtXy",
"nips_2022_UEhzUupXbL2",
"nips_2022_UEhzUupXbL2",
"nips_2022_UEhzUupXbL2",
"nips_2022_UEhzUupXbL2"
] |
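For context on what M2N's network replaces, classical r-adaptivity in one dimension moves nodes to equidistribute a monitor function. The sketch below is that traditional baseline, not the learned method from the paper.

```python
import numpy as np

def equidistribute(x, monitor, n_new):
    # Move mesh nodes so each cell carries an equal share of the monitor
    # function (large monitor values -> locally finer mesh).
    mids = 0.5 * (x[:-1] + x[1:])
    cum = np.concatenate([[0.0], np.cumsum(monitor(mids) * np.diff(x))])
    cum /= cum[-1]
    return np.interp(np.linspace(0.0, 1.0, n_new), cum, x)
```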
nips_2022_zAuiZpZ478l | Hierarchical Lattice Layer for Partially Monotone Neural Networks | Partially monotone regression is a regression analysis in which the target values are monotonically increasing with respect to a subset of input features. The TensorFlow Lattice library is one of the standard machine learning libraries for partially monotone regression. It consists of several neural network layers, and its core component is the lattice layer. One of the problems of the lattice layer is that it requires the projected gradient descent algorithm with many constraints to train it. Another problem is that it cannot receive a high-dimensional input vector due to the memory consumption. We propose a novel neural network layer, the hierarchical lattice layer (HLL), as an extension of the lattice layer so that we can use a standard stochastic gradient descent algorithm to train HLL while satisfying monotonicity constraints and so that it can receive a high-dimensional input vector. Our experiments demonstrate that HLL did not sacrifice its prediction performance on real datasets compared with the lattice layer. | Accept | Overall, the reviews about this paper are very positive. The authors spent great effort engaging in discussions and improving the paper with clarifications and additional experiments. We recommend accepting the paper. | train | [
"JlCStr5hjdW",
"GbTfg_Ydvn",
"hoM__K3SR2R",
"f7haDMPjBJ",
"OiueemPt7FK",
"-c12meKnHR6",
"OAb150H1w_X",
"0J1ZcljBC3-",
"PDRpDnxp5-n",
"_kzzcO2yLQK",
"ctbL-Y3T5h_",
"jXlChWc6BAJ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank authors for answering my question. Taking into account other reviews and authors responses, I am updating my score to accept.\n\nIt would suggest that authors add reference to the crossing problem example in [Tagasovska and Lopez-Paz, 2019] to the updated version of their manuscript.",
" For better repr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"PDRpDnxp5-n",
"f7haDMPjBJ",
"OiueemPt7FK",
"0J1ZcljBC3-",
"-c12meKnHR6",
"OAb150H1w_X",
"jXlChWc6BAJ",
"ctbL-Y3T5h_",
"_kzzcO2yLQK",
"nips_2022_zAuiZpZ478l",
"nips_2022_zAuiZpZ478l",
"nips_2022_zAuiZpZ478l"
] |
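The projected-gradient training that the HLL abstract contrasts with plain SGD boils down, in one dimension, to projecting lattice parameters back onto the monotone set after each step; that L2 projection is exactly isotonic regression, as the toy sketch below illustrates (a 1-D simplification, not the actual TensorFlow Lattice projection).

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

theta = np.array([0.2, 0.1, 0.5, 0.4])  # 1-D lattice vertex values
proj = IsotonicRegression().fit_transform(np.arange(len(theta)), theta)
# proj is the closest non-decreasing vector in the L2 sense -- the kind
# of projection a constrained lattice trainer applies after each update.
```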
nips_2022_cPVuuk1lZb3 | Emergence of Hierarchical Layers in a Single Sheet of Self-Organizing Spiking Neurons | Traditionally, convolutional neural network architectures have been designed by stacking layers on top of each other to form deeper hierarchical networks. The cortex in the brain, however, does not just stack layers as done in standard convolutional neural networks; instead, different regions are organized next to each other in a large single sheet of neurons. Biological neurons self-organize to form topographic maps, where neurons encoding similar stimuli group together to form logical clusters. Here we propose new self-organization principles that allow for the formation of hierarchical cortical regions (i.e. layers) in a completely unsupervised manner without requiring any predefined architecture. Synaptic connections are dynamically grown and pruned, which allows us to actively constrain the number of incoming and outgoing connections. This way we can minimize the wiring cost by taking into account both the synaptic strength and the connection length. The proposed method uses purely local learning rules in the form of spike-timing-dependent plasticity (STDP) with lateral excitation and inhibition. We show experimentally that these self-organization rules are sufficient for topographic maps and hierarchical layers to emerge. Our proposed Self-Organizing Neural Sheet (SONS) model can thus form traditional neural network layers in a completely unsupervised manner from just a single large pool of unstructured spiking neurons. | Accept | Novel and sound contribution. | train | [
"Y5FR-yXx87G",
"seREe0mWPy",
"ypfUNf4rz15",
"Tnw1t0y7Kps",
"kWoWK50rFYV",
"YTW8ze_NMa-",
"xeQjvz_XUU",
"UnGyp2jdpf",
"uhC2IcN7ENY",
"IacPVCcJ2z",
"VWES3S_PChb",
"Fv_CSHc1cri"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for an incredibly thorough response, addressing all my concerns with the paper. I have increased the score to 7. \n\nAs a side note, the color palette adjustment helps tremendously. I was unable to perceive the pattern differences in Fig 4, e.g., whereas they are quite obvious... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"UnGyp2jdpf",
"kWoWK50rFYV",
"YTW8ze_NMa-",
"nips_2022_cPVuuk1lZb3",
"VWES3S_PChb",
"Fv_CSHc1cri",
"IacPVCcJ2z",
"uhC2IcN7ENY",
"nips_2022_cPVuuk1lZb3",
"nips_2022_cPVuuk1lZb3",
"nips_2022_cPVuuk1lZb3",
"nips_2022_cPVuuk1lZb3"
] |
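The local learning rule behind the SONS record above is pairwise STDP; a standard exponential kernel is sketched below, with the amplitudes and time constant chosen as typical illustrative values rather than the paper's settings.

```python
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Pairwise STDP: potentiate when the presynaptic spike precedes the
    # postsynaptic one (dt = t_post - t_pre > 0), depress otherwise.
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))
```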
nips_2022_uuaMrewU9Kk | Skills Regularized Task Decomposition for Multi-task Offline Reinforcement Learning | Reinforcement learning (RL) with diverse offline datasets can have the advantage of leveraging the relation of multiple tasks and the common skills learned across those tasks, hence allowing us to deal with real-world complex problems efficiently in a data-driven way. In offline RL where only offline data is used and online interaction with the environment is restricted, it remains difficult to achieve the optimal policy for multiple tasks, especially when the data quality varies for the tasks. In this paper, we present a skill-based multi-task RL technique on heterogeneous datasets that are generated by behavior policies of different quality. To learn the shareable knowledge across those datasets effectively, we employ a task decomposition method for which common skills are jointly learned and used as guidance to reformulate a task into shared and achievable subtasks. In this joint learning, we use a Wasserstein Auto-Encoder (WAE) to represent both skills and tasks on the same latent space and use the quality-weighted loss as a regularization term to induce tasks to be decomposed into subtasks that are more consistent with high-quality skills than others. To improve the performance of offline RL agents learned on the latent space, we also augment datasets with imaginary trajectories relevant to high-quality skills for each task. Through experiments, we show that our multi-task offline RL approach is robust to different-quality datasets and it outperforms other state-of-the-art algorithms for several robotic manipulation tasks and drone navigation tasks. | Accept | The reviewers appreciated the authors' response and clarifications. Given the feedback from the reviewers and the discussion, I would like to recommend this paper for acceptance and congratulate the authors on a strong submission. I encourage the authors to address the reviewers' comments for the final version of the paper. | train | [
"JaE8MJJCiX-",
"M7wjV6lYEsD",
"am4jDDz_57M",
"sCkbs6ysbRv",
"mcMVKinttN",
"f92r_3amEbD",
"b8Fmas40bu",
"fcKJxYh9wkgF",
"4j-PvzbuS1u",
"iiganscntWY",
"LuYV7Il1xX",
"0sw_yDSl-5h",
"nWVoHWsPL8R",
"uV4gbCD9cbT",
"88bzbbdflpF",
"vg7osmNd6o7",
"o6JiiPUuJ5U",
"EaKZcoHFVba"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your thoughtful comment and insight.\n### Reviewer comment\nCould you explain more why TD3+BC uses spatial isolation of model weights? If TD3+BC is just adding to the state a one-hot vector encoding the tasks, then couldn't many of the weights be shared across the different tasks?\n\n### Author resp... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"M7wjV6lYEsD",
"sCkbs6ysbRv",
"LuYV7Il1xX",
"b8Fmas40bu",
"fcKJxYh9wkgF",
"0sw_yDSl-5h",
"88bzbbdflpF",
"o6JiiPUuJ5U",
"vg7osmNd6o7",
"vg7osmNd6o7",
"vg7osmNd6o7",
"o6JiiPUuJ5U",
"o6JiiPUuJ5U",
"EaKZcoHFVba",
"EaKZcoHFVba",
"nips_2022_uuaMrewU9Kk",
"nips_2022_uuaMrewU9Kk",
"nips_20... |
nips_2022_lNokkSaUbfV | Masked Autoencoding for Scalable and Generalizable Decision Making | We are interested in learning scalable agents for reinforcement learning that can learn from large-scale, diverse sequential data similar to current large vision and language models. To this end, this paper presents masked decision prediction (MaskDP), a simple and scalable self-supervised pretraining method for reinforcement learning (RL) and behavioral cloning (BC). In our MaskDP approach, we apply a masked autoencoder (MAE) to state-action trajectories, wherein we randomly mask state and action tokens and reconstruct the missing data. By doing so, the model is required to infer masked-out states and actions and extract information about dynamics. We find that masking different proportions of the input sequence significantly helps with learning a better model that generalizes well to multiple downstream tasks. In our empirical study we find that a MaskDP model gains the capability of zero-shot transfer to new BC tasks, such as single and multiple goal reaching, and it can zero-shot infer skills from a few example transitions. In addition, MaskDP transfers well to offline RL and shows promising scaling behavior w.r.t. model size. It is amenable to data-efficient finetuning, achieving competitive results with prior methods based on autoregressive pretraining. | Accept | The paper investigates the use of Masked Auto Encoders (MAE) for unsupervised pretraining in RL. Reviewer ufWH summarizes well, "this paper shows extensive empirical comparisons against baselines. In particular, offline RL results on Walker are comparable to SoTA in ExoRL [31]". cGtJ says, "The paper propose several downstream application after the unsupervised pre training: Goal reaching, prompting, and fine tuning. The experiments are done in the deep mind suit with Mojuco. The authors also provides analysis on masking ratio and scalability."
Overall, reviewers find this to be a simple but strong method for unsupervised training for RL. I agree. | train | [
"NQ_V7JyEimY",
"01x7Bw4g5UT",
"L3StaCYdCO",
"MMGEygR5nQC",
"jrqzQ6YR8_H",
"WuP2NTxWQE_p",
"f74lytXhqBC",
"l0w7_fuOiNN",
"FwpngiXK8N"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. We want to address more about dataset quality.\n\n> diverse trajectories generated by an exploration policy trained with e.g. [DIAYN](https://arxiv.org/abs/1802.06070) (or a more recent alternative) to improve the performance of the proposed method\n\nDiverse exploratory data is more ... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"01x7Bw4g5UT",
"jrqzQ6YR8_H",
"nips_2022_lNokkSaUbfV",
"f74lytXhqBC",
"l0w7_fuOiNN",
"FwpngiXK8N",
"nips_2022_lNokkSaUbfV",
"nips_2022_lNokkSaUbfV",
"nips_2022_lNokkSaUbfV"
] |
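The MaskDP row above hinges on randomly masking state and action tokens of a trajectory before reconstruction. A minimal sketch of that masking step follows; the tensor shapes, the zero-fill convention for masked tokens, and the interleaved [s_0, a_0, ..., s_{T-1}, a_{T-1}] ordering are assumptions made here for illustration, not the paper's exact implementation.

```python
import torch

def mask_trajectory(states, actions, mask_ratio=0.75):
    """Randomly mask state/action tokens along the time axis.

    states: (B, T, d_s); actions: (B, T, d_a). Returns masked copies
    (masked positions zeroed) plus a (B, 2*T) boolean mask over the
    interleaved token sequence; an encoder-decoder would be trained to
    reconstruct the masked tokens from the visible ones.
    """
    B, T, _ = states.shape
    mask = torch.rand(B, 2 * T) < mask_ratio       # True = masked out
    s_mask, a_mask = mask[:, 0::2], mask[:, 1::2]  # split by token type
    masked_states = states.masked_fill(s_mask.unsqueeze(-1), 0.0)
    masked_actions = actions.masked_fill(a_mask.unsqueeze(-1), 0.0)
    return masked_states, masked_actions, mask
```

A reconstruction objective in this style would score, for example, mean-squared error on the masked positions only, mirroring how image MAEs compute loss on masked patches.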
nips_2022_cx5ViLfcVq | Information-Theoretic Analysis of Unsupervised Domain Adaptation | This paper uses information-theoretic tools to analyze the generalization error in unsupervised domain adaptation (UDA). This study presents novel upper bounds for two notions of generalization errors. The first notion measures the gap between the population risk in the target domain and that in the source domain, and the second measures the gap between the population risk in the target domain and the empirical risk in the source domain. While our bounds for the first kind of error are in line with the traditional analysis and give similar insights, our bounds on the second kind of error are algorithm-dependent and also inspire insights into algorithm designs. Specifically, we present two simple techniques for improving generalization in UDA and validate them experimentally. | Reject | The reviewers agreed that the paper's novelty is limited and lacks proper justification. The authors' extensive discussion shows that they could improve the manuscript, but the number of modifications in the revised version would require another round of reviews.
Therefore, I encourage the authors to submit an improved version of their work to an upcoming venue. | train | [
"4zQZ2x32Yf",
"PrQKlpxcMt",
"nwIFGVWI1PC",
"XWqL4cUtUYZ",
"2VpIcZascjJ",
"I0EcaenELfg",
"gFoJCqU9nYV",
"mWp-pOUkil5",
"kREtQOnb5aC",
"eECJC-PFvYz",
"_ADnAPgXPrz",
"UUVN-p9GtWp",
"J-tabn7Ahz",
"gZELFcpKn-Pa",
"i-5dS3G_kd",
"J9LeLZNBj8A",
"-e-HKe4Et_k",
"kDohLkXcEg7D",
"Tx1lVK7GpaI... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for the reply!\n\nWe also provide some sample complexity bounds to characterize the convergence rate of the empirical KL divergence in the Appendix of the latest revision. Please check Theorem B.1 and Corollary B.1 in Section B.8 of Appendix and Theorem B.2 in Section B.9 of Appendix. Unlike Theorem 5.1 th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"PrQKlpxcMt",
"-e-HKe4Et_k",
"XWqL4cUtUYZ",
"2VpIcZascjJ",
"I0EcaenELfg",
"kREtQOnb5aC",
"_ADnAPgXPrz",
"kREtQOnb5aC",
"eECJC-PFvYz",
"J-tabn7Ahz",
"nips_2022_cx5ViLfcVq",
"cBnind-kFN1",
"gZELFcpKn-Pa",
"i-5dS3G_kd",
"fZR5yjFapd",
"-e-HKe4Et_k",
"kDohLkXcEg7D",
"Ig16qySpQTw",
"Wb... |
nips_2022_iUOUnyS6uTf | STNDT: Modeling Neural Population Activity with Spatiotemporal Transformers | Modeling neural population dynamics underlying noisy single-trial spiking activities is essential for relating neural observation and behavior. A recent non-recurrent method - Neural Data Transformers (NDT) - has shown great success in capturing neural dynamics with low inference latency without an explicit dynamical model. However, NDT focuses on modeling the temporal evolution of the population activity while neglecting the rich covariation between individual neurons. In this paper we introduce SpatioTemporal Neural Data Transformer (STNDT), an NDT-based architecture that explicitly models responses of individual neurons in the population across time and space to uncover their underlying firing rates. In addition, we propose a contrastive learning loss that works in accordance with the mask modeling objective to further improve the predictive performance. We show that our model achieves state-of-the-art performance at the ensemble level in estimating neural activities across four neural datasets, demonstrating its capability to capture autonomous and non-autonomous dynamics spanning different cortical regions while being completely agnostic to the specific behaviors at hand. Furthermore, STNDT's spatial attention mechanism reveals consistently important subsets of neurons that play a vital role in driving the response of the entire population, providing interpretability and key insights into how the population of neurons performs computation. | Accept | This paper introduces a spatiotemporal neural data transformer (STNDT) model for multineuronal spike trains. The method achieves state-of-the-art performance on a variety of tasks within the Neural Latents Benchmark (NLB), which was introduced at NeurIPS in 2021. Technically speaking, the approach is a slight tweak on standard transformers, but it allows for trial-by-trial reweighting of values based on a neuron-by-neuron similarity matrix. It is interesting that this tweak is enough to boost performance.
While the reviewers expressed some hesitation with regard to the degree of technical novelty and the questionable interpretability of the model, I think it is important to recognize improvements in benchmarks that the field has put forward at NeurIPS. The paper is well written and the results are thorough. Overall, I think it is a valuable contribution.
In addition to addressing the reviewers' concerns, I would like the authors to address the following points in the final paper:
- I still think Figure 1 is confusing. The spike train in the top row and the firing rates in the bottom left seem to suggest that there are N=3 neurons and T=5 time steps, but the matrices and stated shapes suggest that there are N=5 neurons and T=3 time steps. I believe the authors intended to have N=3 and T=5. If so, please correct the figure to help avoid any confusion!
- The introduction does a good job of summarizing related work in statistical neuroscience, but it does not, as far as I can see, discuss the enormous literature on variations of transformers. I do not know the best references, but I would be surprised if there haven't been related works in the transformer literature for other spatiotemporal tasks (e.g. applied to modeling audio or video data). The lack of discussion of related work in the broader ML community is a glaring omission. | train | [
"6-ahKPQYgdP",
"i903Pt07sCb",
"wB25sla7ei6",
"2L-Zb4jPtw",
"XtjWk0UUpfb",
"Df9YY-phs5F",
"12bf3-aXe5",
"XZByl5aKBXjn",
"eKbPIzUmkmt",
"KY5t_D7DFL0",
"79yjiBGgrdz",
"22vvhKdWdu-",
"_wINiKb45Cr",
"WIQaZP20XY",
"CGosHioMQ1",
"mp6qIcpo9q",
"f4hvhwHxOs",
"KV8BoBtOSWw"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the reviewer's evaluation and recognition of our work.",
" We thank the reviewer for the recognition of our work. We will include the discussion as the reviewer suggested. Although the question of what non-trivial input features (other than trivial mean spiking activity) the spatial attention pick... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"XZByl5aKBXjn",
"12bf3-aXe5",
"XtjWk0UUpfb",
"Df9YY-phs5F",
"WIQaZP20XY",
"CGosHioMQ1",
"eKbPIzUmkmt",
"KY5t_D7DFL0",
"79yjiBGgrdz",
"f4hvhwHxOs",
"22vvhKdWdu-",
"mp6qIcpo9q",
"WIQaZP20XY",
"CGosHioMQ1",
"KV8BoBtOSWw",
"nips_2022_iUOUnyS6uTf",
"nips_2022_iUOUnyS6uTf",
"nips_2022_iU... |
nips_2022_rH-X09cB50f | Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty | Cross-domain few-shot learning (CD-FSL) has drawn increasing attention for handling large differences between the source and target domains--an important concern in real-world scenarios. To overcome these large differences, recent works have considered exploiting small-scale unlabeled data from the target domain during the pre-training stage. This data enables self-supervised pre-training on the target domain, in addition to supervised pre-training on the source domain. In this paper, we empirically investigate which pre-training is preferred based on domain similarity and few-shot difficulty of the target domain. We discover that the performance gain of self-supervised pre-training over supervised pre-training becomes large when the target domain is dissimilar to the source domain, or the target domain itself has low few-shot difficulty. We further design two pre-training schemes, mixed-supervised and two-stage learning, that improve performance. In this light, we present six findings for CD-FSL, which are supported by extensive experiments and analyses on three source and eight target benchmark datasets with varying levels of domain similarity and few-shot difficulty. Our code is available at https://github.com/sungnyun/understanding-cdfsl. | Accept | Authors present a comprehensive assessment of learning strategies for CD-FSL, including supervised learning, self-supervised learning (4 variants thereof), semi-supervised learning (referred to as "mixed supervised"), single-stage and dual-stage training, and data augmentation strategies. Patterns are found among all these approaches by correlating results to source-target similarity by EMD, and measuring target task difficulty from SL.
Pros:
- [AC/R] A comprehensive study like this is currently missing from the CD-FSL literature. Some findings are less obvious than others and provide valuable information for the community.
- [AC/R] Extensive Experiments
- [AC/R] Well-written and easy to follow
Cons:
- [R] Assumption of unlabeled samples for the target domain is controversial. [AC] The CD-FSL benchmark includes unlabeled samples for study. While there may be applications where unlabeled data is not available, this does not diminish the utility of this setting.
- [R] Not clear if unlabeled and labeled data in target domain have overlapping classes. [AC] The benchmark assumes that there are overlapping classes, though the authors acknowledge that in practice this may not be the case. Probably an idea for a future benchmark rather than a critique of this work.
- [R] Some results are not surprising. Authors addressed this concern by pointing out that their results are in contradiction to prior findings, specifically around SSL.
- [AC/R] Only one distance measure was studied. Authors provided a rationale for why EMD makes sense, but ideally they would have provided multiple measures, assessed how the measures are similar and different, and examined how the findings may change.
- [R] Lack of visualization of embeddings. Authors have added visualizations to the appendix.
Overall, reviews lean toward accept after revisions. Some negative comments from one reviewer are critiques of the prior published benchmark rather than of this particular work. Given this, the AC feels the assessment is mostly accept, though borderline. If the authors could add additional distance measures and study how the different measures impact the results, it would improve the quality and impact of the paper. Recommend accept.
AC Rating: Borderline Accept
| train | [
"QqQZg1VD3T",
"SkixC1hiYCj",
"6mOw0jN5R1v",
"s9hUx9kSv9L",
"SkKioipWZ5",
"mGdKYnuwKYE",
"LV9EOEDstf",
"_kahSyvzLY",
"iwNcPJ-Fcy",
"sxGyJQxATLg",
"fbms6DPPTwi",
"e5yP7Ev1Sh_",
"XZFl_nlZH0",
"ohI7eubE483"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Ger5,\n\nWe sincerely appreciate your valuable comments for improving our work. We have put in efforts in the revision (e.g., few-shot difficulty quantification and deeper backbone) to enhance our work based on your comments. \n\nWe would like to send you a gentle reminder that the rolling discussi... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"e5yP7Ev1Sh_",
"6mOw0jN5R1v",
"sxGyJQxATLg",
"SkKioipWZ5",
"nips_2022_rH-X09cB50f",
"nips_2022_rH-X09cB50f",
"ohI7eubE483",
"ohI7eubE483",
"XZFl_nlZH0",
"XZFl_nlZH0",
"e5yP7Ev1Sh_",
"nips_2022_rH-X09cB50f",
"nips_2022_rH-X09cB50f",
"nips_2022_rH-X09cB50f"
] |
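Domain similarity in the row above is quantified with EMD. As a rough illustration of the idea only (not the paper's pipeline), the sketch below averages SciPy's one-dimensional Wasserstein distance across dimensions of embedded samples; pooling per-dimension distances and mapping to (0, 1] via exp(-d) are assumptions made here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def domain_similarity(source_feats, target_feats):
    """Crude source-target similarity proxy from per-dimension 1-D EMD.

    source_feats, target_feats: (n, d) arrays of feature embeddings.
    Returns a score in (0, 1]; larger means more similar domains.
    """
    d = source_feats.shape[1]
    emd = np.mean([wasserstein_distance(source_feats[:, j],
                                        target_feats[:, j])
                   for j in range(d)])
    return float(np.exp(-emd))
```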
nips_2022_JUXn1vXcrLA | ALMA: Hierarchical Learning for Composite Multi-Agent Tasks | Despite significant progress on multi-agent reinforcement learning (MARL) in recent years, coordination in complex domains remains a challenge. Work in MARL often focuses on solving tasks where agents interact with all other agents and entities in the environment; however, we observe that real-world tasks are often composed of several isolated instances of local agent interactions (subtasks), and each agent can meaningfully focus on one subtask to the exclusion of all else in the environment. In these composite tasks, successful policies can often be decomposed into two levels of decision-making: agents are allocated to specific subtasks and each agent acts productively towards their assigned subtask alone. This decomposed decision making provides a strong structural inductive bias, significantly reduces agent observation spaces, and encourages subtask-specific policies to be reused and composed during training, as opposed to treating each new composition of subtasks as unique. We introduce ALMA, a general learning method for taking advantage of these structured tasks. ALMA simultaneously learns a high-level subtask allocation policy and low-level agent policies. We demonstrate that ALMA learns sophisticated coordination behavior in a number of challenging environments, outperforming strong baselines. ALMA's modularity also enables it to better generalize to new environment configurations. Finally, we find that while ALMA can integrate separately trained allocation and action policies, the best performance is obtained only by training all components jointly. Our code is available at https://github.com/shariqiqbal2810/ALMA | Accept | The paper addresses multi-agent coordination in hierarchical task scenario by simultaneously learning coordination and execution policies.
In summary, the reviewers found, and I concur, that the paper would be a welcome contribution to the NeurIPS community. The reviewers found the paper clearly written, addressing an important problem, with good results. The rebuttal added new ablations and addressed the most critical reviewer comments. The final version should discuss the method's limitations. | train | [
"vBJHZ0Hy3W",
"ikSlM_B-vaj",
"sv2sKtbO0Hx",
"X8Y9ychK9kB",
"By8qEiUJtgn",
"02T138MZCx",
"fhOxIdozj7g",
"PTcsPrOiNjZ",
"-br6ZuDmugH",
"xIem61namDz",
"072pZoGB1bW"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" My concerns are resolved by the authors. Thanks for their response. ",
" We humbly request that the reviewer acknowledge our response to their review before the end of the discussion period, as we believe all concerns have been thoroughly addressed.",
" Well received. I'll keep my score.",
" Thank you for y... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
3
] | [
"ikSlM_B-vaj",
"xIem61namDz",
"PTcsPrOiNjZ",
"By8qEiUJtgn",
"02T138MZCx",
"072pZoGB1bW",
"xIem61namDz",
"-br6ZuDmugH",
"nips_2022_JUXn1vXcrLA",
"nips_2022_JUXn1vXcrLA",
"nips_2022_JUXn1vXcrLA"
] |
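ALMA's two-level structure, as summarized above, first allocates agents to subtasks and then lets each agent act on its assigned subtask's observation alone. A toy control step with that shape is sketched below; the greedy argmax allocation and the score matrix are stand-ins for ALMA's learned allocation policy, assumed here only to make the structure concrete.

```python
import numpy as np

def allocate_and_act(scores, low_level_policies, subtask_obs):
    """Two-level step: assign each agent the subtask with the highest
    score, then act with its low-level policy on that subtask's
    observation only (the reduced observation space the row describes).

    scores: (n_agents, n_subtasks) array; low_level_policies: list of
    callables obs -> action; subtask_obs: list of per-subtask observations.
    """
    assignment = np.argmax(scores, axis=1)  # agent i -> subtask id
    actions = [low_level_policies[i](subtask_obs[assignment[i]])
               for i in range(len(low_level_policies))]
    return assignment, actions
```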
nips_2022_nzuuao_V-B_ | Foreseeing Privacy Threats from Gradient Inversion Through the Lens of Angular Lipschitz Smoothness | Recent works proposed server-side input recovery attacks in federated learning (FL), in which an honest-but-curious server can recover clients’ data (e.g., images) using shared model gradients, thus raising doubts regarding the safety of FL. However, the attack methods are typically demonstrated on only a few models or focus heavily on the reconstruction of a single image, which is easier than that of a batch (multiple images). Thus, in this study, we systematically re-evaluated state-of-the-art (SOTA) attack methods on a variety of models in the context of batch reconstruction. For a broad spectrum of models, we considered two types of model variations: implicit (i.e., without any change in architecture) and explicit (i.e., with architectural changes). Motivated by the re-evaluation results that the quality of reconstructed image batch differs per model, we propose angular Lipschitz constant of a model gradient function with respect to an input as a measure that explains the vulnerability of a model against input recovery attacks. The prototype of the proposed measure is derived from our theorem on the convergence of attackers’ gradient matching optimization, and re-designed into the scale-invariant form to prevent trivial server-side loss scaling trick. We demonstrated the predictability of the proposed measure on the vulnerability under recovery attacks by empirically showing its strong monotonic correlation with not only loss drop during gradient matching optimization but also the quality of the reconstructed image batch. We expect our measure to be a key factor for developing client-side defensive strategies against privacy threats in our proposed realistic FL setting called black-box setting, where the server deliberately conceals global model information from clients excluding model gradients. | Reject | Several analyses provided in the paper are not novel and known in the literature. The black-box setting is not properly motivated and impractical. The angular Lipschitz constant is a good contribution, but it seems to be only a small part of the paper. For these reasons, the reviewers are not convinced that the contribution of this paper is significant enough. | val | [
"GGFzI6PuybF",
"NFIQcyOPDQ",
"jhbmV6ESZ-K",
"vwnojhqtOit",
"27EIxOGczR",
"y5bPKI90VU",
"IrC6bOrl8IS",
"udwFBWw48Gx",
"_n01W5uXS_3",
"dKiH0sH-NCf",
"i0QvM1mJYeK",
"a_uw5oDsoaP",
"iR7noAlC1h",
"868jXWErHzw",
"unWq0SYpXU_",
"GcCiETEpVsU",
"hkn7zly51Go",
"awSgOWQDmE6",
"scuBQ1OKf-X",... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" I have read all comments presented in reponse to this review and have decided to keep my score. I hope this submission could be rewritten from the ground up to reflect the large amount of feedback and various references provided by the other reviewers and me during the discussion period.",
" Today is the end of... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
6,
2,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"868jXWErHzw",
"vK9IqNrT59D",
"xUs2r5mVZzW",
"nIb39BY5G10",
"y5bPKI90VU",
"IrC6bOrl8IS",
"xUs2r5mVZzW",
"xNBK15Xket-",
"vK9IqNrT59D",
"nIb39BY5G10",
"a_uw5oDsoaP",
"iR7noAlC1h",
"scuBQ1OKf-X",
"7KtxhHgrCaq",
"SuBTrr5yRYF9",
"v91U1SZeDWA",
"ozFjRvG-GiD",
"scuBQ1OKf-X",
"Bl9XJbGi7D... |
nips_2022_8LeCgKb6UX | Graph Reordering for Cache-Efficient Near Neighbor Search | Graph search is one of the most successful algorithmic trends in near neighbor search. Several of the most popular and empirically successful algorithms are, at their core, a greedy walk along a pruned near neighbor graph. However, graph traversal applications often suffer from poor memory access patterns, and near neighbor search is no exception to this rule. Our measurements show that popular search indices such as the hierarchical navigable small-world graph (HNSW) can have poor cache miss performance. To address this issue, we formulate the graph traversal problem as a cache hit maximization task and propose multiple graph reordering as a solution. Graph reordering is a memory layout optimization that groups commonly-accessed nodes together in memory. We mathematically formalize the connection between the graph layout and the cache complexity of search. We present exhaustive experiments applying several reordering algorithms to a leading graph-based near neighbor method based on the HNSW index. We find that reordering improves the query time by up to 40%, we present analysis and improvements for existing graph layout methods, and we demonstrate that the time needed to reorder the graph is negligible compared to the time required to construct the index. | Accept | This paper studies how to order in-memory sequences for graph embedding. There was a positive consensus that the studied problem is interesting and results are sufficiently discussed. There were some concerns on missing results, which were addressed during rebuttals. | test | [
"ov-NIRwWRz",
"gsCONplZ1p",
"nJ7z_b_rFhw",
"cI6zBekkfvd",
"L1s1q1hISTq",
"Sd1Zn-J9K4R"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your review and comments to improve the paper. We address specific issues below.\n\n**Performance via hardware prefetching:** We’ve done some additional experiments to understand where the performance improvements come from. These experiments supplement our brief discussion of prefetching in Section... | [
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
3,
4,
4
] | [
"Sd1Zn-J9K4R",
"L1s1q1hISTq",
"cI6zBekkfvd",
"nips_2022_8LeCgKb6UX",
"nips_2022_8LeCgKb6UX",
"nips_2022_8LeCgKb6UX"
] |
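Graph reordering, the subject of the row above, relabels nodes so that nodes traversed together also sit together in memory. The sketch below shows only the mechanical relabel-and-relayout step using a plain BFS order; the reordering objectives the paper actually benchmarks are more involved and are not reproduced here.

```python
from collections import deque

def bfs_reorder(adj):
    """Relabel nodes of a graph in BFS order and rebuild the adjacency.

    adj: dict mapping node id (0..n-1) -> list of neighbor ids. Nodes
    discovered consecutively receive consecutive new ids, so a greedy
    walk touching a neighborhood reads from nearby memory locations.
    """
    n = len(adj)
    order, seen = [], set()
    for root in range(n):                 # cover disconnected components
        if root in seen:
            continue
        seen.add(root)
        queue = deque([root])
        while queue:
            u = queue.popleft()
            order.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
    new_id = {old: new for new, old in enumerate(order)}
    return {new_id[u]: sorted(new_id[v] for v in adj[u]) for u in adj}
```

In an actual search index the relabeling would also permute the stored vectors themselves, since the point payloads, not just the adjacency lists, dominate the cache footprint.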
nips_2022_H547BtAyOJ4 | Integral Probability Metrics PAC-Bayes Bounds | We present a PAC-Bayes-style generalization bound which enables the replacement of the KL-divergence with a variety of Integral Probability Metrics (IPM). We provide instances of this bound with the IPM being the total variation metric and the Wasserstein distance. A notable feature of the obtained bounds is that they naturally interpolate between classical uniform convergence bounds in the worst case (when the prior and posterior are far away from each other), and improved bounds in favorable cases (when the posterior and prior are close). This illustrates the possibility of reinforcing classical generalization bounds with algorithm- and data-dependent components, thus making them more suitable to analyze algorithms that use a large hypothesis space. | Accept | PAC-Bayes bounds provide control of the risk of aggregated predictors. In these bounds, the Kullback-Leibler divergence between the aggregation distribution and a prior appears in the upper bound on the risk. In this paper, the authors prove variants where the KL is replaced by an IPM (Integral Probability Metrics), including the total variation distance and Wasserstein. The important point is that these bounds are close to "uniform" (Vapnik-type) bounds in some unfavorable settings, but can also improve on them in more favorable scenarios.
The reviewers agreed that the results are novel and that the paper is technically sound. These bounds really extend the framework of PAC-Bayes bounds (for example, they are not necessarily vacuous when the posterior is not absolutely continuous with respect to the prior), and the fact that they recover uniform bounds in the worst case is also nice. All the reviewers recommended to accept the paper, and I agree with them.
The reviewers pointed out a few missing references; I will ask the authors to include them in the paper as promised during the discussion. I will add the following references that were not mentioned by the reviewers, and leave it to the authors to decide whether to include them:
- Alquier and Guedj (2018) actually provided PAC-Bayes bounds based on f-divergences, which were then improved by Ohnishi and Honorio (2021) (especially the dependence with respect to the confidence level).
- there were a few attempts to replace the KL by the Wasserstein distance. The benefits were not as clear as in the present paper, but the authors might want to comment on the paper https://hal.archives-ouvertes.fr/hal-03262687/ or Lopez & Jog (2018) on MI bounds... | train | [
"L2JsFhWEYp",
"ef3C3jbGR-S",
"M2HfZu7558y",
"oxGkYPRjgnb",
"7zqpJj3m0X",
"WeAv9RcQYWE",
"OWbbtrlFIuP",
"NSQK9zvKZpf",
"9JVNZrFFCe5",
"UKbN7AgGzsg",
"5cpNgvl5hD7",
"6M07vjgXXdS",
"cqlieDjIK27"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their response. I keep my initial recommendation of accepting the paper, also in light of the correction of the proof of Proposition 4.",
" I thank the reviewer for their time.\n\nTo me, there is no major point left motivating the reject of the paper, I then move my score to 6.",
" We ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"OWbbtrlFIuP",
"M2HfZu7558y",
"oxGkYPRjgnb",
"7zqpJj3m0X",
"cqlieDjIK27",
"6M07vjgXXdS",
"5cpNgvl5hD7",
"UKbN7AgGzsg",
"nips_2022_H547BtAyOJ4",
"nips_2022_H547BtAyOJ4",
"nips_2022_H547BtAyOJ4",
"nips_2022_H547BtAyOJ4",
"nips_2022_H547BtAyOJ4"
] |
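For readers skimming the row above, the integral probability metric being instantiated has the standard textbook form (this is the classical definition, not a restatement of the paper's bound):

```latex
\[
  d_{\mathcal{F}}(\rho, \pi)
    \;=\; \sup_{f \in \mathcal{F}}
      \bigl|\, \mathbb{E}_{h \sim \rho} f(h) - \mathbb{E}_{h \sim \pi} f(h) \,\bigr|.
\]
```

Taking $\mathcal{F}$ to be the functions bounded by 1 in sup norm makes $d_{\mathcal{F}}$ twice the total variation distance, while taking $\mathcal{F}$ to be the 1-Lipschitz functions recovers the 1-Wasserstein distance via Kantorovich-Rubinstein duality; these are the two instances the abstract mentions.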
nips_2022_atd4X6U1jT | Human-AI Collaborative Bayesian Optimisation | Human-AI collaboration looks at harnessing the complementary strengths of both humans and AI. We propose a new method for human-AI collaboration in Bayesian optimisation where the optimum is mainly pursued by the Bayesian optimisation algorithm following complex computation, whilst getting occasional help from the accompanying expert having a deeper knowledge of the underlying physical phenomenon. We expect experts to have some understanding of the correlation structures of the experimental system, but not the location of the optimum. The expert provides feedback by either changing the current recommendation or providing her belief on the good and bad regions of the search space based on the current observations. Our proposed method takes such feedback to build a model that aligns with the expert’s model and then uses it for optimisation. We provide theoretical underpinning on why such an approach may be more efficient than the one without expert’s feedback. The empirical results show the robustness and superiority of our method with promising efficiency gains. | Accept | Reviewers found the contribution of introducing human feedback into Bayesian optimization novel, interesting, and sound. (It is worth noting that the author rebuttal was crucial in assuaging several reviewer concerns and confusions.) A more extensive human study would significantly strengthen the work, but that said, the paper should be an interesting contribution to NeurIPS as is. | train | [
"QaL0Qcd8ljX",
"QIBGRjzGKA3",
"h6-F5NSV3MY",
"f_rvdOd6Zu",
"xRsn16bj5-d",
"oamhGaVE_Lp",
"wgyS4Trphr6",
"V1F-NBMhIbG",
"MlCG5YDNyJG",
"AuoNWKsM6fD",
"6MpjMWffGr8",
"KtWGzYZcW-8",
"T-W94xjjA5h",
"ulbAFVkJOt"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank our reviewer for acknowledging and carefully reading through our rebuttal to understand our contributions.\n\n",
" I thank the authors for their comments. They clarified the value of their work contribution compared to the state of the art, and they provided supplementary results supporting the usefuln... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"QIBGRjzGKA3",
"T-W94xjjA5h",
"xRsn16bj5-d",
"oamhGaVE_Lp",
"V1F-NBMhIbG",
"6MpjMWffGr8",
"ulbAFVkJOt",
"ulbAFVkJOt",
"T-W94xjjA5h",
"T-W94xjjA5h",
"KtWGzYZcW-8",
"nips_2022_atd4X6U1jT",
"nips_2022_atd4X6U1jT",
"nips_2022_atd4X6U1jT"
] |
nips_2022_xqyDqMojMfC | Constrained Stochastic Nonconvex Optimization with State-dependent Markov Data | We study stochastic optimization algorithms for constrained nonconvex stochastic optimization problems with Markovian data. In particular, we focus on the case when the transition kernel of the Markov chain is state-dependent. Such stochastic optimization problems arise in various machine learning problems including strategic classification and reinforcement learning. For this problem, we study both projection-based and projection-free algorithms. In both cases, we establish that the number of calls to the stochastic first-order oracle to obtain an appropriately defined $\epsilon$-stationary point is of the order $\mathcal{O}(1/\epsilon^{2.5})$. In the projection-free setting we additionally establish that the number of calls to the linear minimization oracle is of order $\mathcal{O}(1/\epsilon^{5.5})$. We also empirically demonstrate the performance of our algorithm on the problem of strategic classification with neural networks. | Accept | The paper considers constrained smooth nonconvex problems in a stochastic setting where the data comes from a Markov chain. For this setting, the paper proposes an algorithm that converges to an $\epsilon$-stationary point with stochastic oracle complexity $1/\epsilon^{2.5}$. The paper further shows that in the settings where the projection oracle is expensive to compute (e.g., under nuclear norm constraints), the algorithm can be implemented with $\mathcal{O}(\frac{1}{\epsilon^{5.5}})$ calls to a linear minimization oracle. On the technical side, the principal contributions are in designing a novel moving-average gradient estimator suitable for Markovian data and in designing an auxiliary Markov chain based on a noise decomposition idea (similar to [AMP05]), whose iterates are close to the iterates of the original algorithm. The analysis is sufficiently general to handle both constrained and unconstrained settings.
The motivation for studying the considered problems comes from strategic classification (an example used in the numerical experiments provided in the paper) and reinforcement learning. These problems are of high interest to the ML community and the contributions of the paper seem sufficient. The results will plausibly lead to more developments in this area. | test | [
"KkjXqlUjqK",
"mfRXkzOgwh1",
"tr11vGT9BiU",
"rLe9pcxZ5lC",
"1QGOVYwbZh",
"u8Blp00xmE",
"qhZxUNADmp",
"10-wucN5bRz",
"E9UXE4QdeL0",
"WZ8DR10kD3d",
"uuZsnsDaQNc",
"MH7AJ5Q7o0f",
"3Dp422TlN3Gr",
"pTAVmomB72",
"CIgnFv7apIq",
"jH2Ky3J19C0",
"RSs3_Q2cOzBX",
"YcisVIV8NDxx",
"-7Y-0IXyYWP... | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_r... | [
" Thank you for the response. \n\nWe just wanted to make a quick clarification to regarding another incorrect claim you make now (i.e., \"claiming the reference I mentioned is state-independent.\"). \n\nPlease see https://openreview.net/forum?id=xqyDqMojMfC¬eId=Axm_fl1usja we mention \"Paper [3] ( changed to [D3... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
3
] | [
"mfRXkzOgwh1",
"uuZsnsDaQNc",
"uuZsnsDaQNc",
"qhZxUNADmp",
"u8Blp00xmE",
"-7Y-0IXyYWP",
"EFB45vGUebm8",
"E9UXE4QdeL0",
"WZ8DR10kD3d",
"kuGNNzvRlR0",
"MH7AJ5Q7o0f",
"pTAVmomB72",
"RSs3_Q2cOzBX",
"RSs3_Q2cOzBX",
"RSs3_Q2cOzBX",
"RSs3_Q2cOzBX",
"Axm_fl1usja",
"nips_2022_xqyDqMojMfC",
... |
nips_2022_WbnvmtD9N1g | Double Bubble, Toil and Trouble: Enhancing Certified Robustness through Transitivity | In response to subtle adversarial examples flipping classifications of neural network models, recent research has promoted certified robustness as a solution. There, invariance of predictions to all norm-bounded attacks is achieved through randomised smoothing of network inputs. Today's state-of-the-art certifications make optimal use of the class output scores at the input instance under test: no better radius of certification (under the $L_2$ norm) is possible given only these scores. However, it is an open question as to whether such lower bounds can be improved using local information around the instance under test. In this work, we demonstrate how today's ``optimal'' certificates can be improved by exploiting both the transitivity of certifications and the geometry of the input space, giving rise to what we term Geometrically-Informed Certified Robustness. By considering the smallest distance to points on the boundary of a set of certifications, this approach improves certifications for more than $80 \%$ of Tiny-Imagenet instances, yielding an average $5\%$ increase in the associated certification. When incorporating training time processes that enhance the certified radius, our technique shows even more promising results, with a uniform $4$ percentage point increase in the achieved certified radius. | Accept | The paper proposes a very simple idea that can improve one of the strongest robustness certificates. Most of the concerns were minor, and they were well addressed during the rebuttal phase. The reviewers were mostly happy with the current paper, though they shared a few remaining concerns that, if properly addressed, could significantly improve the camera-ready version. For instance, it would be great to see whether or not this idea can also improve the robustness certificates produced by different algorithms. The current performance gain still looks marginal, but it might be interesting if the same technique can bring in larger gains for different types of certificates. | train | [
"9PulirB0U30",
"86dZVLyKCjF",
"uT8pk1Qoi3W",
"HsAS7FunOQP",
"ecyY4L4f24Z",
"QryvFAAqg3",
"tqvDyO7FSmF",
"ld4ZTxeUS--",
"GctL2nTEcIt",
"bCGlRPMUfHa",
"S4p_eP8RKTi",
"ZavSExwsEt",
"Kwwrrm4Xhlk",
"ns7nX2M-mAv",
"aGO2Qr0A3vU",
"n9GLIZGYWyF",
"jN2L82rH3dI",
"LJ-krRdJsX"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for both their comments, and their updated score. \n\n> Relating to IBP\n\nThe reviewer is correct that we were thinking more Convex Relaxation based methods in our response (as that's what we primarily cite in the paper) rather than the Interval Bound Propagation that you mentioned, and we ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"86dZVLyKCjF",
"ZavSExwsEt",
"LJ-krRdJsX",
"ecyY4L4f24Z",
"QryvFAAqg3",
"tqvDyO7FSmF",
"GctL2nTEcIt",
"bCGlRPMUfHa",
"S4p_eP8RKTi",
"Kwwrrm4Xhlk",
"aGO2Qr0A3vU",
"LJ-krRdJsX",
"jN2L82rH3dI",
"n9GLIZGYWyF",
"nips_2022_WbnvmtD9N1g",
"nips_2022_WbnvmtD9N1g",
"nips_2022_WbnvmtD9N1g",
"... |
nips_2022_WxWO6KPg5g2 | Deep Surrogate Assisted Generation of Environments | Recent progress in reinforcement learning (RL) has started producing generally capable agents that can solve a distribution of complex environments. These agents are typically tested on fixed, human-authored environments. On the other hand, quality diversity (QD) optimization has been proven to be an effective component of environment generation algorithms, which can generate collections of high-quality environments that are diverse in the resulting agent behaviors. However, these algorithms require potentially expensive simulations of agents on newly generated environments. We propose Deep Surrogate Assisted Generation of Environments (DSAGE), a sample-efficient QD environment generation algorithm that maintains a deep surrogate model for predicting agent behaviors in new environments. Results in two benchmark domains show that DSAGE significantly outperforms existing QD environment generation algorithms in discovering collections of environments that elicit diverse behaviors of a state-of-the-art RL agent and a planning agent. Our source code and videos are available at https://dsagepaper.github.io/. | Accept | The authors describe a method of scaling the benefits of quality diversity (QD) optimization for automatic generation of RL environments, by replacing costly agent evaluation with a learnt surrogate model. They then demonstrate that this method improves agent performance in two settings: a maze environment using RL and a Mario environment using A* planning.
The reviewers agree that the proposed DSAGE algorithm is both novel and technically sound. Where they disagree is on whether QD optimization as a field of research has inherent value for RL research or the broader NeurIPS readership. To summarize each of their respective stances:
R1: "[...] I still agree this method is solid and improves previous QD optimization methods. But I didn't directly see its contribution to a broader community."
R2: "[...] I have some skepticisms about QD methods in general and prefer unsupervised methods that do not require human specifications of what constitutes an interesting environment. However, it seems unfair to hold this paper responsible for general disagreements with QD style methods and instead seems like the criteria should be whether this is an interesting or useful contribution to researchers who either work on environment design or on researchers who work on QD + RL."
R3: "[...] this is a novel problem setting for the NeurIPS community, combining level generation (from the games community) and RL. It is highly relevant in combination with new work on generalization in RL in particular, and evaluating the robustness of our agents."
I have no great insight on whether QD methods will prove to be valuable for improving agent generalization or robustness in future. But I tend to agree with R3; in my opinion NeurIPS is the right community to be exploring these questions, and this work has made meaningful and rigorous contributions to this particular line of research. I am recommending this paper for acceptance but note that this is borderline and may require reevaluation in the context of other submissions. | test | [
"1KMPoRiJlBi",
"ucy3XAljFfg",
"jY0AF3miKV",
"LzBEIcLrDR8",
"UeomrMSK9YI",
"MsqrP9r2c_3",
"8w0Iy93Y_Kr",
"mocfqAPEayU",
"E3yVXX3Fvd",
"1mjEStWWFfL",
"tndJuo6J3H",
"e1jXiy-g6F9",
"5V5lBO1_wMB",
"EWhgYLj5fYQ",
"ITkRHEbneWr",
"z1mD1e_nEu7",
"duPtYXySKq",
"RUi42govxy4",
"uaQ4AqIaP22",... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thank you for your response. I appreciate this clarification on the diverse agent behaviors. I also agree that this paper is orthogonal to PAIRED/ACCEL, which generates somewhat infeasible solutions for a trainable agent, while your method aims at generating feasible and diverse environments for a given agent.\n... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"ucy3XAljFfg",
"jY0AF3miKV",
"ITkRHEbneWr",
"UeomrMSK9YI",
"MBaQk90-XcP",
"8w0Iy93Y_Kr",
"tndJuo6J3H",
"E3yVXX3Fvd",
"duPtYXySKq",
"TBXbvWIa-FQ",
"TBXbvWIa-FQ",
"MBaQk90-XcP",
"MBaQk90-XcP",
"MBaQk90-XcP",
"uaQ4AqIaP22",
"uaQ4AqIaP22",
"wcU3JDPY72w",
"nips_2022_WxWO6KPg5g2",
"nip... |
nips_2022_MNQMy2MpbcO | Batch Multi-Fidelity Active Learning with Budget Constraints | Learning functions with high-dimensional outputs is critical in many applications, such as physical simulation and engineering design. However, collecting training examples for these applications is often costly, e.g., by running numerical solvers. The recent work (Li et al., 2022) proposes the first multi-fidelity active learning approach for high-dimensional outputs, which can acquire examples at different fidelities to reduce the cost while improving the learning performance. However, this method only queries at one pair of fidelity and input at a time, and hence has a risk of bringing in strongly correlated examples to reduce the learning efficiency. In this paper, we propose Batch Multi-Fidelity Active Learning with Budget Constraints (BMFAL-BC), which can promote the diversity of training examples to improve the benefit-cost ratio, while respecting a given budget constraint for batch queries. Hence, our method can be more practically useful. Specifically, we propose a novel batch acquisition function that measures the mutual information between a batch of multi-fidelity queries and the target function, so as to penalize highly correlated queries and encourage diversity. The optimization of the batch acquisition function is challenging in that it involves a combinatorial search over many fidelities while subject to the budget constraint. To address this challenge, we develop a weighted greedy algorithm that can sequentially identify each (fidelity, input) pair, while achieving a near $(1 - 1/e)$-approximation of the optimum. We show the advantage of our method in several computational physics and engineering applications. | Accept | The paper proposes a new batch active learning algorithm for multi-fidelity tasks. The proposal is a sequential greedy algorithm with approximation guarantees, and the authors provide extensive experimental comparison.
The reviewers generally agree that the paper should be accepted.
In the camera-ready version, the authors are strongly encouraged to include:
- additional discussions that they promised in their rebuttal
- additional experimental results with more than 3 fidelities
- a pointer to their open-sourced code and datasets | test | [
"nyiFaid7Gpx",
"ob8ObqFyqDq",
"3-bQz_SX2U2",
"_dSDwy7V7NP",
"n1584DPIUgZ",
"luvPDO1qzw",
"wbV-AgYAKnZ",
"S1mAaGcRoLF",
"oBig_25RL-Bo",
"RFh-n4AP8Pb",
"ud28yyWZX0a",
"sHscd97H45l",
"McyGURThcpT"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for the clarifications. I have no further questions.",
" Thank you for your responses to the feedback provided across all reviews, especially for the clarifications on the connections to the highlighted paper.\n\nIn view of the feedback shared by other reviewers, I am currently... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
2
] | [
"n1584DPIUgZ",
"wbV-AgYAKnZ",
"S1mAaGcRoLF",
"oBig_25RL-Bo",
"luvPDO1qzw",
"McyGURThcpT",
"sHscd97H45l",
"ud28yyWZX0a",
"RFh-n4AP8Pb",
"nips_2022_MNQMy2MpbcO",
"nips_2022_MNQMy2MpbcO",
"nips_2022_MNQMy2MpbcO",
"nips_2022_MNQMy2MpbcO"
] |
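The weighted greedy algorithm described above follows a familiar pattern: repeatedly add the affordable (fidelity, input) candidate with the best marginal-gain-to-cost ratio. A generic sketch of the pattern follows; `acquisition` is a placeholder for the paper's mutual-information criterion, and the plain ratio rule omits the extra bookkeeping that the near-$(1 - 1/e)$ guarantee requires.

```python
def weighted_greedy(candidates, costs, acquisition, budget):
    """Greedy benefit-per-cost batch selection under a budget.

    candidates: hashable items; costs: dict item -> positive cost;
    acquisition(batch) -> float, a (set-function) value of a batch.
    Stops when nothing affordable yields positive marginal gain.
    """
    batch, spent = [], 0.0
    remaining = set(candidates)
    while remaining:
        base = acquisition(batch)
        best, best_ratio = None, 0.0
        for c in remaining:
            if spent + costs[c] > budget:
                continue
            ratio = (acquisition(batch + [c]) - base) / costs[c]
            if ratio > best_ratio:
                best, best_ratio = c, ratio
        if best is None:
            break
        batch.append(best)
        spent += costs[best]
        remaining.discard(best)
    return batch
```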
nips_2022_2Ln-TWxVtf | Time-Conditioned Dances with Simplicial Complexes: Zigzag Filtration Curve based Supra-Hodge Convolution Networks for Time-series Forecasting | Graph neural networks (GNNs) offer a new powerful alternative for multivariate time series forecasting, demonstrating remarkable success in a variety of spatio-temporal applications, from urban flow monitoring systems to health care informatics to financial analytics. Yet, such GNN models predominantly capture only lower-order interactions, that is, pairwise relations among nodes, and also largely ignore intrinsic time-conditioned information on the underlying topology of multivariate time series. To address these limitations, we propose a new time-aware GNN architecture which amplifies the power of the recently emerged simplicial neural networks with a time-conditioned topological knowledge representation in the form of zigzag persistence. That is, our new approach, Zigzag Filtration Curve based Supra-Hodge Convolution Networks (ZFC-SHCN), is built upon two main components: (i) a new, highly computationally efficient zigzag persistence curve, which allows us to systematically encode time-conditioned topological information, and (ii) a new temporal multiplex graph representation module for learning higher-order network interactions. We discuss theoretical properties of the proposed time-conditioned topological knowledge representation and extensively validate the new time-aware ZFC-SHCN model in conjunction with time series forecasting on a broad range of synthetic and real-world datasets: traffic flows, COVID-19 biosurveillance, Ethereum blockchain, surface air temperature, wind energy, and vector autoregressions. Our experiments demonstrate that the ZFC-SHCN achieves state-of-the-art performance with lower computational costs. | Accept | The paper's new ideas are highly appreciated by the reviewers. The authors' enthusiastic revision of the paper is also commendable.
It is also good that the benefits are clear and that the advantages are presented both theoretically and experimentally. | train | [
"tdP8CZtCYUi",
"2ADznQv53SE",
"L_j30jJudW",
"DhNXCK_eTA2x",
"txeH0_QWlsO",
"x-X5MJugPqIT",
"d1HXHJP_Uhz",
"AYiSR8k2W1P3",
"t13gxhph6AQ",
"MGcDTrhCVxE",
"_K4Q5k0Bmj1",
"8Kqnz7JgzA",
"9OzJdGWGRoJ",
"dtVp1WgeBTg",
"uESCs7CS8W4b",
"auLDkjRq7ws",
"3Ep9-1UAD6_",
"1F565MyUmSm5",
"_sa1vi... | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",... | [
" Thanks very much for the very constructive and valuable feedback inspiring us to think on new directions and for raising the score! ",
" Thanks for the response, it answers most of my questions. I would like to revise the rating score.",
" We have added a subsection on TDA for graph learning in the related wo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
3
] | [
"2ADznQv53SE",
"auLDkjRq7ws",
"ESZ0wZVJNSM",
"84Hmxkf1urQ",
"x-X5MJugPqIT",
"bCzXhp1JY-4",
"t13gxhph6AQ",
"MGcDTrhCVxE",
"MGcDTrhCVxE",
"ESZ0wZVJNSM",
"84Hmxkf1urQ",
"KWR4_D9ZH83",
"nips_2022_2Ln-TWxVtf",
"84Hmxkf1urQ",
"84Hmxkf1urQ",
"84Hmxkf1urQ",
"ESZ0wZVJNSM",
"ESZ0wZVJNSM",
... |
nips_2022_wN1CBFFx7JF | Theoretical analysis of deep neural networks for temporally dependent observations | Deep neural networks are powerful tools to model observations over time with non-linear patterns. Despite the widespread use of neural networks in such settings, most theoretical developments of deep neural networks are under the assumption of independent observations, and theoretical results for temporally dependent observations are scarce. To bridge this gap, we study theoretical properties of deep neural networks on modeling non-linear time series data. Specifically, non-asymptotic bounds for the prediction error of (sparse) feed-forward neural networks with ReLU activation functions are established under mixing-type assumptions. These assumptions are mild enough to include a wide range of time series models, including auto-regressive models. Compared to independent observations, the established convergence rates have additional logarithmic factors to compensate for the additional complexity due to dependence among data points. The theoretical results are supported via various numerical simulation settings as well as an application to a macroeconomic data set. | Accept | The reviewers' consensus is that this manuscript is near the borderline between acceptance and rejection, due in large part to the results being somewhat natural extensions of existing results from [25] with the appropriate extension of the bounds, as pointed out by reviewer FsYW. While this limits the overall excitement about the results, it is an important extension that would be of interest to many readers, as time-series data is far more realistic in many applications than is i.i.d. data. There is a useful discussion between reviewer KBnt and the author about some numerical experiments. The author is encouraged to consider whether it is possible to expand upon the numerical experiments in the manuscript to highlight as much as possible the difference achieved between the dependent entries in the time series and those in the i.i.d. setting. | train | [
"hvNj2wwmNW7",
"HTSbuR44dow",
"yzi6Aa3otmX",
"Twc8JrmRm9",
"OnqvESM1qO",
"wbIxMyekI4U",
"R6pTnqSQiBJ",
"u-XoDTTBUnX",
"TCubXUq0dVw",
"-btcPVdEFnK"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The difference in performances may be a finite sample issue which would vanish for diverging n. Also, it may come from the form of regression model. For AR(p) linear model, the last p observations contain all the information to make one-step prediction. While for AR(p) nonlinear model, Wold decomposition tells us... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4
] | [
"HTSbuR44dow",
"yzi6Aa3otmX",
"Twc8JrmRm9",
"R6pTnqSQiBJ",
"-btcPVdEFnK",
"TCubXUq0dVw",
"u-XoDTTBUnX",
"nips_2022_wN1CBFFx7JF",
"nips_2022_wN1CBFFx7JF",
"nips_2022_wN1CBFFx7JF"
] |
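The row above studies FNN prediction error on mixing, possibly nonlinear, autoregressive data. The snippet below generates a toy nonlinear AR(p) series and windows it into (lags, target) pairs, the supervised format in which such prediction-error bounds are stated; the particular nonlinearity, coefficients, and noise scale are arbitrary choices for illustration.

```python
import numpy as np

def make_nonlinear_ar_dataset(n=2000, p=3, burn_in=200, seed=0):
    """Simulate x_t = g(x_{t-1}, ..., x_{t-p}) + noise for a bounded
    nonlinear g (chosen arbitrarily here), drop a burn-in so the chain
    is near stationarity, and window into (lags, target) pairs.
    """
    rng = np.random.default_rng(seed)
    coef = np.linspace(0.5, 0.1, p)
    x = np.zeros(n + burn_in + p)
    for t in range(p, len(x)):
        x[t] = 0.6 * np.tanh(x[t - p:t] @ coef) + 0.1 * rng.standard_normal()
    x = x[burn_in:]
    X = np.stack([x[t - p:t] for t in range(p, len(x))])  # (n, p) lag windows
    y = x[p:]                                             # (n,) targets
    return X, y
```

Fitting a feed-forward ReLU network to (X, y) pairs generated this way, versus shuffled i.i.d. pairs, is one way to probe empirically the dependent-versus-independent gap that the theory predicts.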