paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2022_fdyxLGHE6bU | Active Learning with Safety Constraints | Active learning methods have shown great promise in reducing the number of samples necessary for learning. As automated learning systems are adopted into real-time, real-world decision-making pipelines, it is increasingly important that such algorithms are designed with safety in mind. In this work we investigate the complexity of learning the best safe decision in interactive environments. We reduce this problem to a safe linear bandits problem, where our goal is to find the best arm satisfying certain (unknown) safety constraints. We propose an adaptive experimental design-based algorithm, which we show efficiently trades off between the difficulty of showing an arm is unsafe vs suboptimal. To our knowledge, our results are the first on best-arm identification in linear bandits with safety constraints. In practice, we demonstrate that this approach performs well on synthetic and real world datasets. | Accept | 3 reviewers evaluated the submitted paper, 2 recommending rejection (1x reject, 1x borderline reject), 1 borderline acceptance. There was some interaction between authors and reviewers and the reviewers considered the authors' responses. Unfortunately, the reviewers' expertise was more on the active learning side and less on the bandit side -- in that regard, the paper title is rather uninformative about what to expect from the paper. Therefore, I decided to read the paper closely myself.
--
Based on my own reading, I would argue that the paper makes some valuable contributions to the problem of solving safety-constrained bandits. The considered problem setting in which constraint violations are possible during exploration is relevant as motivated by the authors by several convincing examples.
The theoretical and algorithmic contributions (assuming the correctness of the proofs which I didn't check in large detail) are interesting, providing some first solid insights into a novel problem setting, and providing the first algorithm to solve the considered problem setting with guarantees.
The coverage of related work is mainly fine but, as also noted by the authors, is missing the very related paper by Lindner et al. (ICML'22) which investigates a similar but simpler problem setting.
The empirical evaluation is not very exhaustive but is in line with typical bandit papers. My only real criticism in that regard is that the chosen baseline is unnecessarily weak in that it estimates all safety gaps up to some desired precision while it would be sufficient to estimate those only until one can be sufficiently certain that an arm is not feasible. One can also think of other sensible baselines, e.g., a greedy policy that focuses on the most promising-looking arms until they are shown to be unsafe, if they are (of course one can then construct examples where such a strategy will fail, but for non-constructed examples such a strategy might work reasonably well).
The write-up is mainly clear and easy to follow if one is familiar with general bandit literature. Nevertheless, the title of the paper is rather uninformative and might trigger incorrect expectations.
Overall, as already said above, I think the paper would be a valuable addition to the existing literature.
--
Taking into account my own reading of the paper, as well as the 3 submitted reviews, and the authors' feedback, I am recommending acceptance of the paper. The main concerns of the critical reviews were regarding the relevance of the problem setting, violation of constraints during exploration, and the evaluation, in particular the considered baselines. While the raised concerns are valid, I think the authors sufficiently justify their problem setting and the empirical evaluation is also sufficient (although it could clearly be extended; see some suggestions in my review and also below). I see the paper's main contributions on the theoretical side - and in that regard, there are some novel valuable insights.
When preparing the camera-ready version of the paper, please consider the following suggestions for improving the paper:
* Adjust the discussion of related work: in particular, include Lindner et al. in the revised paper
* Consider the addition of further baselines to provide a better sense of the proposed algorithm's performance
* There are some minor issues with notation, e.g., line 55/56 should be an argmax. So please carefully revise your paper in that regard.
* Consider adjusting the title of your paper to better match it to the content
* Carefully consider all reviews, in particular also those recommending rejection, when preparing the camera-ready version and adjust your paper accordingly; For instance, I suggest implementing some of the authors' responses that came up in the discussion to emphasize the importance and practical relevance of the studied problem better. | train | [
"sIaP8C3Jo-_",
"3e3Bzy8LNRB",
"kAoOxzx_0Uw",
"5ZhBR-DBzER",
"vu8osn8edp1",
"d3JCOpTOVRn",
"MFtcqWSfhiD",
"nnHWGwxXh89",
"HON1mEIqEo5",
"3GIJGcoJIA"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their response to my review.\n\nUnfortunately, I cannot make an educated guess about the depth of the contribution of the paper and how far it is going beyond state of the art mentioned in the paper. But I also see that this is a shared issue among the other reviews. That is ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
2
] | [
"d3JCOpTOVRn",
"kAoOxzx_0Uw",
"vu8osn8edp1",
"3GIJGcoJIA",
"HON1mEIqEo5",
"nnHWGwxXh89",
"nips_2022_fdyxLGHE6bU",
"nips_2022_fdyxLGHE6bU",
"nips_2022_fdyxLGHE6bU",
"nips_2022_fdyxLGHE6bU"
] |
nips_2022_GNHyNOR8Sn | A Boosting Approach to Reinforcement Learning | Reducing reinforcement learning to supervised learning is a well-studied and effective approach that leverages the benefits of compact function approximation to deal with large-scale Markov decision processes. Independently, the boosting methodology (e.g. AdaBoost) has proven to be indispensable in designing efficient and accurate classification algorithms by combining rough and inaccurate rules-of-thumb.
In this paper, we take a further step: we reduce reinforcement learning to a sequence of weak learning problems. Since weak learners perform only marginally better than random guesses, such subroutines constitute a weaker assumption than the availability of an accurate supervised learning oracle. We prove that the sample complexity and running time bounds of the proposed method do not explicitly depend on the number of states.
While existing results on boosting operate on convex losses, the value function over policies is non-convex. We show how to use a non-convex variant of the Frank-Wolfe method for boosting, that additionally improves upon the known sample complexity and running time bounds even for reductions to supervised learning. | Accept | This paper introduces a boosting approach for RL. RL is reduced to a supervised learning problem. The proposed method implements the Frank-Wolfe algorithm, where the objective, the value function, is non-convex. The weak learners are used to approximately (and efficiently) solve the problem. The paper gives a sample complexity bound for finding a near-optimal policy under the weak learning assumption.
This paper provides an interesting novel approach for RL that is certainly of interest to the NeurIPS community and as such is a valuable contribution to the conference. | test | [
"YZyz_MUEgTK",
"VpNdQ1VIYOV",
"z4FRvr_BBg8",
"jesFH8G5zQ3",
"GtTDFrPipa_"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. \n\nIt clarified some uncertainties, and I have updated my review accordingly.",
" We thank the reviewer for their careful reading of our paper, and will incorporate the suggestions made in the final version.\n\nWe remark that the use of FW to overcome the non-convexity of the value... | [
-1,
-1,
-1,
8,
6
] | [
-1,
-1,
-1,
4,
3
] | [
"z4FRvr_BBg8",
"GtTDFrPipa_",
"jesFH8G5zQ3",
"nips_2022_GNHyNOR8Sn",
"nips_2022_GNHyNOR8Sn"
] |
nips_2022_TIXwBZB3Jl6 | VaiPhy: a Variational Inference Based Algorithm for Phylogeny | Phylogenetics is a classical methodology in computational biology that today has become highly relevant for medical investigation of single-cell data, e.g., in the context of development of cancer. The exponential size of the tree space is unfortunately a formidable obstacle for current Bayesian phylogenetic inference using Markov chain Monte Carlo based methods since these rely on local operations. And although more recent variational inference (VI) based methods offer speed improvements, they rely on expensive auto-differentiation operations for learning the variational parameters. We propose VaiPhy, a remarkably fast VI based algorithm for approximate posterior inference in an \textit{augmented tree space}. VaiPhy produces marginal log-likelihood estimates on par with the state-of-the-art methods on real data, and is considerably faster since it does not require auto-differentiation. Instead, VaiPhy combines coordinate ascent update equations with two novel sampling schemes: (i) \textit{SLANTIS}, a proposal distribution for tree topologies in the augmented tree space, and (ii) the \textit{JC sampler}, the, to the best of our knowledge, first ever scheme for sampling branch lengths directly from the popular Jukes-Cantor model. We compare VaiPhy in terms of density estimation and runtime. Additionally, we evaluate the reproducibility of the baselines. We provide our code on GitHub: \url{https://github.com/Lagergren-Lab/VaiPhy}. | Accept | The authors propose a novel variational inference method for phylogenetic inference with a specific focus towards improving the speed of variational inference. The reviewers agree that the paper makes significant contributions to an important problem. The authors are encouraged to take the reviewer feedback carefully in preparing the next version of the manuscript. | train | [
"I29ActzJV-d",
"nSybZWqNtl_F",
"0acmV8m8xGk",
"GZOCJGKRrVc",
"731EgF0GeVG",
"c8zhO0VDqH",
"BWWAK-Ntnq",
"2MIw8ozMH35",
"s6-5WzGa_po",
"PU9NJ1BbSzs",
"F53-8uER4NV",
"Pa23lg4BYHX",
"lAALlBpw9dI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their thorough response. The authors addressed my questions and comments. In light of the authors's answers to my review and to the other reviews, I am in favor of accepting the paper at NeurIPS and increase the grade to 6. ",
" I thank the authors for their thorough response.\n\nThe aut... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"731EgF0GeVG",
"2MIw8ozMH35",
"PU9NJ1BbSzs",
"PU9NJ1BbSzs",
"Pa23lg4BYHX",
"Pa23lg4BYHX",
"lAALlBpw9dI",
"F53-8uER4NV",
"nips_2022_TIXwBZB3Jl6",
"nips_2022_TIXwBZB3Jl6",
"nips_2022_TIXwBZB3Jl6",
"nips_2022_TIXwBZB3Jl6",
"nips_2022_TIXwBZB3Jl6"
] |
nips_2022_NnuYZ1el24C | Curious Exploration via Structured World Models Yields Zero-Shot Object Manipulation | It has been a long-standing dream to design artificial agents that explore their environment efficiently via intrinsic motivation, similar to how children perform curious free play. Despite recent advances in intrinsically motivated reinforcement learning (RL), sample-efficient exploration in object manipulation scenarios remains a significant challenge as most of the relevant information lies in the sparse agent-object and object-object interactions. In this paper, we propose to use structured world models to incorporate relational inductive biases in the control loop to achieve sample-efficient and interaction-rich exploration in compositional multi-object environments. By planning for future novelty inside structured world models, our method generates free-play behavior that starts to interact with objects early on and develops more complex behavior over time. Instead of using models only to compute intrinsic rewards, as commonly done, our method showcases that the self-reinforcing cycle between good models and good exploration also opens up another avenue: zero-shot generalization to downstream tasks via model-based planning. After the entirely intrinsic task-agnostic exploration phase, our method solves challenging downstream tasks such as stacking, flipping, pick & place, and throwing that generalizes to unseen numbers and arrangements of objects without any additional training. | Accept | Reviewer zPKR summarizes the paper well: "This paper presents an agent that learns a world model through exploratory play, which it then deploys to solve downstream tasks via a planner. The world model is an ensemble of GNNs, where nodes represent objects, which introduces an object-centric inductive bias.
Effective playful exploration is achieved by using disagreement among the ensemble as a proxy for uncertainty, which allows the agent to select actions whose outcomes it cannot yet reliably predict. The authors show that this method works better than baselines, in the sense that it quickly starts engaging in “interesting” interactions (with objects)."
All reviewers unanimously vote to accept the paper. The limitations of the work are well summarized in the comment: "The most significant limitation of this paper is the limited application in the real-world setting as the proprioceptive states instead of image observations used in the paper acts as an important constraint. Besides, the heaviness in using an ensemble of GNNs may be a limitation."
The benefits of the method outweigh the limitations. I hope the authors work on extending their work to operate on sensory observations in the future. | train | [
"apPaI9QMe6o",
"1rO0HUakJbX",
"SKmXOSMGYy",
"qbCPeNFw6w",
"EhOujUo8mer",
"RCkYlYxvuWC",
"zbpOVPh5pXr",
"Uxvdh1KOHan",
"Wwf4CJ5Mzc9",
"VzM69VGyPKb"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarification.\n\nThis was a nice paper and my score remains very positive.",
" Many thanks to the authors for their response, and especially for clarifying the white noise issue. I remain impressed by the paper, and I'm now more confident in that assessment, so I have increased my confidence acc... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"zbpOVPh5pXr",
"RCkYlYxvuWC",
"VzM69VGyPKb",
"nips_2022_NnuYZ1el24C",
"VzM69VGyPKb",
"Wwf4CJ5Mzc9",
"Uxvdh1KOHan",
"nips_2022_NnuYZ1el24C",
"nips_2022_NnuYZ1el24C",
"nips_2022_NnuYZ1el24C"
] |
nips_2022_vjKIKdXijK | Convexity Certificates from Hessians | The Hessian of a differentiable convex function is positive semidefinite. Therefore, checking the Hessian of a given function is a natural approach to certify convexity. However, implementing this approach is not straightforward, since it requires a representation of the Hessian that allows its analysis. Here, we implement this approach for a class of functions that is rich enough to support classical machine learning. For this class of functions, it was recently shown how to compute computational graphs of their Hessians. We show how to check these graphs for positive-semidefiniteness. We compare our implementation of the Hessian approach with the well-established disciplined convex programming (DCP) approach and prove that the Hessian approach is at least as powerful as the DCP approach for differentiable functions. Furthermore, we show for a state-of-the-art implementation of the DCP approach that the Hessian approach is actually more powerful, that is, it can certify the convexity of a larger class of differentiable functions. | Accept | This was a tough call. Two reviewers were only slightly positive about the paper, and one of them did not seem to understand it very much, while a third reviewer, who is an expert, really liked the paper, and so I tend to accept. | train | [
"iRmP5ljeMop",
"Jaq7vjZC0LO",
"SeG4na6Xtvz",
"l49GwQrCVuJ",
"kLS8noc8DE0",
"_L69MejIaXX",
"BDeOXpOT9ev",
"BV-o1tA6pDJ",
"mP-10cDr_9i",
"s4uPQbDavVA",
"MrjrWQxgI5e",
"N5uHm5rlzc_"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer yPQf,\n\ndid the mathematical proof from the book [1, chapter 4] convince you that computing the Hessian DAG is very efficient and can be done in linear time and space? If not, please let us know. Thank you!",
" Thank you for bringing up your concerns about the efficiency of the Hessian computatio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
1,
5
] | [
"Jaq7vjZC0LO",
"SeG4na6Xtvz",
"s4uPQbDavVA",
"kLS8noc8DE0",
"BV-o1tA6pDJ",
"BV-o1tA6pDJ",
"s4uPQbDavVA",
"N5uHm5rlzc_",
"MrjrWQxgI5e",
"nips_2022_vjKIKdXijK",
"nips_2022_vjKIKdXijK",
"nips_2022_vjKIKdXijK"
] |
nips_2022_lYHUY4H7fs | Envy-free Policy Teaching to Multiple Agents | We study envy-free policy teaching. A number of agents independently explore a common Markov decision process (MDP), but each with their own reward function and discounting rate. A teacher wants to teach a target policy to this diverse group of agents, by means of modifying the agents' reward functions: providing additional bonuses to certain actions, or penalizing them. When personalized reward modification programs are used, an important question is how to design the programs so that the agents think they are treated fairly. We adopt the notion of envy-freeness (EF) from the literature on fair division to formalize this problem and investigate several fundamental questions about the existence of EF solutions in our setting, the computation of cost-minimizing solutions, as well as the price of fairness (PoF), which measures the increase of cost due to the consideration of fairness. We show that 1) an EF solution may not exist if penalties are not allowed in the modifications, but otherwise always exists. 2) Computing a cost-minimizing EF solution can be formulated as convex optimization and hence solved efficiently. 3) The PoF increases but at most quadratically with the geometric sum of the discount factor, and at most linearly with the size of the MDP and the number of agents involved; we present tight asymptotic bounds on the PoF. These results indicate that fairness can be incorporated in multi-agent teaching without significant computational or PoF burdens. | Accept | I agree with a reviewer that one of the existence results is a bit trivial: it
simply says that if you can change payoffs arbitrarily then sure, you can cause
anything to happen because you can just penalize everything outside the policy
by infinity. The same can be said for Theorem 4.3. As that same reviewer points
out, the results in Section 5 are simply pointing out linearity.
From a technical perspective, the non-trivial results are Theorem 4.2 and the
price of fairness bounds; that said, I would say the results are a bit thin.
Moreover, the applications are also fairly weak: the authors failed to argue
convincingly either in the rebuttal or the paper that there are really credible
applications of this.
Thus, the argument for acceptance rests primarily on the following: the model is
interesting, and it is a fairly clean and easy to understand problem. It is
possible that this paper would spur interesting follow-up work.
| train | [
"jWFmB-UqxpF",
"wX36RJPBl3",
"B1afJqANK0R",
"QpRiBkP3e0k",
"T9NX4dakLFo",
"kXPqojr46p",
"DW7fbkrA3b8",
"Xh-fu5sH2c0P",
"0xe-R1AcRNU",
"V1NDAelnCEVJ",
"lNtHKM3ZGmy",
"JeJNA_Vnfs6",
"H_SUFQ5vtW",
"9etmZVtVbhy"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your further feedback and useful suggestions!\n\nOne concrete example, as we also replied to Reviewer sVfc, is as follows.\n\nTake language teaching as an example. It can be modeled as an MDP where a state represents a student's overall skill and is encoded as the student's performance on different ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"B1afJqANK0R",
"QpRiBkP3e0k",
"0xe-R1AcRNU",
"DW7fbkrA3b8",
"kXPqojr46p",
"Xh-fu5sH2c0P",
"9etmZVtVbhy",
"H_SUFQ5vtW",
"JeJNA_Vnfs6",
"lNtHKM3ZGmy",
"nips_2022_lYHUY4H7fs",
"nips_2022_lYHUY4H7fs",
"nips_2022_lYHUY4H7fs",
"nips_2022_lYHUY4H7fs"
] |
nips_2022_tJBYkwVDv5 | Finite Sample Analysis Of Dynamic Regression Parameter Learning | We consider the dynamic linear regression problem, where the predictor vector may vary with time. This problem can be modeled as a linear dynamical system, with non-constant observation operator, where the parameters that need to be learned are the variance of both the process noise and the observation noise. While variance estimation for dynamic regression is a natural problem, with a variety of applications, existing approaches to this problem either lack guarantees altogether, or only have asymptotic guarantees without explicit rates. In particular, existing literature does not provide any clues to the following fundamental question: In terms of data characteristics, what does the convergence rate depend on? In this paper we study the global system operator -- the operator that maps the noise vectors to the output. We obtain estimates on its spectrum, and as a result derive the first known variance estimators with finite sample complexity guarantees. The proposed bounds depend on the shape of a certain spectrum related to the system operator, and thus provide the first known explicit geometric parameter of the data that can be used to bound estimation errors. In addition, the results hold for arbitrary sub Gaussian distributions of noise terms. We evaluate the approach on synthetic and real-world benchmarks. | Accept | The paper considers the estimation of process and observation noise variances in a subclass of linear dynamical systems and provides algorithms with finite sample guarantees. The math is novel and the results are interesting. I am happy to recommend acceptance.
| train | [
"zsSC7AX5fK",
"EozDJ9Kt0hH",
"rQSSn-QqQQt",
"amD_itf_zOh",
"oQTrUDxtme4",
"rD5WjDE1it8f",
"nQ1L-_Kzx6C",
"MQI1AQ1PsxB",
"H7ZaSfvvwa5"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for providing a detailed comparison with the LDS line of work and clarifying the technical difficulties that arise in analyzing the system with time-varying observation operators. I stick to my score in favor of accepting this paper.",
" Thanks so much for the comment and for raising the sco... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"rD5WjDE1it8f",
"rQSSn-QqQQt",
"oQTrUDxtme4",
"H7ZaSfvvwa5",
"MQI1AQ1PsxB",
"nQ1L-_Kzx6C",
"nips_2022_tJBYkwVDv5",
"nips_2022_tJBYkwVDv5",
"nips_2022_tJBYkwVDv5"
] |
nips_2022_wGF5mreJVN | Learning to Navigate Wikipedia by Taking Random Walks | A fundamental ability of an intelligent web-based agent is seeking out and acquiring new information. Internet search engines reliably find the correct vicinity but the top results may be a few links away from the desired target. A complementary approach is navigation via hyperlinks, employing a policy that comprehends local content and selects a link that moves it closer to the target. In this paper, we show that behavioral cloning of randomly sampled trajectories is sufficient to learn an effective link selection policy. We demonstrate the approach on a graph version of Wikipedia with 38M nodes and 387M edges. The model is able to efficiently navigate between nodes 5 and 20 steps apart 96% and 92% of the time, respectively. We then use the resulting embeddings and policy in downstream fact verification and question answering tasks where, in combination with basic TF-IDF search and ranking methods, they are competitive results to the state-of-the-art methods. | Accept | This paper aims to use the random walk to learn an effective link selection policy on a graph version of Wikipedia. The agent navigates graph-structured web data to find a target via hyperlinks within articles. To navigate the web effectively, the authors first construct Wikipedia as a graph where nodes represent web pages and edges denote the hyperlinks on each page, and then apply several strategies for sampling random trajectories from the graph to find a path from the start node to the target node.
Overall, this paper is interesting in that it builds Wikipedia as a graph to navigate to the target. However, the novelty of this paper is limited. Multi-hop reasoning in knowledge graphs has achieved significant progress, and the main idea of this paper is similar to multi-hop reasoning. Besides, the authors only use an existing path-finding method, random walk, to navigate to the target. Although the authors claim a difference between their navigation and knowledge graph navigation, a better path-finding method suited to Wikipedia navigation should be explored.
Since the constructed Wikipedia graph contains natural language text, the authors
use a Transformer to learn representations for nodes and edges. Benefiting from the semantic content, the learned representations can enhance performance on downstream tasks. Such a method is a common technique in text graphs where nodes contain text information. The novel aspects lie mainly in the navigation policy network, which computes possible actions and defines the probability for action selection. Instead of using pre-trained embeddings for actions, the authors utilize learnable action embeddings.
The authors apply the proposed navigation method to Wikipedia for the fact verification task and achieve a significant boost when integrated into a simple TF-IDF scheme.
Results on small-graph navigation show that the proposed method achieves outstanding performance in terms of success rate. Besides, the results demonstrate that the learned embeddings perform better than fixed embeddings from pre-trained language models; for example, the navigation policy network with a feed-forward layer performs best in most cases. The experimental results are consistent with the assumption.
| train | [
"W35t0u-ycAw",
"hm8jhqLZuGL",
"pHADRqJ-Wc",
"5-9sHCkUjwO",
"Ji5-6H6gQJO",
"KUIvx1C2r7m",
"4J-Ym13JoOU",
"JK6Wc-6yIsO",
"qGlQr2TfQ58",
"2I2_ruR0Kqd"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for answering my questions and concerns, in particular, clarifying the main contribution of the paper. I have increased my rating.",
" Thanks again for the detailed feedback. As the discussion period is ending shortly, we wanted to check if you had a chance to look at our response. We have tri... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"4J-Ym13JoOU",
"JK6Wc-6yIsO",
"2I2_ruR0Kqd",
"qGlQr2TfQ58",
"qGlQr2TfQ58",
"JK6Wc-6yIsO",
"2I2_ruR0Kqd",
"nips_2022_wGF5mreJVN",
"nips_2022_wGF5mreJVN",
"nips_2022_wGF5mreJVN"
] |
nips_2022_aqALH2UAwQH | Efficient and Stable Fully Dynamic Facility Location | We consider the classic facility location problem in fully dynamic data streams, where elements can be both inserted and deleted. In this problem, one is interested in maintaining a stable and high quality solution throughout the data stream while using only little time per update (insertion or deletion). We study the problem and provide the first algorithm that at the same time maintains a constant approximation and incurs polylogarithmic amortized recourse per update. We complement our theoretical results with an experimental analysis showing the practical efficiency of our method. | Accept | The paper considers the facility location problem in a well-motivated fully-dynamic setting where clients can arrive and depart. The goal is to maintain a near-optimal solution and simultaneously minimize the amount of "recourse" (the number of facility openings/closings and reassignments of clients to centers). However, the proposed algorithm only works when the ratio of maximum distance / opening cost to minimum distance / opening cost is bounded polynomially in the number m of given facility locations; the paper presents an algorithm that maintains a constant-factor approximation guarantee while using an O(log m) amount of amortized recourse per change. The idea is to relax the classical greedy algorithm of Jain-Mahdian-Saberi for the static case, which repeatedly picks a cluster (a facility along with the clients assigned to it) of minimum average cost, and to show how to maintain such a solution dynamically with small recourse.
The paper was generally strongly appreciated by the reviewers. | train | [
"JZ0P5IfSpoZ",
"vuLbwdwVh8J",
"nLfC_tMgEsC",
"6luXjRLNcs1",
"5_PHVc9gEWz",
"YCo827zqNKz",
"tAfaQzvihIm",
"oocG10QxvN",
"39ywtPwmgPo",
"5J_-i22pDFN",
"mZAiURqdyV4",
"UgzXLgZomCX",
"zdZLSoVJFmW",
"M8tqGLo9cWm",
"nCJLAFQmaY"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank again the reviewer for all their invested time and their thorough review. We appreciate that the discussion helped in clarifying and alleviating their main concerns.\n\nWe will summarise the main points of the discussion in the next version of our paper.",
" I would like to thank the authors for their ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
5,
3
] | [
"vuLbwdwVh8J",
"nLfC_tMgEsC",
"5_PHVc9gEWz",
"tAfaQzvihIm",
"oocG10QxvN",
"mZAiURqdyV4",
"39ywtPwmgPo",
"nCJLAFQmaY",
"M8tqGLo9cWm",
"zdZLSoVJFmW",
"UgzXLgZomCX",
"nips_2022_aqALH2UAwQH",
"nips_2022_aqALH2UAwQH",
"nips_2022_aqALH2UAwQH",
"nips_2022_aqALH2UAwQH"
] |
nips_2022_-IHPcl1ZhF5 | RISE: Robust Individualized Decision Learning with Sensitive Variables | This paper introduces RISE, a robust individualized decision learning framework with sensitive variables, where sensitive variables are collectible data and important to the intervention decision, but their inclusion in decision making is prohibited due to reasons such as delayed availability or fairness concerns. A naive baseline is to ignore these sensitive variables in learning decision rules, leading to significant uncertainty and bias. To address this, we propose a decision learning framework to incorporate sensitive variables during offline training but not include them in the input of the learned decision rule during model deployment. Specifically, from a causal perspective, the proposed framework intends to improve the worst-case outcomes of individuals caused by sensitive variables that are unavailable at the time of decision. Unlike most existing literature that uses mean-optimal objectives, we propose a robust learning framework by finding a newly defined quantile- or infimum-optimal decision rule. The reliable performance of the proposed method is demonstrated through synthetic experiments and three real-world applications. | Accept | Reviewers are on the whole positive about this paper, after detailed responses from the authors. If this paper is accepted, many readers will be interested, and it does have the potential to be useful for real-world applications in medicine and elsewhere. Therefore, on balance, the paper does reach the standard required for NeurIPS. | test | [
"Gy-15Q_pTt",
"I0GqGrPrDZL",
"9x4Eqb2i-Cf",
"_Y7wz5T8_zR",
"eIkWQMIaLZ",
"kIuDoIKsI8",
"JM3TZdtqKNyq",
"aGs6zdcZvyo",
"-irbAFNVeGw",
"KEJRCb9EaGz",
"j1e0RuYAqOG",
"YOyLhiYA1PE",
"lvYXhjORGFF",
"86tUyAULaO9",
"Ig7mckBN6dt",
"ZVVhqhKLXPT",
"oIG8vwHCl7Y",
"mLREF8Jlg4h",
"Q6ZA-CHz042... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer Git7, Thank you for your invaluable feedback. Given the closing window of the rebuttal period, we were wondering whether our response and the revised manuscript addressed your concerns. If you have any additional comments, please let us know, we would be happy to address them. We kindly ask you to c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"j1e0RuYAqOG",
"lvYXhjORGFF",
"eIkWQMIaLZ",
"kIuDoIKsI8",
"86tUyAULaO9",
"-irbAFNVeGw",
"aGs6zdcZvyo",
"nips_2022_-IHPcl1ZhF5",
"KEJRCb9EaGz",
"GLLwezAQAJl",
"YOyLhiYA1PE",
"Q6ZA-CHz042",
"mLREF8Jlg4h",
"Ig7mckBN6dt",
"ZVVhqhKLXPT",
"oIG8vwHCl7Y",
"nips_2022_-IHPcl1ZhF5",
"nips_202... |
nips_2022_k98U0cb0Ig | Adaptive Stochastic Variance Reduction for Non-convex Finite-Sum Minimization | We propose an adaptive variance-reduction method, called AdaSpider, for minimization of $L$-smooth, non-convex functions with a finite-sum structure. In essence, AdaSpider combines an AdaGrad-inspired (Duchi et al., 2011), but a fairly distinct, adaptive step-size schedule with the recursive \textit{stochastic path integrated estimator} proposed in (Fang et al., 2018). To our knowledge, AdaSpider is the first parameter-free non-convex variance-reduction method in the sense that it does not require the knowledge of problem-dependent parameters, such as smoothness constant $L$, target accuracy $\epsilon$ or any bound on gradient norms. In doing so, we are able to compute an $\epsilon$-stationary point with $\tilde{O}\left(n + \sqrt{n}/\epsilon^2\right)$ oracle-calls, which matches the respective lower bound up to logarithmic factors. | Accept | After discussion with the authors, all reviewers are satisfied with the technical quality of the paper. Achieving smoothness adaptivity in variance reduction is a significant result. | train | [
"bnfiSrhOqOH",
"uEEw27hjh8T",
"shcRgjEHadP",
"lz93WLIVX2N",
"_QyDdPlGQL_",
"aXex8MvGH47",
"bBX4LG8lzl",
"r7BSXzPbJ8R",
"2vuGeWLHsUT",
"wiXcbAwqsBr",
"2MT29Pb-49L"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Hi, thanks for the additional details! And thanks for the experiments for increasing dataset size, this helps a bit with intuition. So I guess one reason (perhaps not the only one, as you mention) that tuning the step size is important is to make sure that the variance has the proper $n$ dependence. As you write ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"uEEw27hjh8T",
"shcRgjEHadP",
"_QyDdPlGQL_",
"nips_2022_k98U0cb0Ig",
"r7BSXzPbJ8R",
"2MT29Pb-49L",
"wiXcbAwqsBr",
"2vuGeWLHsUT",
"nips_2022_k98U0cb0Ig",
"nips_2022_k98U0cb0Ig",
"nips_2022_k98U0cb0Ig"
] |
nips_2022_i3ewAfTbCxJ | Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations | Data augmentation is commonly applied to improve performance of deep learning by enforcing the knowledge that certain transformations on the input preserve the output. Currently, the data augmentation parameters are chosen by human effort and costly cross-validation, which makes it cumbersome to apply to new datasets. We develop a convenient gradient-based method for selecting the data augmentation without validation data during training of a deep neural network. Our approach relies on phrasing data augmentation as an invariance in the prior distribution on the functions of a neural network, which allows us to learn it using Bayesian model selection. This has been shown to work in Gaussian processes, but not yet for deep neural networks. We propose a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective, which can be optimised without human supervision or validation data. We show that our method can successfully recover invariances present in the data, and that this improves generalisation and data efficiency on image datasets. | Accept | After reading the submission and its reviews, my understanding is that the submission proposes to use tractable Laplace approximations (General Gauss-Newton and K-FAC) to learn suitable invariances/augmentations for the considered neural network. The derivations of this paper and the description of the method are clear and well motivated. While the experiments illustrate the concept and the soundness of the method, they remain small scale and are limited to sets of parameterized augmentations. Nonetheless, I recommend this submission for acceptance. | train | [
"H6gtUwVN-93",
"uELGhFKp8Pt",
"MNX5xVRIRvd",
"7zYhI_BN5H7",
"QRf6wWI4Nh6",
"0bu15CRxr4C",
"8YsA-g2gycJ",
"j4qpPSoeuob",
"TD7pbBhQoF"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the thoughtful reply. I have updated my review.",
" Dear reviewer Dhxp, \n\nWe would appreciate a timely response to our rebuttal so that we can still address follow-up questions or concerns you might have.\n\nSincerely, \n\nAuthors",
" We would like to thank all reviewers for their time and eff... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2
] | [
"uELGhFKp8Pt",
"0bu15CRxr4C",
"nips_2022_i3ewAfTbCxJ",
"TD7pbBhQoF",
"j4qpPSoeuob",
"8YsA-g2gycJ",
"nips_2022_i3ewAfTbCxJ",
"nips_2022_i3ewAfTbCxJ",
"nips_2022_i3ewAfTbCxJ"
] |
nips_2022_1beC9_dmOQ0 | Jump Self-attention: Capturing High-order Statistics in Transformers | The recent success of Transformer has benefited many real-world applications, with its capability of building long dependency through pairwise dot-products. However, the strong assumption that elements are directly attentive to each other limits the performance of tasks with high-order dependencies such as natural language understanding and Image captioning. To solve such problems, we are the first to define the Jump Self-attention (JAT) to build Transformers. Inspired by the pieces moving of English Draughts, we introduce the spectral convolutional technique to calculate JAT on the dot-product feature map. This technique allows JAT's propagation in each self-attention head and is interchangeable with the canonical self-attention. We further develop the higher-order variants under the multi-hop assumption to increase the generality. Moreover, the proposed architecture is compatible with the pre-trained models. With extensive experiments, we empirically show that our methods significantly increase the performance on ten different tasks. | Accept | The paper presents a novel architecture named jump self-attention to capture the high-order statistics in transformers. Specifically, the model builds a GCN layer on top of the attention layer, based on the attention scores. The reviews are generally positive. The major concerns are around the significance of the improvement compared to the original softmax. The authors may want to improve this part more in the final version. | val | [
"ydlDDG4x6a",
"nCTBZkg8HTD",
"wxfHUUxfW9",
"3IQn7wkRkSz",
"WtiSPubjSXa",
"2yJSDlGudzl",
"O-xMqIyxD2",
"d2q5uY8CEv-z",
"084ZhluZDj",
"yoy6O6O2HX",
"jndovPlbWRK",
"-wXq81KGtL",
"I1Efjk429GR",
"dcfMmGdC1rv",
"So8kgrIPMxE",
"Rzzly-hxu2L",
"P6u3ZbXDdSJ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your kind feedback.\nPlease feel free to let us know if you have any questions or further suggestions.\n\nYours,\n\nAuthors",
" I will increase my score to 6.",
" Dear reviewer MTPd,\n\nThanks for your kind feedback with score updating. We will continue to improve our final version and try to inclu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"nCTBZkg8HTD",
"WtiSPubjSXa",
"O-xMqIyxD2",
"2yJSDlGudzl",
"So8kgrIPMxE",
"d2q5uY8CEv-z",
"-wXq81KGtL",
"P6u3ZbXDdSJ",
"Rzzly-hxu2L",
"So8kgrIPMxE",
"So8kgrIPMxE",
"dcfMmGdC1rv",
"dcfMmGdC1rv",
"nips_2022_1beC9_dmOQ0",
"nips_2022_1beC9_dmOQ0",
"nips_2022_1beC9_dmOQ0",
"nips_2022_1beC... |
nips_2022_Ryy7tVvBUk | Predictive Coding beyond Gaussian Distributions | A large amount of recent research has the far-reaching goal of finding training methods for deep neural networks that can serve as alternatives to backpropagation~(BP). A prominent example is predictive coding (PC), which is a neuroscience-inspired method that performs inference on hierarchical Gaussian generative models. These methods, however, fail to keep up with modern neural networks, as they are unable to replicate the dynamics of complex layers and activation functions. In this work, we solve this problem by generalizing PC to arbitrary probability distributions, enabling the training of architectures, such as transformers, that are hard to approximate with only Gaussian assumptions. We perform three experimental analyses. First, we study the gap between our method and the standard formulation of PC on multiple toy examples. Second, we test the reconstruction quality on variational autoencoders, where our method reaches the same reconstruction quality as BP. Third, we show that our method allows us to train transformer networks and achieve performance comparable with BP on conditional language models. More broadly, this method allows neuroscience-inspired learning to be applied to multiple domains, since the internal distributions can be flexibly adapted to the data, tasks, and architectures used. | Accept | Motivated by advancing the applicability of backpropagation alternatives, the paper extends predictive coding to non-Gaussian distributions, so it can be used to effectively train complex architectures such as transformers.
The reviews are divided: three reviews give a score of 7 (accept) whereas one review gives a score of 4 (borderline reject). The positive reviews cite the following strengths: clearly stated motivation, technical soundness, potential for impact and convincing experiments. The negative review cites lack of clarity as the main weakness (something also mentioned in one of the positive reviews), while the reviewer is not convinced about the originality of the method given similar advances in variational inference and generative modelling.
On balance, given the potential of impact in the field of predictive coding and the technical soundness, I'm happy to recommend acceptance, even though clarity is somewhat lacking.
Some reviewers requested more details on computational complexity, memory consumption and scalability. I encourage the authors to use the extra content page in the camera-ready version to discuss these aspects further.
| train | [
"yJ7Et9MQpDX",
"_Aad02eErJu",
"SMrNxkDdICC",
"vri-K4eO6BA",
"X8P-CviD_fa",
"b3aEicaePMxR",
"fkg3BAhjUOG",
"1WnlzNlhkZ1",
"F4Kz8RWJHS",
"HMUbp0fNGE1",
"-vYMkt8Sam9",
"W0V31b69ax5r",
"F4Wsvk3dvAY",
"oOFzdV8dgUO",
"RJLkZnqgPRq",
"owe7dLlRPbL",
"_jrUBx5E7GL",
"bYx070H6EOI",
"ARdJPatb... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the response.\n\nResponse to:\n\n paragraph 1: In most of deep learning (if not all given the nature of neural networks and backpropagation), the value contained in every single neuron is interpreted as the single point mass of the posterior distribution $p(x|d)$ (where $x$ represents th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
"_Aad02eErJu",
"HMUbp0fNGE1",
"vri-K4eO6BA",
"X8P-CviD_fa",
"oOFzdV8dgUO",
"ARdJPatbxT_",
"ARdJPatbxT_",
"ARdJPatbxT_",
"ARdJPatbxT_",
"ARdJPatbxT_",
"c9-IwIVu_Dq",
"bYx070H6EOI",
"c9-IwIVu_Dq",
"bYx070H6EOI",
"_jrUBx5E7GL",
"_jrUBx5E7GL",
"nips_2022_Ryy7tVvBUk",
"nips_2022_Ryy7tVv... |
nips_2022_sWNT5lT7l9G | Constrained GPI for Zero-Shot Transfer in Reinforcement Learning | For zero-shot transfer in reinforcement learning where the reward function varies between different tasks, the successor features framework has been one of the popular approaches. However, in this framework, the transfer to new target tasks with generalized policy improvement (GPI) relies on only the source successor features [5] or additional successor features obtained from the function approximators’ generalization to novel inputs [11]. The goal of this work is to improve the transfer by more tightly bounding the value approximation errors of successor features on the new target tasks. Given a set of source tasks with their successor features, we present lower and upper bounds on the optimal values for novel task vectors that are expressible as linear combinations of source task vectors. Based on the bounds, we propose constrained GPI as a simple test-time approach that can improve transfer by constraining action-value approximation errors on new target tasks. Through experiments in the Scavenger and Reacher environment with state observations as well as the DeepMind Lab environment with visual observations, we show that the proposed constrained GPI significantly outperforms the prior GPI’s transfer performance. Our code and additional information are available at https://jaekyeom.github.io/projects/cgpi/. | Accept | The reviewers agree that the paper is a valid contribution to the line of research on successor features (SFs) and generalized policy improvement (GPI). They also agree that the paper is well written and easy to follow.
Three points that may be worth taking into account when preparing the final version of the paper, all related to the presentation:
1. The paper has two main contributions: bounds that improve upon those of Nemecek and Parr [1] and a new application of bounds of this type to detect approximation errors in universal successor features approximators (USFAs). Although these two contributions are listed at the end of Section 1, in the rest of the paper the derived bounds are always mentioned in the context of their specific use with USFAs. In particular, I believe the authors never mention that their bounds could also be applied to decide when to add new policies to a set of policies to be used with GPI, as suggested by Nemecek and Parr. In summary, the authors may want to have a presentation that clearly disentangles the two contributions of the paper.
2. Although the writing is mostly clear, I feel like the core idea of the paper is never spelled out in a concise way (this is especially important for the abstract). Given a set of vectors $\mathbf{w}_1$,$\mathbf{w}_2$, $\dots$, $\mathbf{w}_n$, and an associated space of tasks $\mathcal{W}$ composed of all possible linear combinations of the vectors $\mathbf{w}_i$, the paper derives lower and upper bounds for the value functions of all tasks in $\mathcal{W}$ in terms of the value functions of the tasks $\mathbf{w}_i$. These bounds can then be used in several ways; one novel application proposed in the paper is to detect approximation errors in USFAs. Maybe spelling this out in the abstract and introduction would help the reader to quickly understand the message of the paper.
3. It seems like the $\max$ in the upper bound (Eq. 10) will always be resolved based on the sign of $\alpha_{\mathbf{w}}$, since the first term will always be larger when $\alpha_{\mathbf{w}} > 0$ and the second term will always be larger when $\alpha_{\mathbf{w}} < 0$. This seems to be the ``trick'' used to improve upon Nemecek and Parr's bound. The authors should consider adding this comment to the paper.
I hope the constructive feedback is useful to improve your paper.
[1] Nemecek, M. Parr, R. Policy Caches with Successor Features. ICML, 2021.
| train | [
"6ilKNVf-vHV",
"MSWjVB_r8Y",
"nSdQPziS6_J",
"CuuuRRyQq_7",
"rEjF8LhBvRTO",
"Fn5yMu8wkAG",
"MxoeuYx2Bf",
"ZAVa-mK1WX",
"7WZdppvCGRl",
"KU6fLCzDg-u",
"ypyWcNzy4AF",
"nA5z5VOll00"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the author's preliminary results on fetch tasks and I hope that they expand on this initial result in the camera-ready version of the paper. I am increasing my score to 6.",
" Thank you for the reply.\n\nFollowing your suggestion, we conducted additional experiments in the Fetch environment [1] fro... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"MSWjVB_r8Y",
"nSdQPziS6_J",
"7WZdppvCGRl",
"ZAVa-mK1WX",
"Fn5yMu8wkAG",
"MxoeuYx2Bf",
"nA5z5VOll00",
"ypyWcNzy4AF",
"KU6fLCzDg-u",
"nips_2022_sWNT5lT7l9G",
"nips_2022_sWNT5lT7l9G",
"nips_2022_sWNT5lT7l9G"
] |
nips_2022_h3jZCLjhtmV | Multi-agent Dynamic Algorithm Configuration | Automated algorithm configuration relieves users from tedious, trial-and-error tuning tasks. A popular algorithm configuration tuning paradigm is dynamic algorithm configuration (DAC), in which an agent learns dynamic configuration policies across instances by reinforcement learning (RL). However, in many complex algorithms, there may exist different types of configuration hyperparameters, and such heterogeneity may bring difficulties for classic DAC which uses a single-agent RL policy. In this paper, we aim to address this issue and propose multi-agent DAC (MA-DAC), with one agent working for one type of configuration hyperparameter. MA-DAC formulates the dynamic configuration of a complex algorithm with multiple types of hyperparameters as a contextual multi-agent Markov decision process and solves it by a cooperative multi-agent RL (MARL) algorithm. To instantiate, we apply MA-DAC to a well-known optimization algorithm for multi-objective optimization problems. Experimental results show the effectiveness of MA-DAC in not only achieving superior performance compared with other configuration tuning approaches based on heuristic rules, multi-armed bandits, and single-agent RL, but also being capable of generalizing to different problem classes. Furthermore, we release the environments in this paper as a benchmark for testing MARL algorithms, with the hope of facilitating the application of MARL. | Accept | After reading the reviews, feedback and discussions, I lean towards acceptance. Reviewers are all in favour of acceptance with different levels of enthusiasm. Some reviewers found it particularly interesting that MARL methods could be applied to DAC problems, but other reviewers were not entirely convinced why MARL is more suited than single RL or even classical optimization techniques for this particular problem.
However, this could generate an interesting discussion between the DAC and MARL communities and this is the main reason why I vote for acceptance. The authors should consider adding a discussion section explaining why MARL could be fundamentally more advantageous/general than single RL methods in addition to their exhaustive experimental section. | train | [
"bJsvqoDqGVU",
"NKQJcfDaSmL",
"9dwlRBsqQrT",
"nGc6hApU_by",
"VQZQTOWPAMl",
"ZaHIltepS3x",
"XsC4DsIT2z7",
"MpqcqpGUoGG",
"H-lXEBC4OmPI",
"VhKWhnVDPmKm",
"jjbY429un4d",
"kTPZoPXnroT",
"Eovoi7vUgpt",
"uHxVjzI6Viy",
"jFNii5CfVA",
"F34ptRJ_mwS",
"SkF-pS4Tsy0",
"jIJj1HsXF5z",
"FJQWNnw4... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for the authors' detailed reply. They are valuable. However, I still have concerns, especially on the theoretical advantage of using MARL methods than other single-agent RL method on DAC problems:\n\n> When solving the task through a single-agent RL algorithm, with the growth of the number of agents, the a... | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5, 8] | [
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4] | [
"VhKWhnVDPmKm",
"H-lXEBC4OmPI",
"nGc6hApU_by",
"FJQWNnw4tmD",
"Pl53Vjo5qwd",
"SkF-pS4Tsy0",
"jFNii5CfVA",
"H-lXEBC4OmPI",
"nips_2022_h3jZCLjhtmV",
"Pl53Vjo5qwd",
"FJQWNnw4tmD",
"jIJj1HsXF5z",
"jIJj1HsXF5z",
"jIJj1HsXF5z",
"jIJj1HsXF5z",
"Pl53Vjo5qwd",
"4CJUJHh52Pr",
"nips_2022_h3jZ... |
nips_2022_fJt2KFnRqZ | An Adaptive Kernel Approach to Federated Learning of Heterogeneous Causal Effects | We propose a new causal inference framework to learn causal effects from multiple, decentralized data sources in a federated setting. We introduce an adaptive transfer algorithm that learns the similarities among the data sources by utilizing Random Fourier Features to disentangle the loss function into multiple components, each of which is associated with a data source. The data sources may have different distributions; the causal effects are independently and systematically incorporated. The proposed method estimates the similarities among the sources through transfer coefficients, and hence requiring no prior information about the similarity measures. The heterogeneous causal effects can be estimated with no sharing of the raw training data among the sources, thus minimizing the risk of privacy leak. We also provide minimax lower bounds to assess the quality of the parameters learned from the disparate sources. The proposed method is empirically shown to outperform the baselines on decentralized data sources with dissimilar distributions. | Accept | All reviewers found the problem setting, which spans federated learning and causal estimation, important and well motivated, and felt the paper made a solid technical contribution in this space. The reviewers raised questions about topics such as identifiability, the nature of the privacy guarantees, and some technical details. The author responses addressed these questions adequately; one reviewer raised their score. Overall, no major weaknesses were raised. Congratulations to the authors on a solid and well received paper.
| train | [
"r7s6rS2eHmf",
"gt7NogqDpxQP",
"8UwoQcxI1t2",
"p3Mmwx_8v",
"u3aGtUVfF7d",
"a3jO9spheOa",
"r-miQCrT4uM",
"_j59nMzD1o0",
"hsJkqhrEhbQ",
"U7gWqwhjK5W",
"Tt5bZsVw-H"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you so much for the reply and raising your score to 6.",
" Thanks for replying and addressing my concerns. I, therefore, raise my score.\n\nThe discussion on identifiability is helpful. Softing the claim and including the additional discussion can improve the paper.",
" ***(We continue our response here... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
3
] | [
"gt7NogqDpxQP",
"8UwoQcxI1t2",
"p3Mmwx_8v",
"_j59nMzD1o0",
"hsJkqhrEhbQ",
"U7gWqwhjK5W",
"Tt5bZsVw-H",
"nips_2022_fJt2KFnRqZ",
"nips_2022_fJt2KFnRqZ",
"nips_2022_fJt2KFnRqZ",
"nips_2022_fJt2KFnRqZ"
] |
nips_2022_wXNPMS11aUb | DevFly: Bio-Inspired Development of Binary Connections for Locality Preserving Sparse Codes | Neural circuits undergo developmental processes which can be influenced by experience. Here we explore a bio-inspired development process to form the connections in a network used for locality sensitive hashing. The network is a simplified model of the insect mushroom body, which has sparse connections from the input layer to a second layer of higher dimension, forming a sparse code. In previous versions of this model, connectivity between the layers is random. We investigate whether the performance of the hash, evaluated in nearest neighbour query tasks, can be improved by a process of developing the connections, in which the strongest input dimensions in successive samples are wired to each successive coding dimension. Experiments show that the accuracy of searching for nearest neighbours is improved, although performance is dependent on the parameter values and datasets used. Our approach is also much faster than alternative methods that have been proposed for training the connections in this model. Importantly, the development process does not impact connections built at an earlier stage, which should provide stable coding results for simultaneous learning in a downstream network. | Accept | This paper provides a learning algorithm to learn connections in a biologically-motivated network for locality sensitive hashing. This contribution extends previous work where such connections were randomly chosen. I recommend acceptance based on the overall sentiment of the reviews. | train | [
"we0MD9mrNv_",
"JUpLR8WxWO",
"TxCH3wODXaCt",
"FDAvrW-bjE1",
"KAobDZwB-Iqo",
"KVeaQYSOUD",
"pbj0MUJJro",
"kLJRDPDA8nI"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I hope that the final version takes this all into account and is clearer. I've revised my scores upwards.",
" My pleasure.",
" Thank you for the comments on our paper. The suggestions to revise by line number are very helpful and we would adopt all these suggestions. Please see the rebuttal revision.\n\n# Ans... | [
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"TxCH3wODXaCt",
"KAobDZwB-Iqo",
"KVeaQYSOUD",
"pbj0MUJJro",
"kLJRDPDA8nI",
"nips_2022_wXNPMS11aUb",
"nips_2022_wXNPMS11aUb",
"nips_2022_wXNPMS11aUb"
] |
nips_2022_SUzPos_pUC | Monte Carlo Tree Search based Variable Selection for High Dimensional Bayesian Optimization | Bayesian optimization (BO) is a class of popular methods for expensive black-box optimization, and has been widely applied to many scenarios. However, BO suffers from the curse of dimensionality, and scaling it to high-dimensional problems is still a challenge. In this paper, we propose a variable selection method MCTS-VS based on Monte Carlo tree search (MCTS), to iteratively select and optimize a subset of variables. That is, MCTS-VS constructs a low-dimensional subspace via MCTS and optimizes in the subspace with any BO algorithm. We give a theoretical analysis of the general variable selection method to reveal how it can work. Experiments on high-dimensional synthetic functions and real-world problems (e.g., MuJoCo locomotion tasks) show that MCTS-VS equipped with a proper BO optimizer can achieve state-of-the-art performance. | Accept | This work proposes a variable selection technique based on MCTS that can be integrated with various acquisition functions. The authors show that this can significantly speed up the optimization of acquisition functions in Bayesian optimization, and in some cases, improve the optimization performance as well. The reviewers were quite positive about the paper, but few expressed high confidence in their rating.
There were some reservations that the work is not adequately benchmarked against recent algorithms for high-dimensional BayesOpt that include aspects of feature selection (e.g., SAASBO). During the rebuttal, the authors showed that MCTS-VS methods could successfully be used in combination with SAASBO, and that this can improve wall time performance. I hope that the authors can explore this in more detail in the final version.
The empirical results could be presented in a more rigorous and transparent fashion. The authors provide compelling evidence that MCTS-VS greatly speeds up "wall clock time". Table 1 does a good enough job of communicating the runtime advantages (although it is not clear if this comes from model fitting or AF optimization time). Such speedups can be particularly helpful in higher throughput scenarios when thousands of iterations occur, or one wishes to use more compute-intensive models, such as the SAAS model.
The goal of Bayesian optimization is to efficiently perform global optimization of expensive-to-evaluate functions. The performance of MCTS-VS with respect to this goal is a little less consistent, but still promising. This can be seen in Fig 11 from the rebuttal, where there is no significant difference between most HDBO methods in terms of BayesOpt performance (Fig 3, in this respect, is somewhat out of place and might mislead readers; I would recommend eliminating such plots, since walltime vs performance plots are really only meaningful for cases like multi-fidelity BO or BO with early stopping, where wall-time refers to the expensive-to-evaluate functions).
Finally, real-world problems likely do not contain many parameters that are truly irrelevant in the same way as the synthetic problems do. Including more examples of real-world problems in the main text can highlight the practical benefits of this algorithm. I would recommend using real estate from e.g., the Levy problem, to explore other high-dimensional problems, such as feature selection type problems like the SVM benchmark in the SAASBO paper. | train | [
"wzhfo8c87za",
"bp0SxAkFa7A",
"_RBCyLmwjve",
"8Sm6kyfTER_",
"S10r29Xu24r3",
"UeY19A2BM6",
"92OOKKpB_whI",
"aOOqD59-TT-",
"nL6oaE-icv",
"Tb8dSTgqLll",
"tM_jaCqENU",
"R73CuxTtD6",
"6EfmvtAmwcc",
"6_HqYpThohp",
"TxC9HZewuiI",
"N3ry17Yx07n",
"UpUQ-IbL9cm",
"czzscehp_ju",
"WU84T7qH7Zb... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thank you very much for you detailed response, which address all my main concerns! ",
" Thank you. We will discuss the limitation of the proof appropriately in the final version.",
" Dear Area Chair aR8v, \n\nThank you for your comment. SAASBO employs sparsity-inducing function prior and No-Turn-U-Sampler (NU... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2,
3,
4
] | [
"tM_jaCqENU",
"8Sm6kyfTER_",
"S10r29Xu24r3",
"Tb8dSTgqLll",
"nips_2022_SUzPos_pUC",
"aOOqD59-TT-",
"6EfmvtAmwcc",
"FfFIYWtN5N",
"WU84T7qH7Zb",
"WU84T7qH7Zb",
"czzscehp_ju",
"UpUQ-IbL9cm",
"UpUQ-IbL9cm",
"N3ry17Yx07n",
"nips_2022_SUzPos_pUC",
"nips_2022_SUzPos_pUC",
"nips_2022_SUzPos_... |
nips_2022_SWbdhfz3lBy | Learning from Few Samples: Transformation-Invariant SVMs with Composition and Locality at Multiple Scales | Motivated by the problem of learning with small sample sizes, this paper shows how to incorporate into support-vector machines (SVMs) those properties that have made convolutional neural networks (CNNs) successful. Particularly important is the ability to incorporate domain knowledge of invariances, e.g., translational invariance of images. Kernels based on the \textit{maximum} similarity over a group of transformations are not generally positive definite. Perhaps it is for this reason that they have not been studied theoretically. We address this lacuna and show that positive definiteness indeed holds \textit{with high probability} for kernels based on the maximum similarity in the small training sample set regime of interest, and that they do yield the best results in that regime. We also show how additional properties such as their ability to incorporate local features at multiple spatial scales, e.g., as done in CNNs through max pooling, and to provide the benefits of composition through the architecture of multiple layers, can also be embedded into SVMs. We verify through experiments on widely available image sets that the resulting SVMs do provide superior accuracy in comparison to well-established deep neural network benchmarks for small sample sizes. | Accept | All reviews are quite positive.
Accept. | train | [
"nWdjaxCPhGO",
"X0_Mfb7o93c",
"B0bsK4M5Kxc",
"ph8Fw5_wmOg",
"bcKDG0lSNd3",
"vS_WIRVwXzG",
"REKhb1aAkJD",
"G0AQDGrFwY1"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" 6. *\" provide std on the experiments.\"*\n\n*Response:* We had shown error bars (standard deviation) in Figure 2, but now provide the specific numerical values in parentheses of the following tables. The tables below demonstrate that SVM-based methods enjoy a smaller standard deviation compared with DL methods ... | [
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"X0_Mfb7o93c",
"G0AQDGrFwY1",
"ph8Fw5_wmOg",
"REKhb1aAkJD",
"vS_WIRVwXzG",
"nips_2022_SWbdhfz3lBy",
"nips_2022_SWbdhfz3lBy",
"nips_2022_SWbdhfz3lBy"
] |
nips_2022_dqgzfhHd2- | Recovering Private Text in Federated Learning of Language Models | Federated learning allows distributed users to collaboratively train a model while keeping each user’s data private. Recently, a growing body of work has demonstrated that an eavesdropping attacker can effectively recover image data from gradients transmitted during federated learning. However, little progress has been made in recovering text data. In this paper, we present a novel attack method FILM for federated learning of language models (LMs). For the first time, we show the feasibility of recovering text from large batch sizes of up to 128 sentences. Unlike image-recovery methods that are optimized to match gradients, we take a distinct approach that first identifies a set of words from gradients and then directly reconstructs sentences based on beam search and a prior-based reordering strategy.
We conduct the FILM attack on several large-scale datasets and show that it can successfully reconstruct single sentences with high fidelity for large batch sizes and even multiple sentences if applied iteratively.
We evaluate three defense methods: gradient pruning, DPSGD, and a simple approach to freeze word embeddings that we propose. We show that both gradient pruning and DPSGD lead to a significant drop in utility. However, if we fine-tune a public pre-trained LM on private text without updating word embeddings, it can effectively defend the attack with minimal data utility loss. Together, we hope that our results can encourage the community to rethink the privacy concerns of LM training and its standard practices in the future. Our code is publicly available at https://github.com/Princeton-SysML/FILM . | Accept | This paper proposes a novel gradient inversion attack to recover private text in FL training under the honest-but-curious server threat model. Through extensive experimentation, the authors demonstrate that their attack achieves improved attack performance compared to prior work under the same threat model, and performed plenty of ablation studies to analyze the effect of batch size, stage of model training, hyperparameters, etc.
One major weakness is that the method requires the token embedding layer to be trained, whereas real-world applications often use pre-trained embeddings. The authors showed that their attack performance degrades significantly when the token embeddings are frozen. Other weaknesses include the limitation of the attack to autoregressive models, and the inadequate discussion regarding the honest-but-curious threat model and secure aggregation (reviewer FFKN).
After the discussion phase, most reviewers agreed that the above weaknesses are minor and that the paper’s contribution warrants acceptance. AC therefore recommends acceptance to NeurIPS, but strongly encourages the authors to revise their draft to address concerns regarding frozen embeddings—by performing more extensive experiments to demonstrate the method’s potential limitations, and by more clearly positioning their attack under the setting of an honest-but-curious server with/without secure aggregation.
| train | [
"v5dztB1lHIS",
"EkXNz6_htJx",
"3nb-jV0z4Gs",
"xscTb36xfJq",
"CbfiNWnWdg",
"lYOhfj7AKwQ",
"RaMB-gyy1X2",
"6lAqOvLuag",
"SKassUPjB2",
"tNvf66x_DCpO",
"Gd5j_-VPre",
"LWgoAb_v--U",
"jV3X6fqvG6M",
"yKG64x8RqKy",
"7eWeYIV-r7s",
"KU9J1mmM5I1",
"pMkY0hDKFeR",
"Ab0VPfZwQa",
"PU9MOie6ywKn"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",... | [
" Here, I will provide a reply to my individual comments rather than to the collective response of the authors. First of all, I would like to thank the authors for the detailed response and for addressing all of my comments and for open-sourcing their code. \n\nHowever, after the authors' elaboration, I have some c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
5
] | [
"_PWrQnio12U",
"6lAqOvLuag",
"xscTb36xfJq",
"jV3X6fqvG6M",
"ylk5hhdQZnO",
"cZZMLelw244",
"_PWrQnio12U",
"SKassUPjB2",
"tNvf66x_DCpO",
"7eWeYIV-r7s",
"pMkY0hDKFeR",
"cZZMLelw244",
"yKG64x8RqKy",
"ylk5hhdQZnO",
"KU9J1mmM5I1",
"GpeUcCQSoyr",
"_PWrQnio12U",
"PU9MOie6ywKn",
"necwm689O... |
nips_2022_KSKyVYcgp1u | Improved Coresets for Euclidean $k$-Means | Given a set of $n$ points in $d$ dimensions, the Euclidean $k$-means problem consists of finding $k$ centers such that the sum of squared distances from every point to its closest center is minimized. The arguably most popular way of dealing with this problem in the big data setting is to first compress the data by computing a weighted subset known as a coreset and then run any algorithm on this subset. The guarantee of the coreset is that for any candidate solution, the ratio between coreset cost and the cost of the original instance is less than a $(1\pm \varepsilon)$ factor. The current state of the art coreset size for Euclidean $k$-means is $\tilde O(\min(k^{2} \cdot \varepsilon^{-2},k\cdot \varepsilon^{-4}))$. This matches the lower bound of $\Omega(k \varepsilon^{-2})$ up to a $\min(k,\varepsilon^{-2})$ factor. In this paper, we improve this bound to $\tilde O(\min(k^{1.5} \cdot \varepsilon^{-2},k\cdot \varepsilon^{-4}))$. In the regime where $k \leq \varepsilon^{-2}$, this is a strict improvement over the state of the art. In particular, ours is the first provable bound that breaks through the $k^2$ barrier while retaining an optimal dependency on $\varepsilon$. | Accept | I actually really liked the paper despite the weak accept reviews - and decided to bump this up. Improving the known coreset bound for k-means is extremely important, as that is a fundamental problem. While it does indeed heavily build upon prior work and doesn't get optimal bounds, it is giving a novel analysis for a very important problem (with a huge line of prior work with worse bounds), and thus I recommend acceptance. | train | [
"miFsB8QUhsh",
"_zp4HdyLAz0",
"BC9rlBZ3VAD",
"OEo5AsIb4ev",
"rA2EnpNUVlN",
"ne9YsHrk0is",
"B0qiKyVd_iw",
"sNFrXhpnI1p5",
"a-mJ60bbfrM",
"xpqI8YhFJnpC",
"cth9FxrW3JE",
"sAvnfyV9Oj",
"8Hu0PAeIr8Y",
"OVmjILjE8Qj",
"MrjRv2gcP_B",
"hSVmUSlVmW"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" There is still value in those plots. The only thing that is unimportant is the variance in those costs. But we can remove them, if you so insist. We have other plots for the theoretical hard instance in the supplementary material with which we could replace the cost plots. \n\nAs mentioned before, varying $\\vare... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"_zp4HdyLAz0",
"OEo5AsIb4ev",
"xpqI8YhFJnpC",
"rA2EnpNUVlN",
"cth9FxrW3JE",
"hSVmUSlVmW",
"hSVmUSlVmW",
"hSVmUSlVmW",
"hSVmUSlVmW",
"MrjRv2gcP_B",
"OVmjILjE8Qj",
"8Hu0PAeIr8Y",
"nips_2022_KSKyVYcgp1u",
"nips_2022_KSKyVYcgp1u",
"nips_2022_KSKyVYcgp1u",
"nips_2022_KSKyVYcgp1u"
] |
nips_2022_7yvu4qOKtn1 | Computational Doob h-transforms for Online Filtering of Discretely Observed Diffusions | This paper is concerned with online filtering of discretely observed nonlinear diffusion processes. We propose to approximate the Fully Adapted Particle Filter algorithm by solving a single auxiliary stochastic control problem prior to the data-assimilation procedure. The methodology relies on the non-linear Feynman-Kac approach to solving semi-linear partial differential equations. Numerical experiments suggest that the proposed approach can be orders of magnitude more efficient than the bootstrap particle filter in the regime when observations are highly informative. | Reject | This is an interesting paper, but it also requires some additional work. The paper received mixed responses from the reviewers, but the main concerns were sorted out during the rebuttal and discussion. In the end, the reviewers ended up (weakly) recommending acceptance after the authors promised an extensive number of changes to the paper in the camera-ready stage. However, the promised changes appear very extensive and beyond what would be expected for a typical NeurIPS paper (list below). The authors have provided links to plots in the discussion, but have not revised the paper in OpenReview. Sorting out the issues counts as a 'major revision' and the conference review process does not recognise such a concept.
List of changes (based on rebuttals):
* Empirical comparison (& discussion) to GIRF instead of only bootstrap particle filter
* Adding a discussion section to the end of the paper
* Writing a more thorough Related work section
* A study on the impact of growing dimensionality (OU process)
* Plots and discussion on the learned dynamics (control)
* Plots and discussion on training loss
* A more intuitive explanation of the Doob's h-transform in the Background section
* Expanded Introduction to discuss applications (with references)
* Computational cost discussion (promised to add to the appendix)
* Figures illustrating Doob’s h-transform trajectories
* Discussion on how fine discretization is needed
* Discussion on parameter inference and the smoothing problem
* Discussion on irregular observation intervals | train | [
"DWikW5Y27K_",
"EkPbt4YzV61",
"e5CgtW9N1oM",
"w2KnMcNAv6V",
"W4V2wefamdj",
"ByC0okgSyaE",
"6PB68FTGA9v",
"5NHB1nLoc_6",
"wvxW0qdgdFT",
"_8_qHyDk5UPJ",
"egs5Pvla43h",
"KvCvPZIL4r8",
"zinc_3AhV5t",
"TV8-ws1hWi",
"zQJxAVrQgDg",
"7qkG2S421Qg",
"_QDzfITgTDd",
"6ASJzEyx18q",
"0BKvOKDZB... | [
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author... | [
" Thank you for taking the time to go through our rebuttal and for your kind response.",
" Here is an update. We have just gotten confirmation that the issue with the Anonymous Github server has been fixed. The links to all additional figures are now working properly. Thank you for bringing this issue to our atte... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4,
4
] | [
"e5CgtW9N1oM",
"W4V2wefamdj",
"0BKvOKDZBW2",
"6PB68FTGA9v",
"ByC0okgSyaE",
"Kv7svwMjUCm",
"zQJxAVrQgDg",
"z3-ecEfTXMP",
"z3-ecEfTXMP",
"fNvcdNKRZS",
"fNvcdNKRZS",
"fNvcdNKRZS",
"fNvcdNKRZS",
"fNvcdNKRZS",
"fNvcdNKRZS",
"3nOz-49IrCT",
"3nOz-49IrCT",
"3nOz-49IrCT",
"3nOz-49IrCT",
... |
nips_2022_pp7onaiM4VB | SPD domain-specific batch normalization to crack interpretable unsupervised domain adaptation in EEG | Electroencephalography (EEG) provides access to neuronal dynamics non-invasively with millisecond resolution, rendering it a viable method in neuroscience and healthcare. However, its utility is limited as current EEG technology does not generalize well across domains (i.e., sessions and subjects) without expensive supervised re-calibration. Contemporary methods cast this transfer learning (TL) problem as a multi-source/-target unsupervised domain adaptation (UDA) problem and address it with deep learning or shallow, Riemannian geometry aware alignment methods. Both directions have, so far, failed to consistently close the performance gap to state-of-the-art domain-specific methods based on tangent space mapping (TSM) on the symmetric, positive definite (SPD) manifold.
Here, we propose a machine learning framework that enables, for the first time, learning domain-invariant TSM models in an end-to-end fashion. To achieve this, we propose a new building block for geometric deep learning, which we denote SPD domain-specific momentum batch normalization (SPDDSMBN). A SPDDSMBN layer can transform domain-specific SPD inputs into domain-invariant SPD outputs, and can be readily applied to multi-source/-target and online UDA scenarios. In extensive experiments with 6 diverse EEG brain-computer interface (BCI) datasets, we obtain state-of-the-art performance in inter-session and -subject TL with a simple, intrinsically interpretable network architecture, which we denote TSMNet. Code: https://github.com/rkobler/TSMNet | Accept | The reviewers have now unanimously acknowledged the quality of the contribution both on the theory and experimental sides. This paper can be endorsed for publication at NeurIPS 2022. | train | [
"qPvK1F5pFi",
"Qc61TqNtU_P",
"m_rGmLho3qe",
"9RKCbMIVXn",
"YOKASuOYT_",
"wqNdSqHn9eq",
"vcplHsUVvwo",
"8CZdm--dSEB",
"xuEkwUQGWVI",
"hg-W7QQrUfB",
"vTL1G5ZIp3",
"Tutk9fXyM75",
"6L3Oe30gl7F",
"V7iVv0-nljc",
"zttNkQ06wOg",
"06hcqCfCmtp",
"7C4MYEpZpy-",
"Hd0pJCKvi7",
"qBGduR8HZU_",
... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" I thank the authors for answering all my questions.\n\nThe rebuttal has greatly improved the paper by making it much clearer.\n\nConsequently, I update the scores that I assigned:\n- presentation: 1 -> 3\n- rating: 4 -> 7\n\nHence, I now vote to accept the paper.",
" I thank the authors for their responses and ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5,
4
] | [
"IkxGHnbuG5D",
"I48YOVKzHgH",
"8CZdm--dSEB",
"8CZdm--dSEB",
"8CZdm--dSEB",
"8CZdm--dSEB",
"06hcqCfCmtp",
"IkxGHnbuG5D",
"IkxGHnbuG5D",
"IkxGHnbuG5D",
"I48YOVKzHgH",
"I48YOVKzHgH",
"I48YOVKzHgH",
"I48YOVKzHgH",
"qBGduR8HZU_",
"Hd0pJCKvi7",
"nips_2022_pp7onaiM4VB",
"nips_2022_pp7onai... |
nips_2022_9PNsCQpg-Ak | Better SGD using Second-order Momentum | We develop a new algorithm for non-convex stochastic optimization that finds an $\epsilon$-critical point in the optimal $O(\epsilon^{-3})$ stochastic gradient and Hessian-vector product computations. Our algorithm uses Hessian-vector products to "correct" a bias term in the momentum of SGD with momentum. This leads to better gradient estimates in a manner analogous to variance reduction methods. In contrast to prior work, we do not require excessively large batch sizes and are able to provide an adaptive algorithm whose convergence rate automatically improves with decreasing variance in the gradient estimates. We validate our results on a variety of large-scale deep learning architectures and benchmark tasks. | Accept | This paper develops a new efficient second-order method for non-convex optimization. Reviewers seemed to agree that the paper offers a significant improvement over the prior work of Arjevani, Yossi, et al "Second-order information in non-convex stochastic optimization: Power and limitations" by requiring a single Hessian-vector product update in the momentum update, and therefore would be of interest to the NeurIPS audience. | train | [
"owAUaBc_4i",
"AalSEn-Ya2",
"ZacjtROCjk",
"71UAK4qUk9",
"vrribSG0qEX",
"i3yx9_kjux2",
"2_TFUG2DuLP",
"fnfI_ZQR-nt",
"1ObxLJ6VdjU"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the question!\n\nWe think the main problem here is that Algorithm 1 in [1] just inherently doesn’t work without a large batch size. Algorithm 1 in [1] is based on SARAH [2] which requires the optimizer to infrequently compute a checkpoint (which is also used in SVRG). Algorithm 1 of [1]... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"AalSEn-Ya2",
"vrribSG0qEX",
"2_TFUG2DuLP",
"1ObxLJ6VdjU",
"fnfI_ZQR-nt",
"2_TFUG2DuLP",
"nips_2022_9PNsCQpg-Ak",
"nips_2022_9PNsCQpg-Ak",
"nips_2022_9PNsCQpg-Ak"
] |
nips_2022_uCXNOeL0TG | Fairness for Workers Who Pull the Arms: An Index Based Policy for Allocation of Restless Bandit Tasks | Motivated by applications such as machine repair, project monitoring, and anti-poaching patrol scheduling, we study intervention planning of stochastic processes under resource constraints. This planning problem has previously been modeled as restless multi-armed bandits (RMAB), where each arm is an intervention-dependent Markov Decision Process. However, the existing literature assumes all intervention resources belong to a single uniform pool, limiting their applicability to real-world settings where interventions are carried out by a set of workers, each with their own costs, budgets, and intervention effects. In this work, we consider a novel RMAB setting, called multi-worker restless bandits (MWRMAB) with heterogeneous workers. The goal is to plan an intervention schedule that maximizes the expected reward while satisfying budget constraints on each worker as well as fairness in terms of the load assigned to each worker. Our contributions are two-fold: (1) we provide a multi-worker extension of the Whittle index to tackle heterogeneous costs and per-worker budget and (2) we develop an index-based scheduling policy to achieve fairness. Further, we evaluate our method on various cost structures and show that our method significantly outperforms other baselines in terms of fairness without sacrificing much in reward accumulated. | Reject | This paper looks at the restless MAB (RMAB) problem through the lens of fairness, an increasingly active area over the last few years. This work generalizes fairness (over the arms that are pulled) to settings where the arms are pulled by multiple workers. This is a well-motivated setting, and one that might see application in various mobile-health-related scenarios that are popular in this space, and where the ML community has a presence. 
Still, reviewers raised many questions about theoretical bounds for cases under the purview of the settings investigated. We would appreciate a stronger rebuttal and/or stronger edits to a camera-ready or next submission for this work. | train | [
"rh6yl-Yuvw3",
"cnsrRrRZ2E",
"mWEcBwFTZV",
"qM-5aqfAGmf",
"dRZmUKbfmWk",
"VUbHSibVTUY",
"8Fa1_xp8u11"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for the time spent addressing my concerns.\n\nI think that my opinion on the paper did not change substantially.\n\nAbout my comment about the rearrangement of Figure 4, I meant that the figures and the table should be provided into two different graphical elements (for instance, using \\minip... | [
-1,
-1,
-1,
-1,
6,
3,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"qM-5aqfAGmf",
"8Fa1_xp8u11",
"VUbHSibVTUY",
"dRZmUKbfmWk",
"nips_2022_uCXNOeL0TG",
"nips_2022_uCXNOeL0TG",
"nips_2022_uCXNOeL0TG"
] |
nips_2022_ArZWGF0Ifl7 | Archimedes Meets Privacy: On Privately Estimating Quantiles in High Dimensions Under Minimal Assumptions | The last few years have seen a surge of work on high dimensional statistics under privacy constraints, mostly following two main lines of work: the "worst case" line, which does not make any distributional assumptions on the input data; and the "strong assumptions" line, which assumes that the data is generated from specific families, e.g., subgaussian distributions.
In this work we take a middle ground, obtaining new differentially private algorithms with polynomial sample complexity for estimating quantiles in high-dimensions, as well as estimating and sampling points of high Tukey depth, all working under very mild distributional assumptions.
From the technical perspective, our work relies upon fundamental robustness results in the convex geometry literature, demonstrating how such results can be used in a private context. Our main object of interest is the (convex) floating body (FB), a notion going back to Archimedes, which is a robust and well studied high-dimensional analogue of the interquantile range of a distribution. We show how one can privately, and with polynomially many samples, (a) output an approximate interior point of the FB -- e.g., "a typical user" in a high-dimensional database -- by leveraging the robustness of the Steiner point of the FB; and at the expense of polynomially many more samples, (b) produce an approximate uniform sample from the FB, by constructing a private noisy projection oracle.
| Accept | Estimating quantiles of a dataset is a fundamental problem. This is a challenging and interesting question in high dimensions. The authors use convex geometry to obtain algorithms for private estimation of quantiles. The reviewers all liked the work and the presentation.
A comment from me: In the second paragraph of the abstract, you use "relies upon deep robustness results". The word deep here is highly subjective, and while one can use it in a talk, it does not fit nicely in the abstract. | train | [
"jDrsnhxbCki",
"FdyGZ2rr5de",
"14N-FU42an5",
"WaIm6_yeHn",
"htbMrl5mIhn",
"dCHNsBt8m0_",
"vJPHnjAOe20",
"SawEq59DJco",
"gBAHr38d9y",
"RrVnMf9kuL3",
"KtIq-UFMGi",
"fiPQJml09U9",
"hDOPqQiLdu3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response, which resolves my question.",
" Thank you for the response, which has resolved my questions.",
" We are thankful for your response and revised score. We would be happy of course to elaborate more if you have any further questions or concerns.",
" Thanks for the thorough response ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
3
] | [
"SawEq59DJco",
"vJPHnjAOe20",
"WaIm6_yeHn",
"htbMrl5mIhn",
"hDOPqQiLdu3",
"fiPQJml09U9",
"KtIq-UFMGi",
"RrVnMf9kuL3",
"nips_2022_ArZWGF0Ifl7",
"nips_2022_ArZWGF0Ifl7",
"nips_2022_ArZWGF0Ifl7",
"nips_2022_ArZWGF0Ifl7",
"nips_2022_ArZWGF0Ifl7"
] |
nips_2022_GNt5ntEGjD3 | A Unified Hard-Constraint Framework for Solving Geometrically Complex PDEs | We present a unified hard-constraint framework for solving geometrically complex PDEs with neural networks, where the most commonly used Dirichlet, Neumann, and Robin boundary conditions (BCs) are considered. Specifically, we first introduce the "extra fields'' from the mixed finite element method to reformulate the PDEs so as to equivalently transform the three types of BCs into linear forms. Based on the reformulation, we derive the general solutions of the BCs analytically, which are employed to construct an ansatz that automatically satisfies the BCs. With such a framework, we can train the neural networks without adding extra loss terms and thus efficiently handle geometrically complex PDEs, alleviating the unbalanced competition between the loss terms corresponding to the BCs and PDEs. We theoretically demonstrate that the "extra fields'' can stabilize the training process. Experimental results on real-world geometrically complex PDEs showcase the effectiveness of our method compared with state-of-the-art baselines. | Accept | The paper considers using neural networks to solve PDEs with complex geometry by incorporating hard constraint into the approximation function class. The method is interesting and useful for practical applications of neural-network based PDE solvers. The authors have adequately addressed the concerns by the referee. The meta-reviewer recommends acceptance of the paper. | train | [
"mdQplQ_p2ss",
"Xyau11xDDaH",
"8iZSi-G8BqX",
"HmTHOKMEyq",
"BLAuiAllfCo",
"AU0_ZgyFoVr",
"xcnY82BuWxo",
"CEQBbA6qPuS",
"zZoOD5s9K8F",
"0UgmIFNl8w9",
"NWZmswnnCey",
"NxlO-p394JD",
"S7CeFUsq1kI",
"rBSjICWGkiO",
"7LzwSVkmOiQ",
"sdk11lxafPw"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for the feedback. After the reformulation, we have three classes of equations, the reformulated PDEs, $\\boldsymbol{p}=\\nabla \\boldsymbol{u}$, and the reformulated boundary conditions. The first two classes are equations defined inside (although $\\boldsymbol{p}=\\nabla \\boldsymbol{u}$ is t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"Xyau11xDDaH",
"0UgmIFNl8w9",
"sdk11lxafPw",
"BLAuiAllfCo",
"NWZmswnnCey",
"sdk11lxafPw",
"nips_2022_GNt5ntEGjD3",
"sdk11lxafPw",
"sdk11lxafPw",
"7LzwSVkmOiQ",
"rBSjICWGkiO",
"S7CeFUsq1kI",
"nips_2022_GNt5ntEGjD3",
"nips_2022_GNt5ntEGjD3",
"nips_2022_GNt5ntEGjD3",
"nips_2022_GNt5ntEGjD... |
nips_2022_yZgxl3bgumu | PhysGNN: A Physics--Driven Graph Neural Network Based Model for Predicting Soft Tissue Deformation in Image--Guided Neurosurgery | Correctly capturing intraoperative brain shift in image-guided neurosurgical procedures is a critical task for aligning preoperative data with intraoperative geometry for ensuring accurate surgical navigation. While the finite element method (FEM) is a proven technique to effectively approximate soft tissue deformation through biomechanical formulations, their degree of success boils down to a trade-off between accuracy and speed. To circumvent this problem, the most recent works in this domain have proposed leveraging data-driven models obtained by training various machine learning algorithms---e.g., random forests, artificial neural networks (ANNs)---with the results of finite element analysis (FEA) to speed up tissue deformation approximations by prediction. These methods, however, do not account for the structure of the finite element (FE) mesh during training that provides information on node connectivities as well as the distance between them, which can aid with approximating tissue deformation based on the proximity of force load points with the rest of the mesh nodes. Therefore, this work proposes a novel framework, PhysGNN, a data-driven model that approximates the solution of the FEM by leveraging graph neural networks (GNNs), which are capable of accounting for the mesh structural information and inductive learning over unstructured grids and complex topological structures. Empirically, we demonstrate that the proposed architecture, PhysGNN, promises accurate and fast soft tissue deformation approximations, and is competitive with the state-of-the-art (SOTA) algorithms while promising enhanced computational feasibility, therefore suitable for neurosurgical settings. 
| Accept | The paper proposes a method for modeling soft tissue deformations using a graph neural network trained to approximate the solution of the more classic finite element method. The proposed method is significantly faster than the classic method and with sufficient accuracy, enabling the prospect of using the method to predict tissue deformation in image-guided neurosurgery.
All reviewers appreciated the novelty and utility of the proposed method. The biggest weakness in the study had to do with the limited validation performed. This limitation was clearly addressed by the authors both in the limitations section and in their response to the reviewers. Despite the limitation, the reviewers unanimously recommend acceptance. | val | [
"Dj8veNUzVrx",
"zSKcjN_mU9x",
"7-m6h5wp42",
"1N1rMKCk8kU",
"c4MigVP8XKc",
"uziCewb0N3",
"YvU4OmKgKsO"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank you for your time and positive review regarding the quality of our paper and its contributions. We now answer the questions you have raised in your review below. \n\n**Application of PhysGNN in Real Scenarios**\n\nOur framework captures tissue deformation by taking the amount of force appli... | [
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"YvU4OmKgKsO",
"uziCewb0N3",
"c4MigVP8XKc",
"nips_2022_yZgxl3bgumu",
"nips_2022_yZgxl3bgumu",
"nips_2022_yZgxl3bgumu",
"nips_2022_yZgxl3bgumu"
] |
nips_2022_5xiLuNutzJG | Rethinking Knowledge Graph Evaluation Under the Open-World Assumption | Most knowledge graphs (KGs) are incomplete, which motivates one important research topic on automatically complementing knowledge graphs. However, evaluation of knowledge graph completion (KGC) models often ignores the incompleteness---facts in the test set are ranked against all unknown triplets which may contain a large number of missing facts not included in the KG yet. Treating all unknown triplets as false is called the closed-world assumption. This closed-world assumption might negatively affect the fairness and consistency of the evaluation metrics. In this paper, we study KGC evaluation under a more realistic setting, namely the open-world assumption, where unknown triplets are considered to include many missing facts not included in the training or test sets. For the currently most used metrics such as mean reciprocal rank (MRR) and Hits@K, we point out that their behavior may be unexpected under the open-world assumption. Specifically, with not many missing facts, their numbers show a logarithmic trend with respect to the true strength of the model, and thus, the metric increase could be insignificant in terms of reflecting the true model improvement. Further, considering the variance, we show that the degradation in the reported numbers may result in incorrect comparisons between different models, where stronger models may have lower metric numbers. We validate the phenomenon both theoretically and experimentally. Finally, we suggest possible causes and solutions for this problem. Our code and data are available at https://github.com/GraphPKU/Open-World-KG . | Accept | This paper studies the evaluation of knowledge graph completion under an open-world assumption, where the training or test sets may include many unknown missing facts. 
It shows that the currently most-used metrics may not sufficiently reflect the true model performance and suggests alternative metrics to address this issue. The reviewers found the problem this paper studies is fundamental, and well investigated both theoretically and empirically, which should benefit the research community in this field, although reviewers still have concerns about the assumption in this study and the lack of experiments on more realistic KGs. Overall the merits of the paper outweigh the drawbacks and an acceptance is recommended. | train | [
"BDNvOUrGbDE",
"V30j4Gtk1t-",
"dF4acy8KO_t",
"06tvboQsaeJ",
"caMAGHvj11v",
"mz3ry6GImUy",
"KX8RNgRammq",
"ION5AlubQ1f",
"jSyjaYsntsp",
"YY1h1kSTvW"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank reviewer LAYz again for the inspiring comments to help us improve the paper.\n\nIn response to the comments, we address the main concerns as follows:\n\n1. Convincingness of the proposed evaluation. Our main aim is to highlight the degradation and inconsistency problems and then point out their reason, t... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"ION5AlubQ1f",
"YY1h1kSTvW",
"jSyjaYsntsp",
"caMAGHvj11v",
"ION5AlubQ1f",
"KX8RNgRammq",
"nips_2022_5xiLuNutzJG",
"nips_2022_5xiLuNutzJG",
"nips_2022_5xiLuNutzJG",
"nips_2022_5xiLuNutzJG"
] |
nips_2022_fUeOyt-2EOp | Thor: Wielding Hammers to Integrate Language Models and Automated Theorem Provers | In theorem proving, the task of selecting useful premises from a large library to unlock the proof of a given conjecture is crucially important. This presents a challenge for all theorem provers, especially the ones based on language models, due to their relative inability to reason over huge volumes of premises in text form. This paper introduces Thor, a framework integrating language models and automated theorem provers to overcome this difficulty. In Thor, a class of methods called hammers that leverage the power of automated theorem provers are used for premise selection, while all other tasks are designated to language models. Thor increases a language model's success rate on the PISA dataset from $39\%$ to $57\%$, while solving $8.2\%$ of problems neither language models nor automated theorem provers are able to solve on their own. Furthermore, with a significantly smaller computational budget, Thor can achieve a success rate on the MiniF2F dataset that is on par with the best existing methods. Thor can be instantiated for the majority of popular interactive theorem provers via a straightforward protocol we provide. | Accept | The paper presents a method to integrate language models and hammers (Automated Theorem Provers) for Interactive Theorem Proving. The authors train the language model to recognize an opportunity to invoke a hammer by transforming the training data: they check whether the hammer can be applied at each proof state.
The approach is novel, while being simple. The reviewers are generally happy with the writing of the work. There were some concerns (such as obvious baselines, pointed out by uriH) that seem to be mitigated.
vtgR pointed out the slowness of preprocessing, which I think is an issue that should be explicitly acknowledged in the revision and addressed in future work.
Given the overall positive feedback of the reviewers, I am recommending an "accept" for this work. | train | [
"sQklia_4EjY",
"uhbenBggSfg",
"HTKnAkK1myH",
"oRGOfU3fStT",
"BZ8OjohSXmv",
"8jjq4k1L7aF",
"gX6hxyHJJIq",
"kvaXqXqTBJr",
"VW1FyWr4loL"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Does the rebuttal we provided address your concern regarding the baseline and do the statistics convince you that learning to use hammer is essential? If so, please consider increasing your score. If not, can you please provide further feedback on weaknesses so we can improve? Many thanks!",
" Thank you. Prepro... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"oRGOfU3fStT",
"8jjq4k1L7aF",
"nips_2022_fUeOyt-2EOp",
"kvaXqXqTBJr",
"gX6hxyHJJIq",
"VW1FyWr4loL",
"nips_2022_fUeOyt-2EOp",
"nips_2022_fUeOyt-2EOp",
"nips_2022_fUeOyt-2EOp"
] |
nips_2022_klElp42K9U0 | Data-Efficient Pipeline for Offline Reinforcement Learning with Limited Data | Offline reinforcement learning (RL) can be used to improve future performance by leveraging historical data. There exist many different algorithms for offline RL, and it is well recognized that these algorithms, and their hyperparameter settings, can lead to decision policies with substantially differing performance. This prompts the need for pipelines that allow practitioners to systematically perform algorithm-hyperparameter selection for their setting. Critically, in most real-world settings, this pipeline must only involve the use of historical data. Inspired by statistical model selection methods for supervised learning, we introduce a task- and method-agnostic pipeline for automatically training, comparing, selecting, and deploying the best policy when the provided dataset is limited in size. In particular, our work highlights the importance of performing multiple data splits to produce more reliable algorithm-hyperparameter selection: while this is a common approach in supervised learning, to our knowledge, this has not been discussed in detail in the offline RL setting, and we show it can have substantial impacts when the dataset is small. Compared to alternate approaches, our proposed pipeline outputs higher-performing deployed policies from a broad range of offline policy learning algorithms and across various simulation domains in healthcare, education, and robotics. This work contributes toward the development of a general-purpose meta-algorithm for automatic algorithm-hyperparameter selection for offline RL. | Accept | ## Summary
In offline RL, it is typically not possible to run policies in the environment to evaluate them for model selection; as a result, one needs to rely on offline approaches such as OPE methods for model selection. This paper points to the importance of using multiple data partitions over a single partition for offline model and hyperparameter selection. They propose a simple but effective method called SSR for this purpose and compare non-splitting and different ways of cross-validation splitting. The paper has comprehensive experiments on D4RL, Robomimic, Tutorbot and Sepsis domains. They show very promising results on those tasks over other approaches for offline policy selection. | ## Decision
## Decision
Overall, this paper is very well written and studies a very important problem for offline RL. The paper was already in good shape when submitted; after addressing a few small points made by the reviewers, I think it will be in much better shape. I think the offline RL and NeurIPS community would benefit from the findings in this paper, and it deserves to be accepted.
I would still recommend that the authors clarify in the paper the points that confused the reviewers, as they did in the rebuttal, and include some of the results presented in the rebuttal in the main paper.
| train | [
"YakbZxwL5u",
"aEDlmObY0by",
"Mzc4bdf2Asd",
"wAvxzFl9qir",
"wrU68UcEBnB",
"MgorY77uCAc",
"ixAKZZWOtP6",
"L_E6iNkq_SZ",
"C5rLpA1EZNR",
"2PiA0w-NqRT",
"vhfBGc1A9Wq",
"trW0jHGxvk",
"CH8KoNPpTd-P",
"99G1oiFa1d",
"JzGOhpv1MtC",
"FwLLVh6ke3I",
"FfjDBvuIJa",
"EwYfNI7wof-",
"znYUQvlB1VQ"... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" Thanks for the author response and the additional effort on running the experiments of different OPEs. Though i am still a little bit concerned about the computationally complexity (which might be an interesting future work), I am willing to adjust my score. ",
" I thank the authors for the detailed response an... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
3,
2,
4
] | [
"MgorY77uCAc",
"wrU68UcEBnB",
"wAvxzFl9qir",
"99G1oiFa1d",
"2PiA0w-NqRT",
"vhfBGc1A9Wq",
"C5rLpA1EZNR",
"JzGOhpv1MtC",
"FBU-LtmQLn2",
"znYUQvlB1VQ",
"trW0jHGxvk",
"CH8KoNPpTd-P",
"EwYfNI7wof-",
"FfjDBvuIJa",
"FwLLVh6ke3I",
"nips_2022_klElp42K9U0",
"nips_2022_klElp42K9U0",
"nips_202... |
nips_2022_DVfZKXSFW5m | Diversity vs. Recognizability: Human-like generalization in one-shot generative models | Robust generalization to new concepts has long remained a distinctive feature of human intelligence. However, recent progress in deep generative models has now led to neural architectures capable of synthesizing novel instances of unknown visual concepts from a single training example. Yet, a more precise comparison between these models and humans is not possible because existing performance metrics for generative models (i.e., FID, IS, likelihood) are not appropriate for the one-shot generation scenario. Here, we propose a new framework to evaluate one-shot generative models along two axes: sample recognizability vs. diversity (i.e., intra-class variability). Using this framework, we perform a systematic evaluation of representative one-shot generative models on the Omniglot handwritten dataset. We first show that GAN-like and VAE-like models fall on opposite ends of the diversity-recognizability space. Extensive analyses of the effect of key model parameters further revealed that spatial attention and context integration have a linear contribution to the diversity-recognizability trade-off. In contrast, disentanglement transports the model along a parabolic curve that could be used to maximize recognizability. Using the diversity-recognizability framework, we were able to identify models and parameters that closely approximate human data. | Accept | This paper studies the problem of few-shot generation on the Omniglot dataset, contrasting few-shot generative models against humans. They introduce "diversity" and "recognizability" metrics and perform an empirical analysis of how various models are situated in the diversity-recognizability plane relative to humans.
Overall this is a well-written paper, with a clear story and experiments. It provides interesting insights into how various generative model architectures relate to humans in a particular task. | train | [
"cEnpMsrDq1",
"76_2IUB4hdP",
"KaJyxNu5rL",
"lJ0agG9Zntc",
"UMAgOGdF77",
"-5KhWBo7Wh8",
"Cf7fVuvc0dAQ",
"jl_krWib8_F",
"UX-dPDshTs",
"O56lWxBj44Y",
"QDBxQzdZLif",
"Yj2AcIShZ45",
"6NxdWM0ozNa",
"AHh-31YTlqS",
"eCALGwiPgR"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer D3RB for the response as well as the increase of the score.\n\nWe agree with the reviewer that the additional experiment we have run during the rebuttal is important to better motivate the article ! Thus, If the paper is accepted, we plan to use the additional 1-page to detail and include th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"KaJyxNu5rL",
"Yj2AcIShZ45",
"jl_krWib8_F",
"UMAgOGdF77",
"Cf7fVuvc0dAQ",
"nips_2022_DVfZKXSFW5m",
"eCALGwiPgR",
"UX-dPDshTs",
"O56lWxBj44Y",
"QDBxQzdZLif",
"AHh-31YTlqS",
"6NxdWM0ozNa",
"nips_2022_DVfZKXSFW5m",
"nips_2022_DVfZKXSFW5m",
"nips_2022_DVfZKXSFW5m"
] |
nips_2022_dLL4KXzKUpS | Where2comm: Communication-Efficient Collaborative Perception via Spatial Confidence Maps | Multi-agent collaborative perception could significantly upgrade the perception performance by enabling agents to share complementary information with each other through communication. It inevitably results in a fundamental trade-off between perception performance and communication bandwidth. To tackle this bottleneck issue, we propose a spatial confidence map, which reflects the spatial heterogeneity of perceptual information. It empowers agents to only share spatially sparse, yet perceptually critical information, contributing to where to communicate. Based on this novel spatial confidence map, we propose Where2comm, a communication-efficient collaborative perception framework. Where2comm has two distinct advantages: i) it considers pragmatic compression and uses less communication to achieve higher perception performance by focusing on perceptually critical areas; and ii) it can handle varying communication bandwidth by dynamically adjusting spatial areas involved in communication. To evaluate Where2comm, we consider 3D object detection in both real-world and simulation scenarios with two modalities (camera/LiDAR) and two agent types (cars/drones) on four datasets: OPV2V, V2X-Sim, DAIR-V2X, and our original CoPerception-UAVs. Where2comm consistently outperforms previous methods; for example, it achieves more than $100,000 \times$ lower communication volume and still outperforms DiscoNet and V2X-ViT on OPV2V. Our code is available at~\url{https://github.com/MediaBrain-SJTU/where2comm}. | Accept | This paper proposes a multi-agent collaborative perception algorithm where agents exchange perceived sensor data (e.g., LiDAR) and share their observations with other agents sparsely by maintaining spatial confidence maps that determine the communication connectivity matrix.
Communication happens over multiple rounds and incoming messages are fused using multi-head attention. The method is evaluated on synthetic drone and car data from popular simulators, where it achieves superior results with significantly lower communication volume.
Reviewers praised the experiments and the performance of the model (cKVM, VaH1, Jgi5), the large reduction in communication overhead (cKVM, YUZ9), and the novelty of the confidence-map-based communication framework (VaH1, YUZ9, Jgi5).
Reviewer cKVM noted that the model was evaluated only on synthetic data and that a section on data synchronisation and availability was missing; in the rebuttal, the authors argued about the limited availability of simulated and real-world data, and explained how depth was extracted from RGB observations. Reviewer VaH1 pointed to existing literature on local grid-map pooling of probabilistic occupancy maps, questioned performance in the case of noisy observations (the authors replied with additional results in those conditions), and suggested ablation experiments. Reviewer YUZ9 had some questions, answered in the rebuttal. Reviewer Jgi5 had several questions on the complexity of the model, the fairness of some comparisons and the reproducibility of the method - points addressed in the rebuttal.
The reviewers agree on the score (6) and on the fact that the paper should be accepted, and the AC concurs.
Sincerely,
Area Chair
| train | [
"82k-Nwp-Qfn",
"gCxUGJFXqgm",
"0IAb1NhT7c-",
"bzk2WBh1Nm6",
"SXSWuu9fBn",
"jXBx6tZuKe",
"3M1E1UEa18v",
"KoVtX7FbP4t",
"iqCMxk4cIl",
"zh1LbxWHXMG",
"FFILY040pi",
"A14V_MSjirK",
"cv-UAzykZI-",
"R_kq5JkjeVy",
"9Cj2P8o3Dvi",
"JQNFR_Txx0-",
"XqIQyx9TBN5",
"2I3CEPQHGrd",
"-3kv6rFNue",
... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks very much for reading our responses! In the final version, we will benchmark our method and previous SOTAs on this just-released real dataset, DAIR-V2X, incorporate more details, discussions and highlight the realistic limitations, if the submission is accepted.\n\nParticularly, we appreciate your interest... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
4
] | [
"0IAb1NhT7c-",
"SXSWuu9fBn",
"A14V_MSjirK",
"jXBx6tZuKe",
"jXBx6tZuKe",
"-3kv6rFNue",
"KoVtX7FbP4t",
"XqIQyx9TBN5",
"UaKsG38hkZ",
"j36VTWhSf3u",
"UaKsG38hkZ",
"UaKsG38hkZ",
"R4gA9S1R7m",
"R4gA9S1R7m",
"R4gA9S1R7m",
"R4gA9S1R7m",
"R4gA9S1R7m",
"xAF8-sgM_I1",
"j36VTWhSf3u",
"nips... |
nips_2022_vRrFVHxFiXJ | Predicting Cellular Responses to Novel Drug Perturbations at a Single-Cell Resolution | Single-cell transcriptomics enabled the study of cellular heterogeneity in response to perturbations at the resolution of individual cells. However, scaling high-throughput screens (HTSs) to measure cellular responses for many drugs remains a challenge due to technical limitations and, more importantly, the cost of such multiplexed experiments. Thus, transferring information from routinely performed bulk RNA HTS is required to enrich single-cell data meaningfully.
We introduce chemCPA, a new encoder-decoder architecture to study the perturbational effects of unseen drugs. We combine the model with an architecture surgery for transfer learning and demonstrate how training on existing bulk RNA HTS datasets can improve generalisation performance. Better generalisation reduces the need for extensive and costly screens at single-cell resolution.
We envision that our proposed method will facilitate more efficient experiment designs through its ability to generate in-silico hypotheses, ultimately accelerating drug discovery. | Accept | All four reviewers liked aspects of the paper, with one still on the fence. All reviewers appreciated the authors' feedback to their comments, both in the discussion and in the extensive updates of the paper.
Accept is recommended. | train | [
"NyS_wdeJHn",
"33qAbe4sJFA",
"XmBcSQhnzam",
"bfYkmz16QQ",
"kjqw0ra4-n",
"_VPmLDNoM0B",
"NUICf0fiYqN",
"FEfx2oP4reA",
"_PZwMvDQ3_kh",
"K7RqM9wxflS",
"ntw3EiWFh5P",
"NjZxT0ExHW6",
"m0AgK82sHz",
"xecSyvbBZ6e"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The model evaluations are much clearer now, and the manuscript has been improved.\n\nOne remaining concern is your decision to test on only nine held out molecules. (I assume you're training on all 150+ ?) Given that an important novel aspect of your work is the molecule encoder, I'd recommend a much more thoroug... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"XmBcSQhnzam",
"kjqw0ra4-n",
"bfYkmz16QQ",
"ntw3EiWFh5P",
"NUICf0fiYqN",
"K7RqM9wxflS",
"_PZwMvDQ3_kh",
"nips_2022_vRrFVHxFiXJ",
"xecSyvbBZ6e",
"NjZxT0ExHW6",
"m0AgK82sHz",
"nips_2022_vRrFVHxFiXJ",
"nips_2022_vRrFVHxFiXJ",
"nips_2022_vRrFVHxFiXJ"
] |
nips_2022_ttQ_3CiZqd3 | The least-control principle for local learning at equilibrium | Equilibrium systems are a powerful way to express neural computations. As special cases, they include models of great current interest in both neuroscience and machine learning, such as deep neural networks, equilibrium recurrent neural networks, deep equilibrium models, or meta-learning. Here, we present a new principle for learning such systems with a temporally- and spatially-local rule. Our principle casts learning as a \emph{least-control} problem, where we first introduce an optimal controller to lead the system towards a solution state, and then define learning as reducing the amount of control needed to reach such a state. We show that incorporating learning signals within a dynamics as an optimal control enables transmitting activity-dependent credit assignment information, avoids storing intermediate states in memory, and does not rely on infinitesimal learning signals. In practice, our principle leads to strong performance matching that of leading gradient-based learning methods when applied to an array of problems involving recurrent neural networks and meta-learning. Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems. | Accept | As summarized by reviewer 5Kjg, this work proposes a theoretical framework for optimizing equilibrium neural networks by reformulating the original constrained optimization problem into a least-control problem. The least-control problem is further solved with a two-step iterative procedure that first drives the network dynamics with a control signal towards minimizing fixed points satisfying the initial problem constraints, then takes a gradient step to minimize the norm of the control terminal cost.
Motivations for such a principle are manifold: First, its solutions are shown to coincide with those of the initial optimization problem under feasible conditions on the control that leave flexibility for its explicit construction. Second, credit assignment and activation dynamics are originally combined in a single dynamic that results in a spatially and temporally local explicit weight-update rule. Third, this formulation can also be linked to models of free energy minimization, offering an original perspective on the learning objective operated by such models as a relaxation of the proposed least-control objective.
Several experiments were conducted to showcase the learnability of such a system for supervised image classification, implicit neural representation and meta-learning. The technique is shown to have performance comparable to recurrent backpropagation (RBP), the current leading technique for training equilibrium models, and can encompass meta-learning with good results compared to gradient-based methods.
Almost all reviewers, including myself, consider this work to have potentially high impact, linking control theory with neural network learning, backed by convincing experiments. They introduce the 'least-control principle' paradigm for training neural networks with local learning rules, and such learning algorithms, which are biologically inspired to be modular, are intimately related to predictive coding and energy-based models. I believe this work will definitely encourage much discussion in the research community and open up many avenues of further investigation, and for that, I strongly recommend acceptance (as a spotlight or oral presentation, or equivalent format, if applicable for this year).
| train | [
"wH6La6HsIv7",
"5oZL6q2MuA",
"rAknVine-X",
"6AIWBXvgMHV",
"wOT9Vnl8X-w",
"_FgfKckk1I",
"Ac5U3Dir64G",
"-Syw8dsx7Zn",
"5HNvOeNMnIn",
"zWxwpvTO9XV",
"BEr0vEo45i",
"6z6G4m7W0W6",
"fu_vYEU8SoO",
"vxzpzYHD-7sY",
"giAi0AmVEa8",
"IHlOjb-UHOK",
"JjED5qaehu",
"j8LXVkYoiYm",
"QxF_eJCFhe0",... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"... | [
" Dear authors,\n\nThank you a lot for the detailed answer and sorry for the misunderstanding. I have updated my score to strong accept.\nI wish I have written this paper myself!\n\n",
" Dear reviewer,\n\nMany thanks for participating in the discussion and for your detailed and constructive comments that keep hel... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
8,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
5,
5,
5
] | [
"5oZL6q2MuA",
"-Syw8dsx7Zn",
"zWxwpvTO9XV",
"wOT9Vnl8X-w",
"BEr0vEo45i",
"6z6G4m7W0W6",
"p6nn2-MmHkV",
"VQUwvkMTFwW",
"QxF_eJCFhe0",
"5IGgd3lJOJm",
"XLUDDQrwEqm",
"pF8pJtJNVZA",
"giAi0AmVEa8",
"rnPywoUqrQL",
"IHlOjb-UHOK",
"JjED5qaehu",
"j8LXVkYoiYm",
"9-LrnNVyjN",
"5IGgd3lJOJm",... |
nips_2022_MVDzIreiRqW | Evolution of Neural Tangent Kernels under Benign and Adversarial Training | Two key challenges facing modern deep learning are mitigating deep networks' vulnerability to adversarial attacks, and understanding deep learning's generalization capabilities. Towards the first issue, many defense strategies have been developed, with the most common being Adversarial Training (AT). Towards the second challenge, one of the dominant theories that has emerged is the Neural Tangent Kernel (NTK) -- a characterization of neural network behavior in the infinite-width limit. In this limit, the kernel is frozen and the underlying feature map is fixed. At finite widths, however, there is evidence that feature learning happens at the earlier stages of the training (kernel learning) before a second phase where the kernel remains fixed (lazy training). While prior work has aimed at studying adversarial vulnerability through the lens of the frozen infinite-width NTK, there is no work which studies adversarial robustness of the NTK during training. In this work, we perform an empirical study of the evolution of the NTK under standard and adversarial training, aiming to disambiguate the effect of adversarial training on kernel learning and lazy training. We find that under adversarial training, the NTK rapidly converges to a different kernel (and feature map) than standard training. This new kernel provides adversarial robustness, even when non-robust training is performed on top of it. Furthermore, we find that adversarial training on top of a fixed kernel can yield a classifier with $76.1\%$ robust accuracy under PGD attacks with $\varepsilon = 4/255$ on CIFAR-10. | Accept | The paper studies the evolution of the NTK under adversarial training. The empirical studies show that the NTK of adversarial training converges to a different kernel compared to standard training. All the reviewers agree that the empirical study is interesting and the paper should be accepted.
Missing citation: The notion of the NTK the authors consider is called the finite-width NTK, which was proposed in the paper "Convergence analysis of deep learning via over-parameterization". | train | [
"68-2LC1IaMU",
"mt0lhfOgMBt",
"WFzOKfbfEeU",
"AroWRu09yMe",
"kEVk6_SEuBc",
"uhD5PTEa7ay",
"2Qw33gAdDzk",
"YQo3REmK9X",
"d6LWb4QJ6M",
"B-KAWFZcR18",
"3IFuOimxigZ",
"zPBcPKH4S5_",
"R4uhmkRyNHE"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to take this last opportunity to kindly ask Reviewer 36HK whether their concerns are addressed with our response and if they have any additional concerns that they would like us to provide clarifications on?\n\nIf their concerns are addressed, we hope that they get to agree with all other reviewers ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
8,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
5,
4
] | [
"YQo3REmK9X",
"kEVk6_SEuBc",
"AroWRu09yMe",
"2Qw33gAdDzk",
"uhD5PTEa7ay",
"R4uhmkRyNHE",
"zPBcPKH4S5_",
"3IFuOimxigZ",
"B-KAWFZcR18",
"nips_2022_MVDzIreiRqW",
"nips_2022_MVDzIreiRqW",
"nips_2022_MVDzIreiRqW",
"nips_2022_MVDzIreiRqW"
] |
nips_2022_DwHIcEyias | Generalizing Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses | \emph{Classification with rejection} (CwR) refrains from making a prediction to avoid critical misclassification when encountering test samples that are difficult to classify. Though previous methods for CwR have been provided with theoretical guarantees, they are only compatible with certain loss functions, making them not flexible enough when the loss needs to be changed with the dataset in practice. In this paper, we derive a novel formulation for CwR that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees. First, we show that $K$-class CwR is equivalent to a $(K\!+\!1)$-class classification problem on the original data distribution with an augmented class, and propose an empirical risk minimization formulation to solve this problem with an estimation error bound. Then, we find necessary and sufficient conditions for the learning \emph{consistency} of the surrogates constructed on our proposed formulation equipped with any classification-calibrated multi-class losses, where consistency means the surrogate risk minimization implies the target risk minimization for CwR. Finally, experiments on benchmark datasets validate the effectiveness of our proposed method. | Accept | The consensus view was that this was solid and practically relevant theoretical research.
| train | [
"1sIY3pi5yJE",
"hxkOqvllRG",
"LcaL5MCh9P3",
"T4sM2SnHVq",
"7M8TvKfrL98",
"zTL8B2LcKq",
"hcuCLo_Zhz",
"Qufx2_dkYVO",
"INDklzP5ts7",
"tLlPK6PMhjE",
"Z70Y9l0-JXN",
"-VclgHYNCSw",
"-bCRrW0WMrB",
"wDnkXcCgl71",
"Or_OejzEhI4",
"LzogwMUATBd",
"awWWQfgXnxf",
"8-UV9-jps-Q",
"RZ3v0pnJw14"
... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_rev... | [
" We have corrected this point and updated it in the supplementary material. We really appreciate your efforts, and thank you again for making this paper more accurate! ",
" Thank you for your response. I think it addresses my concerns. I would like to keep my rating as 'accept'.",
" Thanks for the feedback. I'... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
4
] | [
"T4sM2SnHVq",
"-bCRrW0WMrB",
"wDnkXcCgl71",
"Qufx2_dkYVO",
"8-UV9-jps-Q",
"awWWQfgXnxf",
"LzogwMUATBd",
"INDklzP5ts7",
"Z70Y9l0-JXN",
"nips_2022_DwHIcEyias",
"-VclgHYNCSw",
"RZ3v0pnJw14",
"8-UV9-jps-Q",
"awWWQfgXnxf",
"LzogwMUATBd",
"nips_2022_DwHIcEyias",
"nips_2022_DwHIcEyias",
"... |
nips_2022_tglniD_fn9 | Addressing Leakage in Concept Bottleneck Models | Concept bottleneck models (CBMs) enhance the interpretability of their predictions by first predicting high-level concepts given features, and subsequently predicting outcomes on the basis of these concepts. Recently, it was demonstrated that training the label predictor directly on the probabilities produced by the concept predictor as opposed to the ground-truth concepts, improves label predictions. However, this results in corruptions in the concept predictions that impact the concept accuracy as well as our ability to intervene on the concepts -- a key proposed benefit of CBMs. In this work, we investigate and address two issues with CBMs that cause this disparity in performance: having an insufficient concept set and using inexpressive concept predictor. With our modifications, CBMs become competitive in terms of predictive performance, with models that otherwise leak additional information in the concept probabilities, while having dramatically increased concept accuracy and intervention accuracy. | Accept | The paper contains an interesting discussion on Concept Bottleneck Models pointing out their main flaws. The proposed remedies are plausible and documented empirically. The reviewers have converged to the consensus and all have scored the paper above the bar.
Minor comments:
- The caption of Figure 1 seems to be wrong. The middle part is swapped with the right part.
- It might be worth saying that concept learning is a multi-label classification problem. The last method resembles classifier chains (particularly probabilistic classifier chains). | train | [
"PRBSZV6L4Wk",
"5xYQH5M6IP",
"c9h7ri6tHgj",
"2QN-dtkk_zZ",
"0oydK5NHhrA",
"I6HFo05AXFY",
"jxsB2-YdSoK",
"TxzzSTV8tT",
"d28SoFnL8s7",
"fxiuYwy2pS",
"hp6Wu7A_IGF"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the response, and we are glad that our reply addressed your concerns.\n\nPlease remember to update your score before the metareviewer discussion begins by editing your original review.",
" Thanks for clarifying these aspects, I encourage you to add a brief discussion to the paper, especially for t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"c9h7ri6tHgj",
"0oydK5NHhrA",
"I6HFo05AXFY",
"nips_2022_tglniD_fn9",
"hp6Wu7A_IGF",
"fxiuYwy2pS",
"TxzzSTV8tT",
"d28SoFnL8s7",
"nips_2022_tglniD_fn9",
"nips_2022_tglniD_fn9",
"nips_2022_tglniD_fn9"
] |
nips_2022_jQgsZDspz5h | Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods | Self-Supervised Learning (SSL) surmises that inputs and pairwise positive relationships are enough to learn meaningful representations. Although SSL has recently reached a milestone: outperforming supervised methods in many modalities\dots the theoretical foundations are limited, method-specific, and fail to provide principled design guidelines to practitioners. In this paper, we propose a unifying framework under the helm of spectral manifold learning. Through the course of this study, we will demonstrate that VICReg, SimCLR, BarlowTwins et al. correspond to eponymous spectral methods such as Laplacian Eigenmaps, ISOMAP et al.
From this unified viewpoint, we obtain (i) the closed-form optimal representation, (ii) the closed-form optimal network parameters in the linear regime, (iii) the impact of the pairwise relations used during training on each of those quantities and on downstream task performances, and most importantly, (iv) the first theoretical bridge between contrastive and non-contrastive methods to global and local spectral methods respectively, hinting at the benefits and limitations of each. For example, if the pairwise relation is aligned with the downstream task, all SSL methods produce optimal representations for that downstream task. | Accept | This paper focuses on providing some theoretical intuition/understandings of popular self-supervised learning (SSL) methods. The authors develop closed-form optimal representations for various methods as a function of the training data and the sample-relation matrix. The authors also provide further intuition by developing simplified versions of these expressions in linear settings which they use to show an equivalence of sorts between SSL and various spectral methods and how it affects downstream tasks. Overall the reviewers were positive and thought the paper had nice insights. They did raise some concerns about the quality of exposition and various detailed technical issues. Most of the technical issues seem to have been addressed by the authors in their response. I concur with the reviewers. The paper has nice insights and therefore I recommend acceptance. I do however recommend that the authors further polish the paper for the camera ready version by addressing the issues raised by the reviewers especially about the exposition. | train | [
"7r-EgbITA0W",
"Y9PB8tDgwAI",
"YRX1AOkz7O",
"COa1Hj_LuvT",
"KhhSAzjw7Gm",
"rtoQerDo1b",
"TBOo5fxbMr",
"7qdihsXYR6",
"fL90BJPdv0G",
"UuIfl0-KSFt",
"JJKMsjBr2M",
"UxGBer9VLPo",
"TCsc3767vFU",
"Qy2g9aD5Xe",
"HRzPZVVNvnE",
"mBnB4yV901C"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank Reviewer VtRd for going through our revision and comments. We are delighted to read that we addressed the reviewer's questions.\n\nWe remain happy to answer any further questions that might occur to the reviewer before the end of the discussion period.",
" We are grateful to Reviewer W5rv for not only ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"KhhSAzjw7Gm",
"rtoQerDo1b",
"COa1Hj_LuvT",
"7qdihsXYR6",
"UuIfl0-KSFt",
"JJKMsjBr2M",
"HRzPZVVNvnE",
"HRzPZVVNvnE",
"mBnB4yV901C",
"Qy2g9aD5Xe",
"TCsc3767vFU",
"nips_2022_jQgsZDspz5h",
"nips_2022_jQgsZDspz5h",
"nips_2022_jQgsZDspz5h",
"nips_2022_jQgsZDspz5h",
"nips_2022_jQgsZDspz5h"
] |
nips_2022_6UtOXn1LwNE | Models of human preference for learning reward functions | The utility of reinforcement learning is limited by the alignment of reward functions with the interests of human stakeholders. One promising method for alignment is to learn the reward function from human-generated preferences between pairs of trajectory segments. These human preferences are typically assumed to be informed solely by partial return, the sum of rewards along each segment. We find this assumption to be flawed and propose modeling preferences instead as arising from a different statistic: each segment's regret, a measure of a segment's deviation from optimal decision-making. Given infinitely many preferences generated according to regret, we prove that we can identify a reward function equivalent to the reward function that generated those preferences. We also prove that the previous partial return model lacks this identifiability property without preference noise that reveals rewards' relative proportions, and we empirically show that our proposed regret preference model outperforms it with finite training data in otherwise the same setting. Additionally, our proposed regret preference model better predicts real human preferences and also learns reward functions from these preferences that lead to policies that are better human-aligned. Overall, this work establishes that the choice of preference model is impactful, and our proposed regret preference model provides an improvement upon a core assumption of recent research. | Reject | The submitted paper was reviewed by 4 knowledgeable reviewers, and the reviewers and authors engaged in intense discussions. 
The authors clarified many details in these discussions but could not convince the reviewers in all regards (there are still open concerns regarding the proofs, and the updated proofs came in rather late so that there was insufficient time for the reviewers to further interact; there are concerns regarding the experiments, although I discounted most of those related to scalability as I agree with the authors in that regard to some extent; etc.). Moreover, looking at the discussions and the authors' responses, the paper would benefit from making several points more clear/improving their presentation, likely by including parts which came up in the discussions in the paper. Considering all this, I think this paper should go through another round of reviews before it is accepted, and I am recommending rejection of the paper. Please note that it was not easy to come to this decision - there are some important insights and experiments in the paper which should be made available to the community asap. Thus I would honestly encourage the authors to improve their paper considering the reviewers' comments and take-aways from the discussion and submit a revised version of the paper at one of the upcoming conferences. I am already looking forward to seeing an improved version of the paper being published. | train | [
"Q3R-cxcQoke",
"iIZ-k4os-kp",
"02hkRIywWB",
"sZTn5_QoKRC",
"BJ2Tzrn6jJc",
"SxGr_BLFMl1",
"3KddY4BdXaE",
"EczQRsbt7Tg",
"HdiHOg3ZLK",
"2N6eDn4tzA",
"5Eg4LFVc94B",
"eXpYT0v_GV",
"KS33hv1S1j",
"p-z5ilzAs5",
"E8AG-IJiMFb",
"mBAuO0Nxf0",
"q69d1pOaQP",
"XFxlZNrtnhn",
"2uZBzxzAyHj",
"... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" Regarding that we only use simple grid worlds (though up to 100 random versions of them), we can only make subjective arguments about the necessity of scalability in addition to our various contributions, which we understand may not persuade you, especially if your mind is already made. \n\nIn our responses here,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
3,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
4
] | [
"BJ2Tzrn6jJc",
"BJ2Tzrn6jJc",
"3KddY4BdXaE",
"SxGr_BLFMl1",
"2N6eDn4tzA",
"EczQRsbt7Tg",
"eXpYT0v_GV",
"HdiHOg3ZLK",
"KS33hv1S1j",
"5Eg4LFVc94B",
"DtNlCuPRJy",
"p-z5ilzAs5",
"2uZBzxzAyHj",
"BpAFeMpOkZI",
"XFxlZNrtnhn",
"q69d1pOaQP",
"nips_2022_6UtOXn1LwNE",
"nips_2022_6UtOXn1LwNE",... |
nips_2022_zGPeowwxWb | Deep Equilibrium Approaches to Diffusion Models | Diffusion-based generative models are extremely effective in generating high-quality images, with generated samples often surpassing the quality of those produced by other models under several metrics. One distinguishing feature of these models, however, is that they typically require long sampling chains in order to produce high-fidelity images. This presents a challenge not only from the lenses of sampling time, but also from the inherent difficulty in backpropagating through these chains in order to accomplish tasks such as model inversion, i.e., approximately finding latent states that generate known images. In this paper, we look at diffusion models through a different perspective, that of a (deep) equilibrium (DEQ) fixed point model. Specifically, we extend the recent denoising diffusion implicit model (DDIM), and model the entire sampling chain as a joint, multi-variate fixed point system. This setup provides an elegant unification of diffusion and equilibrium models, and shows benefits in 1) single-shot image sampling, as it replaces the fully-serial typical sampling process with a parallel one; and 2) model inversion, where we can leverage fast gradients in the DEQ setting to much more quickly find the noise that generates a given image. The approach is also orthogonal and thus complementary to other methods used to reduce the sampling time, or improve model inversion. We demonstrate our method's strong performance across several datasets, including CIFAR10, CelebA, and LSUN Bedroom and Churches. | Accept | This paper develops a variant (deep equilibrium model (DEQ)) of an existing method (DDIM) with the goal of efficient sampling and model inversion while still maintaining the performance. In particular, it treats the sampling step sequence of DDIM (denoising diffusion implicit model, where the sampling step is deterministic and no longer a Markov chain) as a fixed-point system, so that a fixed-point solver (e.g., Anderson acceleration) is able to solve the entire sampling chain jointly.
The committee all agree that the methodology proposed in this work, although it is built on prior work, is novel. The presentation of the paper is clear and the reported results are promising. The committee appreciates the authors' effort in both revising the manuscript and providing a conclusive response. Therefore, we recommend acceptance of this manuscript.
| train | [
"3pJHbP67fmE",
"I6_ZDMcSRUw",
"hkOXlk6rDIQ",
"MTEevYRLsS-",
"IJMD7Kzj0CN",
"QFCzgSv1K-OC",
"afvehBeZyPB",
"M1UMBsMP5S",
"ZLAbbvBISae",
"KybxDiFBnB",
"LDYC4ofhu3a"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer would like to first thank the authors for their great efforts in both revising the manuscript and providing a conclusive response. The authors put great effort into addressing my concerns. However, the reviewer still doubts the practical usage of this proposal for small image generatiation, which mak... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"afvehBeZyPB",
"M1UMBsMP5S",
"LDYC4ofhu3a",
"LDYC4ofhu3a",
"KybxDiFBnB",
"KybxDiFBnB",
"KybxDiFBnB",
"ZLAbbvBISae",
"nips_2022_zGPeowwxWb",
"nips_2022_zGPeowwxWb",
"nips_2022_zGPeowwxWb"
] |
nips_2022_lKFOwaYNQlb | Distributed Influence-Augmented Local Simulators for Parallel MARL in Large Networked Systems | Due to its high sample complexity, simulation is, as of today, critical for the successful application of reinforcement learning. Many real-world problems, however, exhibit overly complex dynamics, which makes their full-scale simulation computationally slow. In this paper, we show how to factorize large networked systems of many agents into multiple local regions such that we can build separate simulators that run independently and in parallel. To monitor the influence that the different local regions exert on one another, each of these simulators is equipped with a learned model that is periodically trained on real trajectories. Our empirical results reveal that distributing the simulation among different processes not only makes it possible to train large multi-agent systems in just a few hours but also helps mitigate the negative effects of simultaneous learning. | Accept |
All the reviewers agree that the paper presents novel and interesting contributions. As such I recommend acceptance.
Please incorporate the last feedback from Reviewer iVuf.
| train | [
"-I2p6hPg6eP",
"fi3Ht0v7iLC",
"-AKnD24HwHOv",
"SJv1ih8Xg14",
"7pZM6L33yws",
"_FIBByyir93",
"qFsP22wLJZw",
"m83hdlLMNQV"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the discussion in Appendix C regarding the scope and limitations of the simulator, I think it adds more clarity to the paper. I will increase my score to 7, mainly because the authors' response regarding the limited baselines in the experiment is convincing to me, but also because I think the limitat... | [
-1,
-1,
-1,
-1,
-1,
5,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"-AKnD24HwHOv",
"SJv1ih8Xg14",
"m83hdlLMNQV",
"qFsP22wLJZw",
"_FIBByyir93",
"nips_2022_lKFOwaYNQlb",
"nips_2022_lKFOwaYNQlb",
"nips_2022_lKFOwaYNQlb"
] |
nips_2022_NqbktPUkZf7 | Neural Temporal Walks: Motif-Aware Representation Learning on Continuous-Time Dynamic Graphs | Continuous-time dynamic graphs naturally abstract many real-world systems, such as social and transactional networks. While the research on continuous-time dynamic graph representation learning has made significant advances recently, neither graph topological properties nor temporal dependencies have been well-considered and explicitly modeled in capturing dynamic patterns. In this paper, we introduce a new approach, Neural Temporal Walks ($\texttt{NeurTWs}$), for representation learning on continuous-time dynamic graphs. By considering not only time constraints but also structural and tree traversal properties, our method conducts spatiotemporal-biased random walks to retrieve a set of representative motifs, enabling temporal nodes to be characterized effectively. With a component based on neural ordinary differential equations, the extracted motifs allow for irregularly-sampled temporal nodes to be embedded explicitly over multiple different interaction time intervals, enabling the effective capture of the underlying spatiotemporal dynamics. To enrich supervision signals, we further design a harder contrastive pretext task for model optimization. Our method demonstrates overwhelming superiority under both transductive and inductive settings on six real-world datasets. | Accept | The paper proposes a method based on the concepts of temporal walks and neural ordinary differential equations to learn effective node representations on continuous-time dynamic graphs. All the reviewers are positive about the paper, and the rebuttal/discussion helped clear up the concerns they had. | train | [
"VypDHxtFBEm",
"Fb9dujK76c",
"vFTcYP_Vj1l",
"JglA6YvTQQe",
"21giDhyl7Dn",
"IQ9uBP7Fqzw",
"egD3FfoLmS3",
"cGi0o1hszf",
"mYUbJQUY3bF",
"TtMdCRqi0ss",
"FKjDpArP1_k",
"_oUZ_6iyKcD",
"maHfa313o1C"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the follow-up comments and are delighted that some of your concerns have been addressed. Here, we provide brief remarks in response to the listed questions/concerns, and we hope they are somewhat helpful.\n\n> The high accuracy of CAW on Reddit and Wikipedia is not a strong reason to dis... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Fb9dujK76c",
"IQ9uBP7Fqzw",
"FKjDpArP1_k",
"TtMdCRqi0ss",
"nips_2022_NqbktPUkZf7",
"egD3FfoLmS3",
"cGi0o1hszf",
"FKjDpArP1_k",
"maHfa313o1C",
"_oUZ_6iyKcD",
"nips_2022_NqbktPUkZf7",
"nips_2022_NqbktPUkZf7",
"nips_2022_NqbktPUkZf7"
] |
nips_2022_tVHh_vD84EK | AutoST: Towards the Universal Modeling of Spatio-temporal Sequences | The analysis of spatio-temporal sequences plays an important role in many real-world applications, demanding a high model capacity to capture the interdependence among spatial and temporal dimensions. Previous studies provided separated network design in three categories: spatial first, temporal first, and spatio-temporal synchronous. However, the manually-designed heterogeneous models can hardly meet the spatio-temporal dependency capturing priority for various tasks. To address this, we proposed a universal modeling framework with three distinctive characteristics: (i) Attention-based network backbone, including S2T Layer (spatial first), T2S Layer (temporal first), and STS Layer (spatio-temporal synchronous). (ii) The universal modeling framework, named UniST, with a unified architecture that enables flexible modeling priorities with the proposed three different modules. (iii) An automatic search strategy, named AutoST, automatically searches the optimal spatio-temporal modeling priority by network architecture search. Extensive experiments on five real-world datasets demonstrate that UniST with any single type of our three proposed modules can achieve state-of-the-art performance. Furthermore, AutoST can achieve overwhelming performance with UniST. | Accept | This paper presents a framework for spatio-temporal sequence analysis. The framework contains spatial, temporal, and spatio-temporal building blocks and is operationalized in an architecture search approach. The reviewers considered the authors' rebuttal and engaged in further discussion regarding the merits of the paper.
There are concerns over the overall novelty of the approach and the combination of techniques that are assembled in the method. However, the potential of the proposed building blocks and overall empirical results were deemed to be a solid contribution. For these reasons, this paper is recommended to be accepted to NeurIPS. | train | [
"cY2Ys3oYvA",
"pgicE6RPePy",
"lxd4bjqMCjk",
"IWVjLN33Fe",
"u7tLWgBlov",
"Y0CfL67fIZb6",
"-70avpbrEPL",
"zdR78n1xQfF",
"-QTlvud19hY",
"I9sV6noP0cH",
"s6dsWaU6OZd"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to thank all the reviewers for the constructive feedback and your patience in waiting for our extended experiments. We appreciate Reviewer dPZy's suggestion and encouragement on the experiment.\n\nWe tried our best in the past few days to apply our proposed UniST / AutoST to the human pose predictio... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"IWVjLN33Fe",
"-QTlvud19hY",
"IWVjLN33Fe",
"Y0CfL67fIZb6",
"nips_2022_tVHh_vD84EK",
"s6dsWaU6OZd",
"I9sV6noP0cH",
"-QTlvud19hY",
"nips_2022_tVHh_vD84EK",
"nips_2022_tVHh_vD84EK",
"nips_2022_tVHh_vD84EK"
] |
nips_2022_BlF6CWzWKT7 | Estimating individual treatment effects under unobserved confounding using binary instruments | Estimating individual treatment effects (ITEs) from observational data is relevant in many fields such as personalized medicine. However, in practice, the treatment assignment is usually confounded by unobserved variables and thus introduces bias. A remedy to remove the bias is the use of instrumental variables (IVs). Such settings are widespread in medicine (e.g., trials where compliance is used as binary IV). In this paper, we propose a novel, multiple robust machine learning framework, called MRIV, for estimating ITEs using binary IVs and thus yield an unbiased ITE estimator. Different from previous work for binary IVs, our framework estimates the ITE directly via a pseudo outcome regression. (1) We provide a theoretical analysis where we show that our framework yields multiple robust convergence rates: our ITE estimator achieves fast convergence even if several nuisance estimators converge slowly. (2) We further show that our framework asymptotically outperforms state-of-the-art plug-in IV methods for ITE estimation. (3) We build upon our theoretical results and propose a tailored neural network architecture called MRIV-Net for ITE estimation using binary IVs. Across various computational experiments, we demonstrate empirically that our MRIV-Net achieves state-of-the-art performance. To the best of our knowledge, our MRIV is the first multiple robust machine learning framework tailored to estimating ITEs in the binary IV setting. | Reject | The authors offer a methodologically novel and very interesting approach to the important problem of estimating the CATE (or conditional LATE) with a binary instrument/treatment. The paper should be published at a good ML venue.
Unfortunately, I need to recommend rejection primarily because the key proofs supporting their main theorems were missing from the original supplementary material and were only submitted after the rebuttal. Given that these were crucial elements supporting their main results and that they were submitted post the original submission deadline it seems unfair (even if it was most probably an honest mistake of the authors).
Another issue that came up in the discussion phase that seems crucial to revise before a resubmission is that the authors' main estimation rate result relies on a main theorem of the unpublished work of Kennedy 2020, which has since been revised; the new theorem in Kennedy 2020 is technically very different and requires different assumptions. Hence the authors need to revise their main estimation rate result accordingly.
However, I acknowledge that this is not the main contribution of this work but a simple invocation of past work; the main contribution is rather to formulate a loss for CATE using the idea of Wang and Tchetgen Tchetgen. So this alone would most probably not be a reason for rejection. | val | [
"t9B2WnPTiua",
"nuHprnZg-Ts",
"3LDY8sD1nvy",
"qUuYCJE2fE4o",
"7UpSwZfQbtT",
"SPjqJwV0DTu",
"3EagjYV3X2K",
"x9w-2vkGEIQ",
"FfQknu4vi6I"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThanks for your responses and updates to the paper. I think it is more clear now. I do have some issues with the emphasis on multiple robustness, and questions about it's significance. However, it does seem that there are some practical advantages to this formulation, and the contextualization is... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"3LDY8sD1nvy",
"7UpSwZfQbtT",
"nips_2022_BlF6CWzWKT7",
"3EagjYV3X2K",
"x9w-2vkGEIQ",
"FfQknu4vi6I",
"nips_2022_BlF6CWzWKT7",
"nips_2022_BlF6CWzWKT7",
"nips_2022_BlF6CWzWKT7"
] |
nips_2022_wA7vZS-mSxv | Intrinsic dimensionality estimation using Normalizing Flows | How many degrees of freedom are there in a dataset consisting of $M$ samples embedded in $\mathbb{R}^D$? This number, formally known as \textsl{intrinsic dimensionality}, can be estimated using nearest neighbor statistics. However, nearest neighbor statistics do not scale to large datasets as their complexity scales quadratically in $M$, $\mathcal{O}(M^2)$. Additionally, methods based on nearest neighbor statistics perform poorly on datasets embedded in high dimensions where $D\gg 1$. In this paper, we propose a novel method to estimate the intrinsic dimensionality using Normalizing Flows that scale to large datasets and high dimensions. The method is based on some simple back-of-the-envelope calculations predicting how the singular values of the flow's Jacobian change when inflating the dataset with different noise magnitudes. Singular values associated with directions normal to the manifold evolve differently than singular values associated with directions tangent to the manifold. We test our method on various datasets, including 64x64 RGB images, where we achieve state-of-the-art results. | Accept | This paper discusses the use of normalizing flows for estimating the intrinsic dimensionality of the data sub-manifold embedded in high-dimensional ambient space. The idea of using NFs to estimate the intrinsic dimensionality via analyzing the eigenvalues of the Jacobian matrices is novel. The method is technically sound, and the paper presents detailed experiments on pedagogical examples, low-dimensional synthetic examples, and high-dimensional images generated by StyleGAN, which show the effects of this method.
| train | [
"-P_GpBUZgA0",
"cAqxdny7ArK",
"b2ZIKV5W0mE",
"sxNN5SlZfYi",
"ghspMyRJCDK",
"4am3hAqrGm",
"__Q3d2UjaIP",
"uvRiDmATC1S",
"g6jL2844S0vF",
"yen9pagOX29",
"lm1IYW-ooHt",
"eqv9XwbA70X",
"pBV5ZufHcNt",
"6zlQ-MUTPrH",
"1zJ3AmD1tlC",
"LKLo-nPmoxl",
"kjHO9zgOz7v",
"nzMD9PW1OTz",
"fKQFo5lyD... | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
... | [
" Thanks for your detailed response. My concerns are well resolved. ",
" We thank all the reviewers for this very fruitful discussion period. It will help us to improve our manuscript substantially. In particular, we commit to the following changes for the final version:\n\n+ Update the result on the StyleGan 64d... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"b2ZIKV5W0mE",
"nips_2022_wA7vZS-mSxv",
"4am3hAqrGm",
"6zlQ-MUTPrH",
"__Q3d2UjaIP",
"pBV5ZufHcNt",
"eqv9XwbA70X",
"g6jL2844S0vF",
"yen9pagOX29",
"6zlQ-MUTPrH",
"LKLo-nPmoxl",
"1zJ3AmD1tlC",
"UlfAKmyZYJt",
"Q9Xd4vHOXYp",
"1QWOijnh6ak",
"fKQFo5lyDPZ",
"nzMD9PW1OTz",
"nips_2022_wA7vZS... |
nips_2022_QvlcRh8hd8X | Neural Lyapunov Control of Unknown Nonlinear Systems with Stability Guarantees | Learning for control of dynamical systems with formal guarantees remains a challenging task. This paper proposes a learning framework to simultaneously stabilize an unknown nonlinear system with a neural controller and learn a neural Lyapunov function to certify a region of attraction (ROA) for the closed-loop system with provable guarantees. The algorithmic structure consists of two neural networks and a satisfiability modulo theories (SMT) solver. The first neural network is responsible for learning the unknown dynamics. The second neural network aims to identify a valid Lyapunov function and a provably stabilizing nonlinear controller. The SMT solver verifies the candidate Lyapunov function satisfies the Lyapunov conditions. We further provide theoretical guarantees of the proposed learning framework and show that the obtained Lyapunov function indeed verifies for the unknown nonlinear system under mild assumptions. We illustrate the effectiveness of the results with a few numerical experiments. | Accept | The problem statement and the technical approach are interesting. Some concerns about the scalability of the approach remain; however, given the novelty of the approach, I am recommending that the paper be accepted in spite of these concerns. Please make sure to incorporate the reviewers' feedback into the final version. | train | [
"0GwwOz_XF6G",
"jMqDdtJDj5",
"2xWKYQTzhHz",
"_TXGTP8hsdq",
"NRaVHse_7h",
"T_zPQrszwjd",
"tEhecTz8AX_",
"SLpRRtrTXbw",
"5tKoZ6wkI8m",
"dYkivZ2AoX9",
"8JTl2a-T8HQ",
"WzZDtDV7Z9q",
"5AfDNJmpAuT",
"scoLZxMgLjB",
"ZhtIbsM1Dp8",
"FmKqPI2ol1"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer KQuS,\n\nSince the author/reviewer discussion period will end tomorrow, if there is anything else you would like us to clarify, please let us know before then. Also, we would greatly appreciate it if you could adjust your score so that we know your stance on the paper before the discussion period en... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"2xWKYQTzhHz",
"_TXGTP8hsdq",
"NRaVHse_7h",
"WzZDtDV7Z9q",
"tEhecTz8AX_",
"tEhecTz8AX_",
"SLpRRtrTXbw",
"5tKoZ6wkI8m",
"dYkivZ2AoX9",
"8JTl2a-T8HQ",
"FmKqPI2ol1",
"ZhtIbsM1Dp8",
"scoLZxMgLjB",
"nips_2022_QvlcRh8hd8X",
"nips_2022_QvlcRh8hd8X",
"nips_2022_QvlcRh8hd8X"
] |
nips_2022_lfe1CdzuXBJ | Group Meritocratic Fairness in Linear Contextual Bandits | We study the linear contextual bandit problem where an agent has to select one candidate from a pool and each candidate belongs to a sensitive group. In this setting, candidates' rewards may not be directly comparable between groups, for example when the agent is an employer hiring candidates from different ethnic groups and some groups have a lower reward due to discriminatory bias and/or social injustice. We propose a notion of fairness that states that the agent's policy is fair when it selects a candidate with highest relative rank, which measures how good the reward is when compared to candidates from the same group. This is a very strong notion of fairness, since the relative rank is not directly observed by the agent and depends on the underlying reward model and on the distribution of rewards. Thus we study the problem of learning a policy which approximates a fair policy under the condition that the contexts are independent between groups and the distribution of rewards of each group is absolutely continuous. In particular, we design a greedy policy which at each round constructs a ridge regression estimate from the observed context-reward pairs, and then computes an estimate of the relative rank of each candidate using the empirical cumulative distribution function. We prove that, despite its simplicity and the lack of an initial exploration phase, the greedy policy achieves, up to log factors and with high probability, a fair pseudo-regret of order $\sqrt{dT}$ after $T$ rounds, where $d$ is the dimension of the context vectors. The policy also satisfies demographic parity at each round when averaged over all possible information available before the selection. Finally, we use simulated settings and experiments on the US census data to show that our policy achieves sub-linear fair pseudo-regret also in practice. 
| Accept | Reviewers agree on the merits of sharing the paper with the community. The authors are highly encouraged to incorporate the many constructive suggestions offered. | train | [
"ZKAk5c6ebRd",
"NvwCk-XiKEV",
"TEYIp51OUvF",
"zIyvX2Ufk5",
"wtNMHT7J6qg",
"CeDsCjTyUBf",
"nAZhwVM0prq",
"yMIs0E4CpfW",
"kkdAFcwgadh",
"Td0luMOFWYp",
"N_4k6AP0CU",
"eo21kmPj52t",
"MlwNhJvZlFX",
"qmYkQ_JH73b",
"31lB6-DKg53",
"NLgIN5l3tQL",
"Wyvqjvv-VNv",
"Xql1EVAPkY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Many thanks for increasing the score and for your valuable feedback!",
" I really appreciated the detailed response provided by the authors and the important additions made to the paper, which effectively address my main concern about the setting \"one arm = one group.\" In the previous version of the paper, I ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4
] | [
"NvwCk-XiKEV",
"kkdAFcwgadh",
"nAZhwVM0prq",
"Xql1EVAPkY",
"CeDsCjTyUBf",
"Td0luMOFWYp",
"yMIs0E4CpfW",
"eo21kmPj52t",
"Xql1EVAPkY",
"Wyvqjvv-VNv",
"NLgIN5l3tQL",
"31lB6-DKg53",
"qmYkQ_JH73b",
"nips_2022_lfe1CdzuXBJ",
"nips_2022_lfe1CdzuXBJ",
"nips_2022_lfe1CdzuXBJ",
"nips_2022_lfe1C... |
nips_2022_K8cD1Uv3wZy | Private Multiparty Perception for Navigation | We introduce a framework for navigating through cluttered environments by connecting multiple cameras together while simultaneously preserving privacy. Occlusions and obstacles in large environments are often challenging situations for navigation agents because the environment is not fully observable from a single camera view. Given multiple camera views of an environment, our approach learns to produce a multiview scene representation that can only be used for navigation, provably preventing one party from inferring anything beyond the output task. On a new navigation dataset that we will publicly release, experiments show that private multiparty representations allow navigation through complex scenes and around obstacles while jointly preserving privacy. Our approach scales to an arbitrary number of camera viewpoints. We believe developing visual representations that preserve privacy is increasingly important for many applications such as navigation. | Accept | *Summary*
- The paper proposes a framework for multiparty perception: it leverages multiparty computing techniques in fully observable environments (with multiple cameras), such that the agent learns to take actions (e.g., driving direction) without observing the raw images from the multiple cameras.
- The core idea is that only encrypted features extracted from the cameras are publicly exchanged between the parties, which are used to determine the actions.
- The approach is evaluated on an "obstacle world" dataset which will be released publicly. The goal of the agent is to reach a goal location while avoiding obstacles.
- Evaluation indicates that the performance of the proposed model reaches accuracy very close to the non-private model (96.9% vs 97.1%).
*Reviews*
- The paper received 3 reviews with final ratings: 7 (Accept), 6 (Weak Accept) and 4 (Borderline Reject).
- In general the reviewers found the paper to be well-motivated (mitigate privacy costs), well-written, with rigorous experiments and a dataset contribution.
- The reviewers raised some minor concerns regarding terminology and presentation, which have been resolved and are easily addressed in a camera-ready version.
- The more substantive concerns raised by reviewers were:
- It might be possible to train a network to reconstruct camera images from the encrypted feature vectors. The authors have convincingly responded to this concern by pointing to prior work and providing an additional experiment.
- Would the method work in a real-world environment where there is a lot more variation between scenes and the map and cameras exhibit noise, occlusion and artifacts? The authors argue that the paper is a first step towards this aim, which would constitute follow-up work.
*Decision*
- It's the view of the AC that the 'Borderline Reject' review (N4st) did not actually identify any substantive weaknesses in the paper that would warrant rejection. Reviewer N4st's main concern is 'limited technical contributions' which is convincingly rebutted by the authors. I'm swayed by the two positive reviews; the paper should be accepted. | train | [
"g6RMzhuX1X",
"3myTiSVroyB",
"ojJfnLhW8Hz",
"GrsZzwgJmeU",
"PpbexpJ3QIm",
"mdM_fKYThih",
"hirEJVD5QnB"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your comments, and we are glad that you appreciate the novelty of our approach. Our paper makes technical contributions on multiple levels: \n\nNew Framework for Navigation: Navigation is a longstanding challenge in the machine learning, robotics, and computer vision fields, with many papers publish... | [
-1,
-1,
-1,
-1,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
3,
2,
2
] | [
"hirEJVD5QnB",
"ojJfnLhW8Hz",
"mdM_fKYThih",
"PpbexpJ3QIm",
"nips_2022_K8cD1Uv3wZy",
"nips_2022_K8cD1Uv3wZy",
"nips_2022_K8cD1Uv3wZy"
] |
nips_2022_AYkBQEm5AY | Scalable Multi-agent Covering Option Discovery based on Kronecker Graphs | Covering option discovery has been developed to improve the exploration of RL in single-agent scenarios with sparse reward signals, through connecting the most distant states in the embedding space provided by the Fiedler vector of the state transition graph. Given that the joint state space grows exponentially with the number of agents in multi-agent systems, existing research still relying on single-agent option discovery either becomes prohibitive or fails to directly discover joint options that improve the connectivity of the joint state space. In this paper, we show how to directly compute multi-agent options with collaborative exploratory behaviors while still enjoying the ease of decomposition. Our key idea is to approximate the joint state space as a Kronecker graph, based on which we can directly estimate its Fiedler vector using the Laplacian spectrum of individual agents' transition graphs. Further, considering that directly computing the Laplacian spectrum is intractable for tasks with infinite-scale state spaces, we further propose a deep learning extension of our method by estimating eigenfunctions through NN-based representation learning techniques. The evaluation on multi-agent tasks built with simulators like Mujoco shows that the proposed algorithm can successfully identify multi-agent options, and significantly outperforms the state-of-the-art. Codes are available at: https://github.itap.purdue.edu/Clan-labs/Scalable_MAOD_via_KP. | Accept | This paper proposes an extension of the covering discovery framework from single-agent to MARL. There was unanimous support for accepting this paper and agreement that it is an interesting and very useful contribution.
The responses to reviewers helped to clarify a few points. When preparing a revision, we strongly encourage you to address these points which came up during the review process:
* Consider putting more emphasis on the Mujoco experiments and DL extension in the main paper, to illustrate the generality of the results
* Discuss limitations and future work related to communication complexity of the proposed approach
* Clarify/justify details of how tabular and NN-based methods are compared in the grid world experiments
* Clarify the setup in the option discovery phase | test | [
"-EOh6sQLukvm",
"98F4fqh2F-6",
"dy-zgyb5TQ",
"ZWSMR1VajgP",
"H79DXe38OboN",
"vj4GtUyZ4FM",
"dtmsEO0uKHV",
"YuV02qfGOAn",
"AL8mU74zJH0",
"1bn8MgUDFpn",
"HAlS0XSMFsJ",
"tsa2voeXUa"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I will raise my score to 6. I believe this is hard to parse from the paper itself (without your comment above). Please consider emphasizing the Mujoco results and the deep learning extension.",
" ## About the approximation error of the estimated eigenvectors:\nAs mentioned in the third paragraph of Section 4.3,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
4,
3
] | [
"dtmsEO0uKHV",
"dy-zgyb5TQ",
"HAlS0XSMFsJ",
"YuV02qfGOAn",
"tsa2voeXUa",
"1bn8MgUDFpn",
"AL8mU74zJH0",
"HAlS0XSMFsJ",
"nips_2022_AYkBQEm5AY",
"nips_2022_AYkBQEm5AY",
"nips_2022_AYkBQEm5AY",
"nips_2022_AYkBQEm5AY"
] |
nips_2022_NI6hB70ajO7 | Beyond IID: data-driven decision-making in heterogeneous environments | In this work, we study data-driven decision-making and depart from the classical identically and independently distributed (i.i.d.) assumption. We present a new framework in which historical samples are generated from unknown and different distributions, which we dub \textit{heterogeneous environments}. These distributions are assumed to lie in a heterogeneity ball with known radius and centered around the (also) unknown future (out-of-sample) distribution on which the performance of a decision will be evaluated. We quantify the asymptotic worst-case regret that is achievable by central data-driven policies such as Sample Average Approximation, but also by rate-optimal ones, as a function of the radius of the heterogeneity ball. Our work shows that the type of achievable performance varies considerably across different combinations of problem classes and notions of heterogeneity. We demonstrate the versatility of our framework by comparing achievable guarantees for the heterogeneous version of widely studied data-driven problems such as pricing, ski-rental, and newsvendor.
En route, we establish a new connection between data-driven decision-making and distributionally robust optimization. | Accept | This paper studies an interesting setting in which the future training data and the past data are generated from different distributions, but the two distributions are close in terms of Kolmogorov or Wasserstein distances. The paper gives a nice set of algorithmic and impossibility results. Initially, some reviewers had questions about the novelty of the framework and technical approaches; the authors have addressed most of these questions properly. | test | [
"Iz_tV_GKA-W",
"udYGDff9qB-j",
"IF8uvl8d5GA",
"mGhk2XvpQAk",
"veUG4f6MMZ6",
"DyETcXUEGb",
"-Bvv6JFC05J",
"OiSCIR83XE",
"W4aRhHDx7fx",
"xH5fxgAWWSV",
"RHaR67g99WF",
"zzkE_7aAfyJ",
"ghfPrnaC7wM",
"3IXjtzC3rcz",
"HdxCArb7c1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThanks for the clarification. The revision does look much better. \n\nGiven the other scores, I wouldn't be in objection to this paper going through. Yet I am personally still not quite convinced about the technical challenge of the analysis, say, after reading Line 342-350. So I would like to ke... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2,
3,
4
] | [
"xH5fxgAWWSV",
"DyETcXUEGb",
"OiSCIR83XE",
"W4aRhHDx7fx",
"-Bvv6JFC05J",
"HdxCArb7c1",
"3IXjtzC3rcz",
"ghfPrnaC7wM",
"zzkE_7aAfyJ",
"RHaR67g99WF",
"nips_2022_NI6hB70ajO7",
"nips_2022_NI6hB70ajO7",
"nips_2022_NI6hB70ajO7",
"nips_2022_NI6hB70ajO7",
"nips_2022_NI6hB70ajO7"
] |
nips_2022_wmwgLEPjL9 | In Defense of the Unitary Scalarization for Deep Multi-Task Learning | Recent multi-task learning research argues against unitary scalarization, where training simply minimizes the sum of the task losses. Several ad-hoc multi-task optimization algorithms have instead been proposed, inspired by various hypotheses about what makes multi-task settings difficult. The majority of these optimizers require per-task gradients, and introduce significant memory, runtime, and implementation overhead. We show that unitary scalarization, coupled with standard regularization and stabilization techniques from single-task learning, matches or improves upon the performance of complex multi-task optimizers in popular supervised and reinforcement learning settings. We then present an analysis suggesting that many specialized multi-task optimizers can be partly interpreted as forms of regularization, potentially explaining our surprising results. We believe our results call for a critical reevaluation of recent research in the area. | Accept | This paper criticizes the SMTO methods by building a single experimental pipeline and finding that none of the SMTOs consistently outperforms the unitary scalarization method, which is the simplest and cheapest method for multi-task learning. To explain the findings, the authors postulate that SMTOs act as regularizers and present an analysis.
Generally, I like this kind of paper, as it provides a critical view and corresponding evidence as well as an in-depth analysis, which lets us stop and think about the real progress we have made so far, and indeed helps the long-term development of the studied area.
The reviewers generally provided supportive comments and positive overall ratings. I also make an acceptance recommendation for this paper.
| train | [
"gKRXrV5-oeb",
"nMCSJTKJIHd",
"bsFdTI6y_3",
"DMsfdUHBWBW",
"MyAQcbqFBe",
"cK2XnZ9O1lW",
"vzmVi2HoBSI",
"LqxNTcJs743",
"oT8K0LaGaAj",
"9X1_PXwFSrT",
"nEeXfFSNBX",
"Di1-irScJqf",
"NS3LNw-k1cE",
"P3Xv3fqcSH",
"ykHBZaTQ-cL",
"yKX9JZKek3r"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' response. It's a fair point that unitary scalarization's ease of tuning can be considered a virtue of the method, and that the results may be immediately actionable to practitioners. I'm also glad to see Figure 5 updated for improved accessibility!\n\nHowever, regarding the authors' argu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nEeXfFSNBX",
"DMsfdUHBWBW",
"MyAQcbqFBe",
"Di1-irScJqf",
"cK2XnZ9O1lW",
"vzmVi2HoBSI",
"LqxNTcJs743",
"oT8K0LaGaAj",
"9X1_PXwFSrT",
"yKX9JZKek3r",
"ykHBZaTQ-cL",
"P3Xv3fqcSH",
"nips_2022_wmwgLEPjL9",
"nips_2022_wmwgLEPjL9",
"nips_2022_wmwgLEPjL9",
"nips_2022_wmwgLEPjL9"
] |
nips_2022_eyE9Fb2AvOT | The Gyro-Structure of Some Matrix Manifolds | In this paper, we study the gyrovector space structure (gyro-structure) of matrix manifolds. Our work is motivated by the success of hyperbolic neural networks (HNNs) that have demonstrated impressive performance in a variety of applications. At the heart of HNNs is the theory of gyrovector spaces that provides a powerful tool for studying hyperbolic geometry. Here we focus on two matrix manifolds, i.e., Symmetric Positive Definite (SPD) and Grassmann manifolds, and consider connecting the Riemannian geometry of these manifolds with the basic operations, i.e., the binary operation and scalar multiplication on gyrovector spaces. Our work reveals some interesting facts about SPD and Grassmann manifolds. First, SPD matrices with an Affine-Invariant (AI) or a Log-Euclidean (LE) geometry have rich structure with strong connection to hyperbolic geometry. Second, linear subspaces, when equipped with our proposed basic operations, form what we call gyrocommutative and gyrononreductive gyrogroups. Furthermore, they share remarkable analogies with gyrovector spaces. We demonstrate the applicability of our approach for human activity understanding and question answering. | Accept | The paper studies the gyrovector space structure of a few matrix manifolds. This is of broader interest to practitioners who may be interested in using geometric tools in applications. As the reviewers mention, the analysis, although interesting, is per se straightforward for many manifolds. Having said that, this AC believes it is a worthwhile effort to compile such results in one place and show the practical benefits. This paper is a good attempt in this direction. In the final version, please incorporate all the suggestions, both the added explanations as well as the new results presented in the discussion period.
| test | [
"AiIjIuov6Nl",
"sB4QsQR7Q_",
"xyZXPf-_P7",
"N4RbF1Qqigl",
"Q2FMgrXWft",
"-TnJpqaTIyD"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the observation about the presentation in Section 3. This helps us to better clarify which parts of the paper are new with respect to the literature.\n* Eqs. (1) and (2) generalize the work of [11] to the matrix manifold setting. We will add this information when we introduce the method in Section 3... | [
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
3,
3,
2
] | [
"Q2FMgrXWft",
"N4RbF1Qqigl",
"-TnJpqaTIyD",
"nips_2022_eyE9Fb2AvOT",
"nips_2022_eyE9Fb2AvOT",
"nips_2022_eyE9Fb2AvOT"
] |
nips_2022_1BJUwgi3ed | Controlling Confusion via Generalisation Bounds | We establish new generalisation bounds for multiclass classification by abstracting to a more general setting of discretised error types. Extending the PAC-Bayes theory, we are hence able to provide fine-grained bounds on performance for multiclass classification, as well as applications to other learning problems including discretisation of regression losses. Tractable training objectives are derived from the bounds. The bounds are uniform over all weightings of the discretised error types and thus can be used to bound weightings not foreseen at training, including the full confusion matrix in the multiclass classification case. | Reject |
The consensus was that the reviewers were not convinced that the results were significant, and did not see significant fundamental novelty in the analysis.
| test | [
"wnVPOS_vn8C",
"20XF7WhZ4ID",
"oT4RMZIwtkx",
"zIrlB8JqLu",
"4GVZxQjECgF",
"ChVqC6w038s",
"6ZEBXqxdfy7",
"hdVTKzfZMP",
"bOwnxrMgO-2",
"fAQGA2YMt6W",
"7tV-vcAH_mk",
"YU6xPyqafRP",
"4erNs1LSu0w",
"QyLgW4_I7BF"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response to our comment. We would hope that our example, along with the detailed explanation in the paper and the supplementary material of how to turn the bound into a training objective would be sufficient to convince readers of the practical utility of our method and to demonstrate what it c... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"20XF7WhZ4ID",
"fAQGA2YMt6W",
"fAQGA2YMt6W",
"4GVZxQjECgF",
"bOwnxrMgO-2",
"4erNs1LSu0w",
"hdVTKzfZMP",
"QyLgW4_I7BF",
"4erNs1LSu0w",
"YU6xPyqafRP",
"nips_2022_1BJUwgi3ed",
"nips_2022_1BJUwgi3ed",
"nips_2022_1BJUwgi3ed",
"nips_2022_1BJUwgi3ed"
] |
nips_2022_0xdH-09oGD7 | Effective Dimension in Bandit Problems under Censorship | In this paper, we study both multi-armed and contextual bandit problems in censored environments. Our goal is to estimate the performance loss due to censorship in the context of classical algorithms designed for uncensored environments. Our main contributions include the introduction of a broad class of censorship models and their analysis in terms of the effective dimension of the problem -- a natural measure of its underlying statistical complexity and main driver of the regret bound. In particular, the effective dimension allows us to maintain the structure of the original problem at first order, while embedding it in a bigger space, and thus naturally leads to results analogous to uncensored settings. Our analysis involves a continuous generalization of the Elliptical Potential Inequality, which we believe is of independent interest. We also discover an interesting property of decision-making under censorship: a transient phase during which initial misspecification of censorship is self-corrected at an extra cost; followed by a stationary phase that reflects the inherent slowdown of learning governed by the effective dimension. Our results are useful for applications of sequential decision-making models where the feedback received depends on strategic uncertainty (e.g., agents’ willingness to follow a recommendation) and/or random uncertainty (e.g., loss or delay in arrival of information). | Accept | The reviews for this paper have a high variance (ratings 3,4,6,7). The reviewer who gave a 3 seems unfamiliar with the basic bandit literature and I don't find the review particularly insightful. The other three reviewers mention that the theory is useful and solid, and the setting is interesting to them. On the negative side the reviewers mention that the paper is quite dense and some motivating examples would be useful. 
These are both minor points and, in my opinion, the positive aspects outweigh the negatives. | test | [
"5WHRMNiYb6H",
"NHXEYn15Flx",
"1fpAxYJMFzl",
"OlCScNFUacB",
"T0OFkyPedN1",
"0ndV2jnMiR",
"uqA0DxiVp6-",
"vfKPib37Zbo",
"-zU1dCJOeEg",
"z5xEfozt046",
"OSzaNzy6VkT",
"rjq_tdNMdbe",
"7C9z5XpGIgR",
"SVGxkrI0Ifv"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for taking the time to respond. I read the rebuttal and my opinion has not changed due to that i don't think my concerns are addressed. ",
" Please see our answer to questions 6-7 below:\n\n> 4.6) _“It seems that a key notation is ignored. \\tilde{\\Delta}_t^{\\lambda} is ignored. It lets reading the pap... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
1,
2
] | [
"OlCScNFUacB",
"SVGxkrI0Ifv",
"SVGxkrI0Ifv",
"SVGxkrI0Ifv",
"7C9z5XpGIgR",
"7C9z5XpGIgR",
"rjq_tdNMdbe",
"rjq_tdNMdbe",
"OSzaNzy6VkT",
"OSzaNzy6VkT",
"nips_2022_0xdH-09oGD7",
"nips_2022_0xdH-09oGD7",
"nips_2022_0xdH-09oGD7",
"nips_2022_0xdH-09oGD7"
] |
nips_2022_pHd0v8W30O | LAPO: Latent-Variable Advantage-Weighted Policy Optimization for Offline Reinforcement Learning | Offline reinforcement learning methods hold the promise of learning policies from pre-collected datasets without the need to query the environment for new samples. This setting is particularly well-suited for continuous control robotic applications for which online data collection based on trial-and-error is costly and potentially unsafe. In practice, offline datasets are often heterogeneous, i.e., collected in a variety of scenarios, such as data from several human demonstrators or from policies that act with different purposes. Unfortunately, such datasets often contain action distributions with multiple modes and, in some cases, lack a sufficient number of high-reward trajectories, which render offline policy training inefficient. To address this challenge, we propose to leverage latent-variable generative model to represent high-advantage state-action pairs leading to better adherence to data distributions that contributes to solving the task, while maximizing reward via a policy over the latent variable. As we empirically show on a range of simulated locomotion, navigation, and manipulation tasks, our method referred to as latent-variable advantage-weighted policy optimization (LAPO), improves the average performance of the next best-performing offline reinforcement learning methods by 49\% on heterogeneous datasets, and by 8\% on datasets with narrow and biased distributions. | Accept | The paper studies the setting with heterogeneous datasets in offline reinforcement learning
The paper proposes an offline-RL algorithm that is designed to efficiently handle the scenario.
The paper received 4 expert reviews with a broad consensus on the value of the problem and the strengths of the method. While there is a lack of unanimous consensus, all reviewers agree that the problem setting is of interest to the broader community, and that, despite some concerns about the method, it would be to the community's benefit to see this work. | train | [
"5jZtGpCNLr",
"CVRoxTBvhW",
"-O3sxeJJONc",
"1GPgNN3Cj3",
"6lgAA3qmRwK",
"v3-jfmYP2KI",
"qGvdq4djAYH",
"HV1Y1Dy19kA",
"xpyDmrhC1Cp",
"3f_Yq9v5-T9",
"1rBMF_RKAgL",
"ESn4RRNgAE",
"ox7Z108hFkr",
"-G5TQAO8_PD",
"fl83CkWgVqE"
] | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We consider the type of latent policy as an implementation choice, which can be implemented as deterministic or non-deterministic.\nWe also mentioned in Line 162 that \"For deterministic versions of the latent policy, the policy output can be limited to [−zmax, zmax] using a tanh function\", and we implemented th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"CVRoxTBvhW",
"-O3sxeJJONc",
"1GPgNN3Cj3",
"6lgAA3qmRwK",
"qGvdq4djAYH",
"xpyDmrhC1Cp",
"HV1Y1Dy19kA",
"fl83CkWgVqE",
"-G5TQAO8_PD",
"ox7Z108hFkr",
"ESn4RRNgAE",
"nips_2022_pHd0v8W30O",
"nips_2022_pHd0v8W30O",
"nips_2022_pHd0v8W30O",
"nips_2022_pHd0v8W30O"
] |
nips_2022_xOqqlH_E5k0 | Augmented Deep Unrolling Networks for Snapshot Compressive Hyperspectral Imaging | Snapshot compressive hyperspectral imaging requires reconstructing a hyperspectral image from its snapshot measurement. This paper proposes an augmented deep unrolling neural network for solving such a challenging reconstruction problem. The proposed network is based on the unrolling of a proximal gradient descent algorithm with two innovative modules for gradient update and proximal mapping. The gradient update is modeled by a memory-assistant descent module motivated by the momentum-based acceleration heuristics. The proximal mapping is modeled by a sub-network with a cross-stage self-attention which effectively exploits inherent self-similarities of a hyperspectral image along the spectral axis, as well as enhancing the feature flow through the network. Moreover, a spectral geometry consistency loss is proposed to encourage the model to concentrate more on the geometric layer of spectral curves for better reconstruction. Extensive experiments on several datasets showed the performance advantage of our approach over the latest methods. | Reject | The paper received mixed reviews with two weak accepts and two borderline rejects, making it a borderline case. Based on the reviews and the rebuttal and on their own reading of the paper, the area chair would like to make a few remarks:
- the paper achieves good empirical results in terms of PSNR for the task of compressive HSI. This is the main strength of the paper.
- the code for reproducing the experiments is not provided. Even though this is not a critical requirement, it would definitely help in assessing the quality of the experiments, given that the contribution is mostly methodological.
- the method consists of modifying several components of classical deep unrolling networks. Some of them are related to existing work, such as the idea of using LSTMs to exploit past gradients, as discussed in the rebuttal. Such discussions should be included in the paper, even though this idea was not investigated for HSI in this prior work. More generally, the modifications of the DUN method seem quite generic and not specific to compressive HSI. Evaluating their effect on other HSI tasks would be helpful to get a better understanding of the importance of these modifications: are they effective beyond compressive HSI, or beyond HSI? If not, why?
- The rebuttal was useful. Numerous additional experiments were conducted. Yet, it would have been good to include them in a revised version of the pdf.
At this point, it seems that the method is promising, but that a major revision of the paper is required, leading to a reject decision for NeurIPS this year.
| train | [
"FGNAlHnqFpB",
"891EdvZ5Zmd",
"LFILB2DND5J",
"Lhaac3dhDup",
"xTY9wXo9sYs",
"wp_Nu7QZ90B",
"uoWdw1zEa6d",
"s3MJyMOSAV",
"aSOzpinN63K"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" ***1、''All quantitative results shown are valid on noise-free measurements only. No quantitative results are provided for noisy measurements, although the image formation model described in the introduction explicitly mentions the presence of noise on measurements. This is even more dissonant as the model is fine... | [
-1,
-1,
-1,
-1,
-1,
6,
4,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
5
] | [
"s3MJyMOSAV",
"wp_Nu7QZ90B",
"s3MJyMOSAV",
"aSOzpinN63K",
"uoWdw1zEa6d",
"nips_2022_xOqqlH_E5k0",
"nips_2022_xOqqlH_E5k0",
"nips_2022_xOqqlH_E5k0",
"nips_2022_xOqqlH_E5k0"
] |
nips_2022_0tpZgkAKVjB | Luckiness in Multiscale Online Learning | Algorithms for full-information online learning are classically tuned to minimize their worst-case regret. Modern algorithms additionally provide tighter guarantees outside the adversarial regime, most notably in the form of constant pseudoregret bounds under statistical margin assumptions. We investigate the multiscale extension of the problem where the loss ranges of the experts are vastly different. Here, the regret with respect to each expert needs to scale with its range, instead of the maximum overall range. We develop new multiscale algorithms, tuning schemes and analysis techniques to show that worst-case robustness and adaptation to easy data can be combined at a negligible cost. We further develop an extension with optimism and apply it to solve multiscale two-player zero-sum games. We demonstrate experimentally the superior performance of our scale-adaptive algorithm and discuss the subtle relationship of our results to Freund's 2016 open problem.
| Accept | The reviewers were generally happy with this paper. There were some comments about better experiments and one comment about properly comparing this work to Bubeck et al (e.g. difference in rates and comparative advantages). I encourage the authors to better explain this related work and also incorporate some of the clarifying discussions with reviewers in the final version of the paper. | train | [
"_AtpyOrH4wI",
"pkB4EE9Lej",
"dtuJUVEUOFD",
"zfr66oIu4wvL",
"LU46ml3ZiaD",
"MN9QNRB6P_z",
"y5zqeT-rauZ",
"D5wMEJp0I8R",
"H9XadHevc7",
"0TCm9rF7UL6",
"mvf0hhoIMpa",
"OKioU7ufsLc"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot for your rebuttal. The authors comment that the algorithm by Bubeck et al also achieves constant regret under Massart noise but needs their analysis -- what is the advantage of the proposed algorithm in the paper in this case? Why not just analyze the existing algorithm by Bubeck et al.? Also, I am r... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
3,
3
] | [
"D5wMEJp0I8R",
"dtuJUVEUOFD",
"zfr66oIu4wvL",
"MN9QNRB6P_z",
"OKioU7ufsLc",
"mvf0hhoIMpa",
"0TCm9rF7UL6",
"H9XadHevc7",
"nips_2022_0tpZgkAKVjB",
"nips_2022_0tpZgkAKVjB",
"nips_2022_0tpZgkAKVjB",
"nips_2022_0tpZgkAKVjB"
] |
nips_2022_wgRQ1IM4g_w | On Kernelized Multi-Armed Bandits with Constraints | We study a stochastic bandit problem with a general unknown reward function and a general unknown constraint function. Both functions can be non-linear (even non-convex) and are assumed to lie in a reproducing kernel Hilbert space (RKHS) with a bounded norm. This kernelized bandit setup strictly generalizes standard multi-armed bandits and linear bandits. In contrast to safety-type hard constraints studied in prior works, we consider soft constraints that may be violated in any round as long as the cumulative violations are small, which is motivated by various practical applications. Our ultimate goal is to study how to utilize the nature of soft constraints to attain a finer complexity-regret-constraint trade-off in the kernelized bandit setting. To this end, leveraging primal-dual optimization, we propose a general framework for both algorithm design and performance analysis. This framework builds upon a novel sufficient condition, which not only is satisfied under general exploration strategies, including \emph{upper confidence bound} (UCB), \emph{Thompson sampling} (TS), and new ones based on \emph{random exploration}, but also enables a unified analysis for showing both sublinear regret and sublinear or even zero constraint violation. We demonstrate the superior performance of our proposed algorithms via numerical experiments based on both synthetic and real-world datasets. Along the way, we also make the first detailed comparison between two popular methods for analyzing constrained bandits and Markov decision processes (MDPs) by discussing the key difference and some subtleties in the analysis, which could be of independent interest to the communities. | Accept | The paper provides new techniques (algorithmic as well as analytical) to solve black box optimization of smooth functions with constraints. 
The reviewers are largely in favor of the paper's contributions, and the author responses have helped to clarify several aspects of the presentation and connections to existing work. Therefore, I recommend that the paper be accepted. | test | [
"z8o99Eq5LX",
"lINwdI-5Odj",
"xVgmQqzN9x9",
"lnVFVXOVoIc",
"w_xO1oPVORv",
"RQo5oGzzKRr",
"A8FtQ4g7Rv9",
"9weEavfs3F_",
"FEY0kuxerYC",
"Fozv8XLNfM0",
"1rsa1yNzNyX",
"UcebwpvHCf",
"OYBKIna1Rqj",
"qTCffVuY41o",
"puTlJ9-nZkw"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad to hear that the reviewer finds our rebuttal satisfactory. We thank the reviewer again for the appreciation of our discussion and comparison of convex-opt and Lyapunovp-drift methods for constrained bandits/RL, which is indeed one of our main contributions.",
" We are glad to hear that the reviewer ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3,
4
] | [
"lnVFVXOVoIc",
"w_xO1oPVORv",
"1rsa1yNzNyX",
"RQo5oGzzKRr",
"FEY0kuxerYC",
"A8FtQ4g7Rv9",
"9weEavfs3F_",
"puTlJ9-nZkw",
"qTCffVuY41o",
"OYBKIna1Rqj",
"UcebwpvHCf",
"nips_2022_wgRQ1IM4g_w",
"nips_2022_wgRQ1IM4g_w",
"nips_2022_wgRQ1IM4g_w",
"nips_2022_wgRQ1IM4g_w"
] |
nips_2022_SiQAZV0yEny | Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis | Each year, expert-level performance is attained in increasingly-complex multiagent domains, where notable examples include Go, Poker, and StarCraft II. This rapid progression is accompanied by a commensurate need to better understand how such agents attain this performance, to enable their safe deployment, identify limitations, and reveal potential means of improving them. In this paper we take a step back from performance-focused multiagent learning, and instead turn our attention towards agent behavior analysis. We introduce a model-agnostic method for discovery of behavior clusters in multiagent domains, using variational inference to learn a hierarchy of behaviors at the joint and local agent levels. Our framework makes no assumption about agents' underlying learning algorithms, does not require access to their latent states or policies, and is trained using only offline observational data. We illustrate the effectiveness of our method for enabling the coupled understanding of behaviors at the joint and local agent level, detection of behavior changepoints throughout training, discovery of core behavioral concepts, demonstrate the approach's scalability to a high-dimensional multiagent MuJoCo control domain, and also illustrate that the approach can disentangle previously-trained policies in OpenAI's hide-and-seek domain. | Accept | The reviewers agreed this paper studies an interesting problem and provide an interesting contribution to the multi-agent community. We urge the authors to include the added experiments and information (e.g., suggested related work) into the main text. | train | [
"8lKrVD6xVXe",
"FLUu5qs1Q0",
"nw0dWXLaaaX",
"fw7tWs-AzW",
"Lh5aTQGecoDN",
"cGHlHMTZGz",
"-K1V-xwxXbNG",
"319mMrASomi",
"sTpG-FtMw0_",
"LpX67X33lXt",
"iBSnSDUi6mS",
"XEIkoZ7WZtX",
"belPX0DtLn0",
"2gBHBb9wIjNZ",
"6cj5XMUiRzQ",
"hXRnqZK3Nhp",
"-fhwwX6kL8r",
"2mvWUpE64vU",
"PQSMjxL6v... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",... | [
" I'm totally surprised by the updated draft and sincerely appreciate the efforts from the authors. This paper has reached its potential. :)\n\nI'm even looking forward to having an in-person conversation with the authors at NeurIPS. Good job.",
" I thank the authors for the detailed responses. They addressed al... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
4
] | [
"sTpG-FtMw0_",
"PQSMjxL6vfx",
"LpX67X33lXt",
"Lh5aTQGecoDN",
"cGHlHMTZGz",
"-K1V-xwxXbNG",
"319mMrASomi",
"sTpG-FtMw0_",
"_Y7vd-Fd9pC",
"iBSnSDUi6mS",
"XEIkoZ7WZtX",
"belPX0DtLn0",
"kQl8Idjkvv0",
"6cj5XMUiRzQ",
"hXRnqZK3Nhp",
"-fhwwX6kL8r",
"2mvWUpE64vU",
"CLceFnhD9ZE",
"9JsmbKNW... |
nips_2022_qmm__jMjMlL | Unsupervised Object Representation Learning using Translation and Rotation Group Equivariant VAE | In many imaging modalities, objects of interest can occur in a variety of locations and poses (i.e. are subject to translations and rotations in 2d or 3d), but the location and pose of an object does not change its semantics (i.e. the object's essence). That is, the specific location and rotation of an airplane in satellite imagery, or the 3d rotation of a chair in a natural image, or the rotation of a particle in a cryo-electron micrograph, do not change the intrinsic nature of those objects. Here, we consider the problem of learning semantic representations of objects that are invariant to pose and location in a fully unsupervised manner. We address shortcomings in previous approaches to this problem by introducing TARGET-VAE, a translation and rotation group-equivariant variational autoencoder framework. TARGET-VAE combines three core innovations: 1) a rotation and translation group-equivariant encoder architecture, 2) a structurally disentangled distribution over latent rotation, translation, and a rotation-translation-invariant semantic object representation, which are jointly inferred by the approximate inference network, and 3) a spatially equivariant generator network. In comprehensive experiments, we show that TARGET-VAE learns disentangled representations without supervision that significantly improve upon, and avoid the pathologies of, previous methods. When trained on images highly corrupted by rotation and translation, the semantic representations learned by TARGET-VAE are similar to those learned on consistently posed objects, dramatically improving clustering in the semantic latent space. Furthermore, TARGET-VAE is able to perform remarkably accurate unsupervised pose and location inference. We expect methods like TARGET-VAE will underpin future approaches for unsupervised object generation, pose prediction, and object detection. 
Our code is available at https://github.com/SMLC-NYSBC/TARGET-VAE. | Accept | The authors propose an effective method for group-equivariant representation learning in an unsupervised setting. The authors use a group convolutional encoder in the VAE setting. This is an interesting problem in the community. The proposed method makes sense and seems technically sound. The experiment results are good and convincing in general. As pointed out by multiple reviewers, however, it would have been better to focus more on multi-object settings. Most of the concerns raised by the reviewers are well addressed. Although reviewer 2xXA gave a relatively lower score of 4, the reviewer admitted his/her lack of related background knowledge and the confidence score was low. | train | [
"t0oWFZ6S8e",
"l0TYi73wh-_",
"NCfzTuSfLnr",
"fTREvowLHyi",
"dGnib-V9BQy",
"z4H9g3TH5fo",
"PNeF5Tbakxu",
"evrgd-3U_Aw"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response! Most of my concerns are addressed. I maintain my score and continue to support accepting the paper. \n\n1. Yes, I agree that there is a benefit to having explicit factors grounded as real-world quantities that humans can easily interpret (e.g. rotation/scale). \n2. The presentation wa... | [
-1,
-1,
-1,
-1,
-1,
6,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"dGnib-V9BQy",
"fTREvowLHyi",
"evrgd-3U_Aw",
"PNeF5Tbakxu",
"z4H9g3TH5fo",
"nips_2022_qmm__jMjMlL",
"nips_2022_qmm__jMjMlL",
"nips_2022_qmm__jMjMlL"
] |
nips_2022_9zWlrwlT9-j | Unbiased Estimates for Multilabel Reductions of Extreme Classification with Missing Labels | This paper considers the missing-labels problem in the extreme multilabel classification (XMC) setting, i.e. a setting
with a very large label space. The goal in XMC often is to maximize either precision or recall of the top-ranked
predictions, which can be achieved by reducing the multilabel problem into a series of binary (One-vs-All) or multiclass
(Pick-all-Labels) problems. Missing labels are a ubiquitous phenomenon in XMC tasks, yet the interaction between missing
labels and multilabel reductions has hitherto only been investigated for the case of One-vs-All reduction. In this
paper, we close this gap by providing unbiased estimates for general (non-decomposable) multilabel losses, which enables
unbiased estimates of the Pick-all-Labels reduction, as well as the normalized reductions which are required for
consistency with the recall metric. We show that these estimators suffer from increased variance and may lead to
ill-posed optimization problems. To address this issue, we propose to use convex upper bounds which trade off an
increase in bias against a strong decrease in variance. | Reject | This is a borderline paper. The reviewers lean towards acceptance but do have reservations about the paper. I have also read the paper in order to make an informed recommendation. In the end I've decided to recommend against acceptance at this point for the reasons I'll describe below. I do understand, and sympathize with, the authors' frustration at my decision, but I truly believe that the paper can be significantly improved with some additional work.
The authors tackle an interesting and important problem in the area of extreme classification (and multi-label classification in general): the problem of missing labels due to various biases in how the data is annotated. This is a well recognized and important problem both in the extreme classification literature, and also in the benchmark creation literature, especially for computer vision tasks. (I recommend the authors look at the work on Imagenet 2.0 and similar papers where due to the shortcoming of the annotation protocol relevant labels are missing). The paper makes an interesting theoretical contribution on how to obtain unbiased loss estimates in this case, identifies an issue with the high variance of these estimates, and proposed a lower-variance upper-bound that could be used as a surrogate loss.
The main drawback of the paper, as originally submitted, is the very limited experimental evaluation. The authors explain the lack of empirical results by the fact that there are no datasets that have accurate propensity scores available, so even evaluating on such datasets would be meaningless. While it is true that existing datasets do have this problem, it is incumbent on the authors to find an application where their work would be applicable. Otherwise, if there is no application that can benefit from this work, what is it useful for? One suggestion I can make is to do the following evaluation: take a (sub-)sample of the instances that method M assigned label L to and, using human labelers, determine how many of those documents should truly be assigned label L and how many should not have label L. Do this for multiple labels L. While I admit that this would be a tedious endeavor, it would be possible to achieve with current crowdsourcing technology, and it would significantly improve the paper.
During the author response period, the authors have submitted additional results on some real datasets. These results do look pretty strong, but, because they have been rushed and not really integrated into the paper, I fear it is difficult for the reviewers to truly scrutinize their validity, and to draw the correct conclusions from them. The paper would benefit from having the authors integrate the results into the paper and fully analyze them.
I do believe this is interesting work, and I encourage the authors to revise and resubmit their paper at a future conference. But as it is, the paper is not yet ready for publication. | train | [
"8nM08N_7xA",
"maUJo3uyWiZI",
"oXxH0D2HoKHo",
"pFP5aejihq4",
"EtCTgBvJ0JI",
"WR_141VtQL",
"6l1rgJYMArt",
"7IRcOKRoeJh",
"u9AD1Sd9YX",
"ssMSbU07RQ7",
"d1dlEeVTKiL"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your response. I agree with your comment that its hard to interpret results as the unbiased estimates do not fall in [0 -100] and just normalising them to fall in between 0-100 is probably not the best idea. However, we can still do some sort of proxy/qualitative analysis where we check if the model tr... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"maUJo3uyWiZI",
"7IRcOKRoeJh",
"nips_2022_9zWlrwlT9-j",
"ssMSbU07RQ7",
"d1dlEeVTKiL",
"nips_2022_9zWlrwlT9-j",
"u9AD1Sd9YX",
"nips_2022_9zWlrwlT9-j",
"nips_2022_9zWlrwlT9-j",
"nips_2022_9zWlrwlT9-j",
"nips_2022_9zWlrwlT9-j"
] |
nips_2022_EqZuN4V_FLF | A Solver-free Framework for Scalable Learning in Neural ILP Architectures | There is a recent focus on designing architectures that have an Integer Linear Programming (ILP) layer within a neural model (referred to as \emph{Neural ILP} in this paper). Neural ILP architectures are suitable for pure reasoning tasks that require data-driven constraint learning or for tasks requiring both perception (neural) and reasoning (ILP). A recent SOTA approach for end-to-end training of Neural ILP explicitly defines gradients through the ILP black box [Paulus et al. [2021]] – this trains extremely slowly, owing to a call to the underlying ILP solver for every training data point in a minibatch. In response, we present an alternative training strategy that is \emph{solver-free}, i.e., does not call the ILP solver at all at training time. Neural ILP has a set of trainable hyperplanes (for cost and constraints in ILP), together representing a polyhedron. Our key idea is that the training loss should impose that the final polyhedron separates the positives (all constraints satisfied) from the negatives (at least one violated constraint or a suboptimal cost value), via a soft-margin formulation. While positive example(s) are provided as part of the training data, we devise novel techniques for generating negative samples. Our solution is flexible enough to handle equality as well as inequality constraints. Experiments on several problems, both perceptual as well as symbolic, which require learning the constraints of an ILP, show that our approach has superior performance and scales much better compared to purely neural baselines and other state-of-the-art models that require solver-based training. In particular, we are able to obtain excellent performance in 9 x 9 symbolic and visual Sudoku, to which the other Neural ILP solver is not able to scale. | Accept |
The paper presents the first solver-free training approach for learning neural integer linear programs. The idea is to encode within the loss that the final polyhedron separates the positives (all constraints satisfied) from the negatives (at least one violated constraint or a suboptimal cost value), via a soft-margin formulation. Compared to the Neural ILP baselines, this turns out to be faster without sacrificing accuracy. All reviewers agree that this is solid work. I fully agree.
"x9qVXqhBJfr",
"_U43rBpgl2O",
"CASmxt3Diak",
"P7o_KzmApJ",
"IKVHcCeo-IG",
"MIUlk7Xh8NG",
"QyEDCv1P8L",
"mUiCxNgYMxI",
"N5lHj5gY_s9Z",
"Bkka303lxl",
"JDu0hr3Y26E",
"btIGWvSoT3"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer ZncX,\n\nWe have now uploaded a rebuttal revision of our paper and have also posted a common comment describing all the changes in the revised manuscript. We hope that all of your concerns have been appropriately addressed in the revision. If not, please let us know, and we will address them.\n\nTh... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
5
] | [
"P7o_KzmApJ",
"nips_2022_EqZuN4V_FLF",
"QyEDCv1P8L",
"IKVHcCeo-IG",
"MIUlk7Xh8NG",
"btIGWvSoT3",
"JDu0hr3Y26E",
"N5lHj5gY_s9Z",
"Bkka303lxl",
"nips_2022_EqZuN4V_FLF",
"nips_2022_EqZuN4V_FLF",
"nips_2022_EqZuN4V_FLF"
] |
nips_2022_ZE4lUw2iGcZ | Better Best of Both Worlds Bounds for Bandits with Switching Costs | We study best-of-both-worlds algorithms for bandits with switching cost, recently addressed by Rouyer et al., 2021. We introduce a surprisingly simple and effective algorithm that simultaneously achieves minimax optimal regret bound (up to logarithmic factors) of $\mathcal{O}(T^{2/3})$ in the oblivious adversarial setting and a bound of $\mathcal{O}(\min\{\log (T)/\Delta^2,T^{2/3}\})$ in the stochastically-constrained regime, both with (unit) switching costs, where $\Delta$ is the gap between the arms.
In the stochastically constrained case, our bound improves over previous results due to Rouyer et al., 2021, that achieved regret of $\mathcal{O}(T^{1/3}/\Delta)$.
We accompany our results with a lower bound showing that, in general, $\tilde{\mathcal{\Omega}}(\min\{1/\Delta^2,T^{2/3}\})$ switching cost regret is unavoidable in the stochastically-constrained case for algorithms with $\mathcal{O}(T^{2/3})$ worst-case switching cost regret.
| Accept | Reviewers all agree that this is an interesting work with significant contribution to the best-of-both-worlds literature. Clear accept. Please do still address the minor issues pointed out by the reviewers in the final version. | train | [
"rcnaveizfm6",
"sujQ5GRB1gY",
"5oJqV__VW8M",
"GUj0Hnnf7aC",
"VRvxxZ1hLm",
"XRTqD6gjbgS",
"meK4_M5oFX1",
"xyd-uxyk_Ij",
"HO_fHJDfxaf",
"GRJH0tmWuGB",
"pT1herPx3f2"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for addressing my concerns and questions. I gladly increase my score.",
" Thanks for the authors' response and my issues are addressed and I keep my score unchanged.",
" Hi, \nThank you for the clarifications. I believe that because handling the time horizon with a doubling trick provides an extra loga... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
5
] | [
"meK4_M5oFX1",
"XRTqD6gjbgS",
"GUj0Hnnf7aC",
"pT1herPx3f2",
"GRJH0tmWuGB",
"HO_fHJDfxaf",
"xyd-uxyk_Ij",
"nips_2022_ZE4lUw2iGcZ",
"nips_2022_ZE4lUw2iGcZ",
"nips_2022_ZE4lUw2iGcZ",
"nips_2022_ZE4lUw2iGcZ"
] |
nips_2022_wUUutywJY6 | SoLar: Sinkhorn Label Refinery for Imbalanced Partial-Label Learning | Partial-label learning (PLL) is a peculiar weakly-supervised learning task where the training samples are generally associated with a set of candidate labels instead of single ground truth. While a variety of label disambiguation methods have been proposed in this domain, they normally assume a class-balanced scenario that may not hold in many real-world applications. Empirically, we observe degenerated performance of the prior methods when facing the combinatorial challenge from the long-tailed distribution and partial-labeling. In this work, we first identify the major reasons that the prior work failed. We subsequently propose SoLar, a novel Optimal Transport-based framework that allows to refine the disambiguated labels towards matching the marginal class prior distribution. SoLar additionally incorporates a new and systematic mechanism for estimating the long-tailed class prior distribution under the PLL setup. Through extensive experiments, SoLar exhibits substantially superior results on standardized benchmarks compared to the previous state-of-the-art PLL methods. | Accept | Most approaches to partial-label learning (PLL) tasks assume the label distribution is balanced, which may not hold in practice. This paper provides a principled optimal transport-based framework to resolve the issues with performance degradation caused by skewed label distributions in PLL. The reviewers found that the work makes a theoretically solid and experimentally comprehensive attempt towards solving this practical and under-addressed problem. | train | [
"MREvZrqR6z2",
"j3j94Xs1K6S",
"-VOVyjx-pce",
"oHJj-y4JVzQ",
"9zREq-QSyt5",
"uZDcTWrimKR",
"2TsWV7YdtpY",
"hLptfn-0wDh",
"2X4VpZfeK7S",
"wes4DAKRNz",
"YDpySj6_4Xp",
"eB_xcSc8Qz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank the authors for addressing all my concerns. I think this is a good paper and I will keep my score.",
" Dear reviewer,\n\nIt seems that the author rebuttal directly addresses what you have identified as weaknesses of the paper: that the SOTA PLL methods already handle the long-tailed setting adequately, th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8,
2
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
"hLptfn-0wDh",
"eB_xcSc8Qz",
"2X4VpZfeK7S",
"nips_2022_wUUutywJY6",
"nips_2022_wUUutywJY6",
"2TsWV7YdtpY",
"eB_xcSc8Qz",
"YDpySj6_4Xp",
"wes4DAKRNz",
"nips_2022_wUUutywJY6",
"nips_2022_wUUutywJY6",
"nips_2022_wUUutywJY6"
] |
nips_2022_n7Rk_RDh90 | 3D Concept Grounding on Neural Fields | In this paper, we address the challenging problem of 3D concept grounding (i.e., segmenting and learning visual concepts) by looking at RGBD images and reasoning about paired questions and answers. Existing visual reasoning approaches typically utilize supervised methods to extract 2D segmentation masks on which concepts are grounded. In contrast, humans are capable of grounding concepts on the underlying 3D representation of images. However, traditionally inferred 3D representations (e.g., point clouds, voxelgrids and meshes) cannot capture continuous 3D features flexibly, thus making it challenging to ground concepts to 3D regions based on the language description of the object being referred to. To address both issues, we propose to leverage the continuous, differentiable nature of neural fields to segment and learn concepts. Specifically, each 3D coordinate in a scene is represented as a high dimensional descriptor. Concept grounding can then be performed by computing the similarity between the descriptor vector of a 3D coordinate and the vector embedding of a language concept, which enables segmentations and concept learning to be jointly learned on neural fields in a differentiable fashion. As a result, both 3D semantic and instance segmentations can emerge directly from question answering supervision using a set of defined neural operators on top of neural fields (e.g., filtering and counting). Experimental results show that our proposed framework outperforms unsupervised / language-mediated segmentation models on semantic and instance segmentation tasks, as well as outperforms existing models on the challenging 3D aware visual reasoning tasks. Furthermore, our framework can generalize well to unseen shape categories and real scans. | Accept | The paper received all positive reviews (3x weak accept ratings, 1x strong accept rating). 
The meta-reviewer agrees with the reviewers' assessment of the paper. | train | [
"8KNjqtZQhQe",
"_lhMDzwVLd4",
"WGT2IPhmBf3",
"ta1hakEnw8L",
"_z-XRd87K3H",
"hc-D_4Vv8vO",
"2qF1WGh4SkD",
"kFYIYV7vjb",
"skPI6J8Ep8A",
"-Mch2EXT8r3",
"INOT4_XOl7d",
"aEerPvHikC",
"9NLd5bazWXz",
"YqVBDJ4aL8",
"dzL8_qSIax2",
"Pqc6lu1Kwm",
"dbNsexIP1qY"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" *We genuinely thank all reviewers and ACs for their efforts and time in reviewing our paper, as well as their constructive suggestions that contribute to the improvement of our paper! We sincerely appreciate the positive 6-8-6-6 evaluation from reviewers owKC, 1ERZ, Xamz, gaS1.*\n\n \n\nHere is a summary of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"nips_2022_n7Rk_RDh90",
"kFYIYV7vjb",
"_z-XRd87K3H",
"hc-D_4Vv8vO",
"hc-D_4Vv8vO",
"9NLd5bazWXz",
"dzL8_qSIax2",
"dzL8_qSIax2",
"Pqc6lu1Kwm",
"dbNsexIP1qY",
"nips_2022_n7Rk_RDh90",
"YqVBDJ4aL8",
"YqVBDJ4aL8",
"nips_2022_n7Rk_RDh90",
"nips_2022_n7Rk_RDh90",
"nips_2022_n7Rk_RDh90",
"ni... |
nips_2022_Tz1lknIPVfp | Learning Dynamical Systems via Koopman Operator Regression in Reproducing Kernel Hilbert Spaces | We study a class of dynamical systems modelled as stationary Markov chains that admit an invariant distribution via the corresponding transfer or Koopman operator. While data-driven algorithms to reconstruct such operators are well known, their relationship with statistical learning is largely unexplored. We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system. We consider the restriction of this operator to a reproducing kernel Hilbert space and introduce a notion of risk, from which different estimators naturally arise. We link the risk with the estimation of the spectral decomposition of the Koopman operator. These observations motivate a reduced-rank operator regression (RRR) estimator. We derive learning bounds for the proposed estimator, holding both in i.i.d and non i.i.d. settings, the latter in terms of mixing coefficients. Our results suggest RRR might be beneficial over other widely used estimators as confirmed in numerical experiments both for forecasting and mode decomposition. | Accept | This paper provides a connection between the Koopman operator theory and statistical learning theory, enabling one to approximate the Koopman operator from empirical data using the Hilbert-Schmidt operator on a reproducing kernel Hilbert space (RKHS). The expert reviewers agree that this paper contains substantial contributions that are deemed adequate for publication at NeurIPS2022.
Nevertheless, the major concerns raised by one reviewer include the claimed novelty and the practical value of the main theoretical result (Theorem 1). In my opinion, the authors did a superb job at responding to these concerns by providing detailed responses as well as additional empirical results. The reviewer is of course free to disagree with the authors and to maintain a low score. After a reviewer discussion phase, the remaining reviewers came to the conclusion that the criticisms raised by this reviewer are valid (and hence should be addressed in the camera-ready version), but do not outweigh the merits of this paper. Specifically, the reviewers recommended that the authors ought to weaken the claim of novelty and make clearer the existing literature, especially [1] and related works in the conditional mean embedding (CME) literature.
[1] Stefan Klus et al., 2019; Eigendecompositions of Transfer Operators in Reproducing Kernel Hilbert Spaces.
Last but not least, I do hope that the authors will respect the time the reviewers spent providing constructive criticisms by implementing the suggested changes summarized by Reviewer `bVru`. | train | [
"dWMftyAFHmm",
"plL_p9Z-kO8",
"OnZzjE8JPeE",
"5shPlpZHs2-",
"rS-gLeKF3W",
"iAdosiyf9nm",
"D71KbfPynbW",
"jYOLn5FEguk",
"5ApyaAMsuME",
"wFVkka25dJhE",
"R2GPj5doiNF",
"22KSsQqqB8Y",
"dcyxBEzn1sE",
"SSH7GEZfZR6",
"Z-e5WYHfJtO",
"4oRvQq5n0i",
"hMv3GxVFXwA",
"CVlTrQ9T2J",
"nQjuCSqODdf... | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
" Thank you for your response and your time. As you have requested __a revised version of the paper and supplementary material is uploaded__ (we would like to point out that this is the first year that this has become possible at NeurIPS). Changes are colored in red. Due to current space constraints, all experiment... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3,
3
] | [
"plL_p9Z-kO8",
"4oRvQq5n0i",
"CVlTrQ9T2J",
"rS-gLeKF3W",
"jYOLn5FEguk",
"wFVkka25dJhE",
"5ApyaAMsuME",
"R2GPj5doiNF",
"dcyxBEzn1sE",
"Z-e5WYHfJtO",
"22KSsQqqB8Y",
"SSH7GEZfZR6",
"nQjuCSqODdf",
"x1uz90Joxej",
"wXSrbPAc_1M",
"CVlTrQ9T2J",
"nips_2022_Tz1lknIPVfp",
"nips_2022_Tz1lknIPV... |
nips_2022_uLYc4L3C81A | Confident Adaptive Language Modeling | Recent advances in Transformer-based large language models (LLMs) have led to significant performance improvements across many tasks. These gains come with a drastic increase in the models' size, potentially leading to slow and costly use at inference time. In practice, however, the series of generations made by LLMs is composed of varying levels of difficulty. While certain predictions truly benefit from the models' full capacity, other continuations are more trivial and can be solved with reduced compute. In this work, we introduce Confident Adaptive Language Modeling (CALM), a framework for dynamically allocating different amounts of compute per input and generation timestep. Early exit decoding involves several challenges that we address here, such as: (1) what confidence measure to use; (2) connecting sequence-level constraints to local per-token exit decisions; and (3) attending back to missing hidden representations due to early exits in previous tokens. Through theoretical analysis and empirical experiments on three diverse text generation tasks, we demonstrate the efficacy of our framework in reducing compute---potential speedup of up to $\times 3$---while provably maintaining high performance. | Accept | This paper studies the error of early exit in decoding Transformer Language models, and proposes a method CALM to calibrate and accelerate model inference. Experiments on a variety of tasks (summarization, MT, QA) show effectiveness of the proposed method.
All reviewers find the paper solid and the author feedback convincing. | train | [
"CFj2GuD2Iie",
"En7DsycaL1R",
"_geyy9xyPyS",
"VC57GWY6gUx",
"iGj0aFgbMw-",
"0TTJt5_zSeZ",
"1LvNcSCS_pl",
"KLs3yJ_vfAV",
"5rsjfAuGY-y"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the helpful comments. We respond to your questions below:\n\n> *Comparison to baselines*\n\nIn Figure 3 we compare to static baselines that use the same number of layers for all tokens. We also report the local oracle baseline as an approximate upper bound. We note that most previous pap... | [
-1,
-1,
-1,
-1,
-1,
8,
7,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
3
] | [
"5rsjfAuGY-y",
"_geyy9xyPyS",
"KLs3yJ_vfAV",
"1LvNcSCS_pl",
"0TTJt5_zSeZ",
"nips_2022_uLYc4L3C81A",
"nips_2022_uLYc4L3C81A",
"nips_2022_uLYc4L3C81A",
"nips_2022_uLYc4L3C81A"
] |
nips_2022_G7MX_0J6JKX | Is Integer Arithmetic Enough for Deep Learning Training? | The ever-increasing computational complexity of deep learning models makes their training and deployment difficult on various cloud and edge platforms. Replacing floating-point arithmetic with low-bit integer arithmetic is a promising approach to save energy, memory footprint, and latency of deep learning models. As such, quantization has attracted the attention of researchers in recent years. However, using integer numbers to form a fully functional integer training pipeline including forward pass, back-propagation, and stochastic gradient descent is not studied in detail. Our empirical and mathematical results reveal that integer arithmetic seems to be enough to train deep learning models. Unlike recent proposals, instead of quantization, we directly switch the number representation of computations. Our novel training method forms a fully integer training pipeline that does not change the trajectory of the loss and accuracy compared to floating-point, nor does it need any special hyper-parameter tuning, distribution adjustment, or gradient clipping. Our experimental results show that our proposed method is effective in a wide variety of tasks such as classification (including vision transformers), object detection, and semantic segmentation. | Accept | This paper proposes methods for using integer arithmetic to train deep learning models. The reviewers arrived at a consensus to accept the paper. Concerns remain regarding one reviewer's comment that "The title and abstract are misleading to me. There are also floating-point arithmetic in the model, so it is not integer arithmetic only". We hope the authors can address this. | train | [
"T4y_RP-XLY_",
"fHPLXBR9eYH",
"1AyUXrvalRX",
"0burIIZvCIB",
"vwCh7OwEQDM",
"mBPT-hkLUzS",
"2jvIOf8zVbU",
"6OC6bzSS4_j",
"Q-6xkyy--NZ",
"n3huF8tJ04r",
"xnAnIWXVfb",
"SbMYwRHVRU",
"WWxor9__uqT"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your detailed comments to improve our paper, we have addressed all your comments as follows:\n\n**Q: I did read A.1 previously but I had to guess how it would align into the context of the mantissas rounding...**\n\nFollowing the example that you mentioned if shifted mantissa is $x= (0.010110010101... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"vwCh7OwEQDM",
"0burIIZvCIB",
"mBPT-hkLUzS",
"6OC6bzSS4_j",
"2jvIOf8zVbU",
"WWxor9__uqT",
"SbMYwRHVRU",
"xnAnIWXVfb",
"n3huF8tJ04r",
"nips_2022_G7MX_0J6JKX",
"nips_2022_G7MX_0J6JKX",
"nips_2022_G7MX_0J6JKX",
"nips_2022_G7MX_0J6JKX"
] |
nips_2022_tjFaqsSK2I3 | A Unified Sequence Interface for Vision Tasks | While language tasks are naturally expressed in a single, unified, modeling framework, i.e., generating sequences of tokens, this has not been the case in computer vision. As a result, there is a proliferation of distinct architectures and loss functions for different vision tasks. In this work we show that a diverse set of "core" computer vision tasks can also be unified if formulated in terms of a shared pixel-to-sequence interface. We focus on four tasks, namely, object detection, instance segmentation, keypoint detection, and image captioning, all with diverse types of outputs, e.g., bounding boxes or dense masks. Despite that, by formulating the output of each task as a sequence of discrete tokens with a unified interface, we show that one can train a neural network with a single model architecture and loss function on all these tasks, with no task-specific customization. To solve a specific task, we use a short prompt as task description, and the sequence output adapts to the prompt so it can produce task-specific output. We show that such a model can achieve competitive performance compared to well-established task-specific models. | Accept | Four reviewers provided reviews for this submission. Several reviewers felt that the idea to unify core vision tasks into a sequential output format is interesting and an entirely new approach and can have a large impact on how we train vision models in the future. There were a few concerns discussed between reviewers and authors. One concern was the comparison to past works that pre-trained on ImageNet vs the proposed model that was pre-trained on Objects365. The second concern was differentiating this work with Pix2Seq. In my opinion, the authors were able to answer both questions well. 
Overall, given the positive reviews, novelty of the work, potential to cause a significant shift in the approach of future modeling and discussion, I recommend acceptance.
| train | [
"9wLT2-wT2qv",
"qnkUEy9Nyff",
"qmghZ7sub4",
"bA7nJnrfF1A",
"8Fg3ayp91v",
"x0uIvElVvSN",
"gEGKmMezAMM",
"24vmQdetdc3e",
"prqn2ir0G9i",
"Bim7pRS3moS",
"bKHtJAvaZxd",
"fXXaFc5VU5S",
"l4itDbVQh9",
"mAgy3x1hyQJ"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the last minute reply. I agree that the authors have made some fair points in the last response. I am willing to update my score to boarderline accept. ",
" Thanks for the follow-up, we appreciate it. We fully agree that a paper needs novelty to be worthy of publication, but please note that novelty ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
5
] | [
"qnkUEy9Nyff",
"bA7nJnrfF1A",
"8Fg3ayp91v",
"gEGKmMezAMM",
"x0uIvElVvSN",
"mAgy3x1hyQJ",
"l4itDbVQh9",
"prqn2ir0G9i",
"fXXaFc5VU5S",
"bKHtJAvaZxd",
"nips_2022_tjFaqsSK2I3",
"nips_2022_tjFaqsSK2I3",
"nips_2022_tjFaqsSK2I3",
"nips_2022_tjFaqsSK2I3"
] |
nips_2022_-NOQJw5z_KY | Semantic Exploration from Language Abstractions and Pretrained Representations | Effective exploration is a challenge in reinforcement learning (RL). Novelty-based exploration methods can suffer in high-dimensional state spaces, such as continuous partially-observable 3D environments. We address this challenge by defining novelty using semantically meaningful state abstractions, which can be found in learned representations shaped by natural language. In particular, we evaluate vision-language representations, pretrained on natural image captioning datasets. We show that these pretrained representations drive meaningful, task-relevant exploration and improve performance on 3D simulated environments. We also characterize why and how language provides useful abstractions for exploration by considering the impacts of using representations from a pretrained model, a language oracle, and several ablations. We demonstrate the benefits of our approach with on- and off-policy RL algorithms and in two very different task domains---one that stresses the identification and manipulation of everyday objects, and one that requires navigational exploration in an expansive world. Our results suggest that using language-shaped representations could improve exploration for various algorithms and agents in challenging environments. | Accept | The proposed work aims to address a concern in novelty-based RL exploration of weak representations for guiding exploration. They leverage image captioning to provide a semantic categorization of states for defining novelty. The reviewers were happy with the conceptual contributions as well as the experimental results on appropriate domains.
The "missing piece" is a better theoretical understanding of why this approach is good. Specifically, what about language is helpful and how might such properties be guaranteed in future trained captioning models? | train | [
"D1WpM_cVO46",
"567MBel_Qay",
"lCZ1SQZft8e",
"RwM74IxAuum",
"4iw63pbNPz6",
"XVgi02w9VBP",
"QE79ZN6kmHl",
"LZesCfIrwTS",
"qGyXK5FrsPb",
"SeEf9i4cKvS",
"o_mN1_y6aO",
"2lx1LTFRAa3",
"zqI_qV4abr",
"TZPGZilLxE2"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear reviewer,\n\nThank you for your time. We appreciate your questions and hope that our additional edits and figures improve the clarity of the work. Please let us know if you have additional questions. Alternatively, if you feel that your original concerns are addressed, we would appreciate updating your evalu... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
8,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"LZesCfIrwTS",
"QE79ZN6kmHl",
"nips_2022_-NOQJw5z_KY",
"XVgi02w9VBP",
"TZPGZilLxE2",
"zqI_qV4abr",
"2lx1LTFRAa3",
"o_mN1_y6aO",
"nips_2022_-NOQJw5z_KY",
"nips_2022_-NOQJw5z_KY",
"nips_2022_-NOQJw5z_KY",
"nips_2022_-NOQJw5z_KY",
"nips_2022_-NOQJw5z_KY",
"nips_2022_-NOQJw5z_KY"
] |
nips_2022_6wLXvkHstNR | Semantic uncertainty intervals for disentangled latent spaces | Meaningful uncertainty quantification in computer vision requires reasoning about semantic information---say, the hair color of the person in a photo or the location of a car on the street. To this end, recent breakthroughs in generative modeling allow us to represent semantic information in disentangled latent spaces, but providing uncertainties on the semantic latent variables has remained challenging. In this work, we provide principled uncertainty intervals that are guaranteed to contain the true semantic factors for any underlying generative model. The method does the following: (1) it uses quantile regression to output a heuristic uncertainty interval for each element in the latent space (2) calibrates these uncertainties such that they contain the true value of the latent for a new, unseen input. The endpoints of these calibrated intervals can then be propagated through the generator to produce interpretable uncertainty visualizations for each semantic factor. This technique reliably communicates semantically meaningful, principled, and instance-adaptive uncertainty in inverse problems like image super-resolution and image completion. Project page: https://swamiviv.github.io/semantic_uncertainty_intervals/ | Accept | While the reviewers raised concerns about assumptions of disentangling uncertainty from spurious disentanglement (and the strong disentanglement hypothesis used), they believe the work is interesting and novel, and I concur. I think even if it is not perfect (re assumptions, etc.) this is a nice place to start a conversation. | train | [
"HxdvQ6ybo6_",
"jIK63RnTFuP",
"hMO-o3f4jPXf",
"loEI-qxosae",
"jJXdzgEboZ0",
"CexYFylvmW",
"aztTk30--M",
"K4puAIKTTEI",
"eHYhFftsaSC"
] | [
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" > Q5: Other questions\n\n * Tuning c: We found that the performance of our encoder is robust to a range of `c` values, so it doesn't matter much. We picked a value that best balances the scale of the two losses during training.\n * Scanning (vs) Binary search: Great question! Both approaches work. Binary sear... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
1
] | [
"jIK63RnTFuP",
"eHYhFftsaSC",
"K4puAIKTTEI",
"jJXdzgEboZ0",
"aztTk30--M",
"nips_2022_6wLXvkHstNR",
"nips_2022_6wLXvkHstNR",
"nips_2022_6wLXvkHstNR",
"nips_2022_6wLXvkHstNR"
] |
nips_2022_X4WAq7JQHbA | Decomposable Non-Smooth Convex Optimization with Nearly-Linear Gradient Oracle Complexity | Many fundamental problems in machine learning can be formulated by the convex program
\[ \min_{\theta\in \mathbb{R}^d}\ \sum_{i=1}^{n}f_{i}(\theta), \]
where each $f_i$ is a convex, Lipschitz function supported on a subset of $d_i$ coordinates of $\theta$. One common approach to this problem, exemplified by stochastic gradient descent, involves sampling one $f_i$ term at every iteration to make progress. This approach crucially relies on a notion of uniformity across the $f_i$'s, formally captured by their condition number. In this work, we give an algorithm that minimizes the above convex formulation to $\epsilon$-accuracy in $\widetilde{O}(\sum_{i=1}^n d_i \log (1 /\epsilon))$ gradient computations, with no assumptions on the condition number. The previous best algorithm independent of the condition number is the standard cutting plane method, which requires $O(nd \log (1/\epsilon))$ gradient computations. As a corollary, we improve upon the evaluation oracle complexity for decomposable submodular minimization by [Axiotis, Karczmarz, Mukherjee, Sankowski and Vladu, ICML 2021]. Our main technical contribution is an adaptive procedure to select an $f_i$ term at every iteration via a novel combination of cutting-plane and interior-point methods.
| Accept | This paper studies the finite-sum optimization problem with convex yet possibly non-smooth objective functions. The authors develop an efficient algorithm that allows one to solve this problem with a nearly linear number of calls to the gradient oracle (in terms of the effective dimensionality), thus improving upon prior art. The key idea lies in combining the cutting-plane method and the interior-point method in a novel manner. While a reviewer has raised concerns about implementation of the proposed algorithms, the theoretical contributions of this paper are solid and important, and hence I recommend acceptance. It would still be great if the authors could consider discussing practical implementation of the proposed algorithms. | train | [
"LETE8QbWS61",
"UzuVKX0XiAI",
"0uHM4z9PMzD",
"eragkcwywNS",
"CyX-1FBYKsr",
"0gtNxhs6RBi",
"OcfApX1cAG3",
"YrJX7Q1dPh",
"jwav05PBqUB",
"e5hK-zuCOH_v",
"wQefSZyY8KR",
"lieCAIImCM3",
"oK9ul88jeeG"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" With all respect to the Spielman and Teng work, I would reject that paper from NeurIPS because of similar reasons. There are different venues for different papers, that's why they have chosen a tcs conference/journal and not ml conference.\n\nI do appreciate the obtained result. But I want to appreciate the techn... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"0uHM4z9PMzD",
"eragkcwywNS",
"CyX-1FBYKsr",
"OcfApX1cAG3",
"YrJX7Q1dPh",
"e5hK-zuCOH_v",
"oK9ul88jeeG",
"lieCAIImCM3",
"wQefSZyY8KR",
"nips_2022_X4WAq7JQHbA",
"nips_2022_X4WAq7JQHbA",
"nips_2022_X4WAq7JQHbA",
"nips_2022_X4WAq7JQHbA"
] |
nips_2022_PikKk2lF6P | Better Uncertainty Calibration via Proper Scores for Classification and Beyond | With model trustworthiness being crucial for sensitive real-world applications, practitioners are putting more and more focus on improving the uncertainty calibration of deep neural networks.
Calibration errors are designed to quantify the reliability of probabilistic predictions but their estimators are usually biased and inconsistent.
In this work, we introduce the framework of \textit{proper calibration errors}, which relates every calibration error to a proper score and provides a respective upper bound with optimal estimation properties.
This relationship can be used to reliably quantify the model calibration improvement.
We theoretically and empirically demonstrate the shortcomings of commonly used estimators compared to our approach.
Due to the wide applicability of proper scores, this gives a natural extension of recalibration beyond classification. | Accept | This paper addresses issues with the reliability of calibration estimators by introducing "proper calibration errors" - low variance estimators that are upper bounds of calibration error. The paper has a strong theoretical grounding and addresses an important issue that is relevant to the machine learning community at large. Though the original draft had some missing notation and definitions, the authors promise to fix these issues in the revision. Overall, this paper is well suited for publication at NeurIPS and will be a nice addition to the program. | test | [
"00mPYRU3otI",
"DQnPshZO6TL",
"UpyZPvWgomg",
"1FmKSJsWqRr",
"wyRgu6uiand",
"ZHDIvMTVGu2",
"2fxgS4qOv7",
"Zh-atoPw1f",
"9XTMff0JoYa",
"1W2COVfIdSx",
"KCS3W1DWse",
"b_b5KOYk2A",
"RunfpzHBC7P"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response, I think the definition of RBS is now much more clear.",
" Thank you for the responses and including the definition of RBS in the manuscript.",
" We thank the reviewer for the positive feedback.\nWe agree that there are many downstream applications of proper calibration, many requir... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
8,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"1FmKSJsWqRr",
"KCS3W1DWse",
"RunfpzHBC7P",
"b_b5KOYk2A",
"KCS3W1DWse",
"KCS3W1DWse",
"KCS3W1DWse",
"KCS3W1DWse",
"nips_2022_PikKk2lF6P",
"nips_2022_PikKk2lF6P",
"nips_2022_PikKk2lF6P",
"nips_2022_PikKk2lF6P",
"nips_2022_PikKk2lF6P"
] |
nips_2022_-_I3i2orAV | Look where you look! Saliency-guided Q-networks for generalization in visual Reinforcement Learning | Deep reinforcement learning policies, despite their outstanding efficiency in simulated visual control tasks, have shown disappointing ability to generalize across disturbances in the input training images.
Changes in image statistics or distracting background elements are pitfalls that prevent generalization and real-world applicability of such control policies.
We elaborate on the intuition that a good visual policy should be able to identify which pixels are important for its decision, and preserve this identification of important sources of information across images.
This implies that training of a policy with small generalization gap should focus on such important pixels and ignore the others.
This leads to the introduction of saliency-guided Q-networks (SGQN), a generic method for visual reinforcement learning, that is compatible with any value function learning method.
SGQN vastly improves the generalization capability of Soft Actor-Critic agents and outperforms existing state-of-the-art methods on the Deepmind Control Generalization benchmark, setting a new reference in terms of training efficiency, generalization gap, and policy interpretability. | Accept | The paper proposes a method for ignoring task-irrelevant background information for RL algorithms trained directly from visual inputs. The method consists of two parts: (a) a consistency loss; (b) a self-supervised learning objective on masks. Reviewers unanimously agree that contributions made by the paper are significant, and I agree. The authors should address the remaining minor concerns of the reviewers about the presentation and keep the promises made in the rebuttal in the camera-ready version. | train | [
"QSJTI1NDXzo",
"DQWRmVUkCqW",
"u8jDLrvO2ca",
"O0D12-LzwkZ",
"lMMxHJqtW98",
"2lz6s7VxORxn",
"4OTfN-0njW3",
"tL31z6e-Hya",
"OzwmvaicQXO",
"2PQB1Fd-GBt",
"tsRcKVWhEJL",
"rcyPTObQtXaC",
"V_8VrJWH6w",
"hCugTpwzFPP",
"h76Wo7SYT-R"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for their answers. I keep my rating.",
" Thank you for these constructive comments.\nWe now better understand your concerns and have updated the paper to address them and go along with your suggestions to clarify section 3.\n\n>I looked in more detail at saliency-guided training (e.g. in Ism... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
5
] | [
"V_8VrJWH6w",
"hCugTpwzFPP",
"O0D12-LzwkZ",
"OzwmvaicQXO",
"tsRcKVWhEJL",
"rcyPTObQtXaC",
"nips_2022_-_I3i2orAV",
"h76Wo7SYT-R",
"h76Wo7SYT-R",
"hCugTpwzFPP",
"hCugTpwzFPP",
"V_8VrJWH6w",
"nips_2022_-_I3i2orAV",
"nips_2022_-_I3i2orAV",
"nips_2022_-_I3i2orAV"
] |
nips_2022_h73nTbImOt9 | Multi-Lingual Acquisition on Multimodal Pre-training for Cross-modal Retrieval | Vision and diverse languages are important information sources in our living world. A model that understands multi-modalities and multi-languages can be applied to a wider range of real-life scenarios. To build such a multimodal and multilingual model, existing works try to ensemble vision-language data from multiple languages in pre-training. However, due to the large number of languages, these works often require huge computing resources and cannot be flexibly extended to new languages. In this work, we propose a MultiLingual Acquisition (MLA) framework that can easily empower a monolingual Vision-Language Pre-training (VLP) model with multilingual capability. Specifically, we design a lightweight language acquisition encoder based on state-of-the-art monolingual VLP models. We further propose a two-stage training strategy to optimize the language acquisition encoder, namely the Native Language Transfer stage and the Language Exposure stage. With much less multilingual training data and computing resources, our model achieves state-of-the-art performance on multilingual image-text and video-text retrieval benchmarks. | Accept | All reviewers are positive to this paper. The authors also respond actively to address the problems raised by the reviewers. I recommend acceptance. One thing is that reviewer 4qbz has pointed out the authors should be responsible to take the comments into consideration and
make the necessary changes in the paper, such as adjusting the title and the introduction and adding connections to previous work. | test | [
"YBIpcBfzYth",
"FJvy8YsAMEHY",
"qUoaKG-E5B",
"Hf43XRr7Dg5",
"DRsUHhCD-g5",
"Jaosr1Th2v",
"J_M2sca7b2b",
"ITmH6TCZL6c",
"n8f6QVdzz3B"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for answering. I would say this is excellent work. All the best for your submission!",
" Thanks for carefully reading and providing valuable feedback. We list our responses to the review comments here:\n\n**1.\"Adapting ... has been studied extensively ... an efficient method ... in [1]\"**\nMLA share... | [
-1,
-1,
-1,
-1,
-1,
7,
5,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"qUoaKG-E5B",
"n8f6QVdzz3B",
"ITmH6TCZL6c",
"J_M2sca7b2b",
"Jaosr1Th2v",
"nips_2022_h73nTbImOt9",
"nips_2022_h73nTbImOt9",
"nips_2022_h73nTbImOt9",
"nips_2022_h73nTbImOt9"
] |
nips_2022_s_mEE4xOU-m | Learning Distributed and Fair Policies for Network Load Balancing as Markov Potential Game | This paper investigates the network load balancing problem in data centers (DCs) where multiple load balancers (LBs) are deployed, using the multi-agent reinforcement learning (MARL) framework. The challenges of this problem consist of the heterogeneous processing architecture and dynamic environments, as well as limited and partial observability of each LB agent in distributed networking systems, which can largely degrade the performance of in-production load balancing algorithms in real-world setups. Centralised training and distributed execution (CTDE) RL scheme has been proposed to improve MARL performance, yet it incurs -- especially in distributed networking systems, which prefer distributed and plug-and-play design schemes -- additional communication and management overhead among agents. We formulate the multi-agent load balancing problem as a Markov potential game, with a carefully and properly designed workload distribution fairness as the potential function. A fully distributed MARL algorithm is proposed to approximate the Nash equilibrium of the game. Experimental evaluations involve both an event-driven simulator and a real-world system, where the proposed MARL load balancing algorithm shows close-to-optimal performance in simulations and superior results over in-production LBs in the real-world system. | Accept | The paper received an uniformly positive evaluation, although all the scores are in the "borderline / weak accept" range. The authors included a long and comprehensive rebuttal and actively participated in the discussion, which made some of the reviewers updating their scores.
I recommend that the paper be accepted, but I understand the decision could be reverted when comparing the paper with the other candidates. | train | [
"3pDXyX8VEBk",
"JoyOf5rGHQn",
"5bVFMWm2pOu",
"vQ2EtCO-lBA",
"6tM7ImLXg1",
"nOQEYRz5d28",
"Vo3Yjvx6N01",
"B6M5XFWNCcj",
"s9DODgk9XWV",
"jEyuTnvJt3D",
"WZh5v2BrWPF",
"J-hNgd41cwO",
"N0aMBGGyHdb",
"TICRP0koJ9x",
"onttCy3pOf6",
"yUTaCTjPvM1",
"FP2Lxj2nKqy",
"4ApVdVSBTx_",
"0QyOxyLsd6... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"... | [
" Dear Authors, Thank you for responding to my questions. I appreciate your detailed explanation and find the clarification helpful. The additional results also help to alleviate my concerns. I will increase my rating to Boarderline Accept.\n ",
" Dear Reviewer Y8vv:\n\nThank you so much for raising the score fro... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
3,
4
] | [
"nOQEYRz5d28",
"5bVFMWm2pOu",
"vQ2EtCO-lBA",
"6tM7ImLXg1",
"K4a8JxGV0N0",
"H6oOOJwVT_d",
"M7eVHE-zY7T",
"-a-U_ZPWd6z",
"QJI81B1Ba-g",
"WZh5v2BrWPF",
"42L0lcvj69i",
"u6vAgKIyCMu",
"TICRP0koJ9x",
"u6vAgKIyCMu",
"u6vAgKIyCMu",
"FP2Lxj2nKqy",
"H6oOOJwVT_d",
"0QyOxyLsd6N",
"M7eVHE-zY7... |
nips_2022_aoWo6iAxGx | Robust Imitation of a Few Demonstrations with a Backwards Model | Behavior cloning of expert demonstrations can speed up learning optimal policies in a more sample-efficient way over reinforcement learning. However, the policy cannot extrapolate well to unseen states outside of the demonstration data, creating covariate shift (agent drifting away from demonstrations) and compounding errors. In this work, we tackle this issue by extending the region of attraction around the demonstrations so that the agent can learn how to get back onto the demonstrated trajectories if it veers off-course. We train a generative backwards dynamics model and generate short imagined trajectories from states in the demonstrations. By imitating both demonstrations and these model rollouts, the agent learns the demonstrated paths and how to get back onto these paths. With optimal or near-optimal demonstrations, the learned policy will be both optimal and robust to deviations, with a wider region of attraction. On continuous control domains, we evaluate the robustness when starting from different initial states unseen in the demonstration data. While both our method and other imitation learning baselines can successfully solve the tasks for initial states in the training distribution, our method exhibits considerably more robustness to different initial states. | Accept | The paper proposes training a backward model to teach agents to recover from drifting off the optimal state trajectories provided by a limited number of demonstrations. All reviewers have voted to (weakly) accept. | train | [
"_zGHY_iXQwh",
"cyFTYi-FHWZ",
"lxlkeUeXf6g",
"8pZZl0wL4l4",
"rjUX9r1qAry",
"yQAsdf0hWd3b",
"Au1YOt2wji",
"89-XsfpkIes",
"Pi8XgCKORgU",
"Zxjhh_SBsQo",
"RqWeY14Ikpg"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors' reply. Some of my concerns are addressed for example the details of training the backward model together with the policy and the assumption of the proposed method. However, I still feel the task setup is limited to low-dimensional inputs. The added dexterous manipulation is similar to the fet... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"Au1YOt2wji",
"lxlkeUeXf6g",
"yQAsdf0hWd3b",
"nips_2022_aoWo6iAxGx",
"RqWeY14Ikpg",
"RqWeY14Ikpg",
"Zxjhh_SBsQo",
"Pi8XgCKORgU",
"nips_2022_aoWo6iAxGx",
"nips_2022_aoWo6iAxGx",
"nips_2022_aoWo6iAxGx"
] |
nips_2022_dqO59nI_R9A | Learning on Arbitrary Graph Topologies via Predictive Coding | Training with backpropagation (BP) in standard deep learning consists of two main steps: a forward pass that maps a data point to its prediction, and a backward pass that propagates the error of this prediction back through the network. This process is highly effective when the goal is to minimize a specific objective function. However, it does not allow training on networks with cyclic or backward connections. This is an obstacle to reaching brain-like capabilities, as the highly complex heterarchical structure of the neural connections in the neocortex are potentially fundamental for its effectiveness. In this paper, we show how predictive coding (PC), a theory of information processing in the cortex, can be used to perform inference and learning on arbitrary graph topologies. We experimentally show how this formulation, called PC graphs, can be used to flexibly perform different tasks with the same network by simply stimulating specific neurons. This enables the model to be queried on stimuli with different structures, such as partial images, images with labels, or images without labels. We conclude by investigating how the topology of the graph influences the final performance, and comparing against simple baselines trained with BP. | Accept | This work presents the use of predictive coding (specifically PC graph) as a way to overcome some of the limitations of the commonly used backpropagation approach in deep learning. The initial reviews have raised some concerns regarding the paper, but these were sufficiently addressed in rebuttal to result in one reviewer recommending acceptance and others leaning towards acceptance. 
Taking this into account, and since I also think this line of work would contribute well to the audience of NeurIPS, I recommend acceptance and I would like to encourage the authors to take into account the reviewer comments, and the points discussed in their rebuttal, when preparing the camera ready version of the paper. | train | [
"7aHmcCpkMBr",
"PX_K98pjSH",
"C4x5fhXLAbd",
"qnDsi0FhtTU",
"nz_6oX9aFf",
"0FynpHW3hFS",
"EGHKMPBCkM-",
"nJ1jo0zjuY",
"PVo0JyRS3_o",
"H_elVWIvK8",
"zB_rN2gMQv4",
"muPrk6LnXOr",
"Qu5Pd7NzY3g",
"3S5KGk8DqV6",
"zJ8LHibjjY1",
"aTf8HIwTibq",
"ei2Z_7yLK6H"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the final comment. If you are satisfied with our response, we would appreciate it if you could update your score to account for the explanation we have provided above. We will include the above discussion in the final version of the manuscript.",
" Thanks for providing two useful refer... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
2
] | [
"PX_K98pjSH",
"C4x5fhXLAbd",
"qnDsi0FhtTU",
"nz_6oX9aFf",
"EGHKMPBCkM-",
"PVo0JyRS3_o",
"H_elVWIvK8",
"nips_2022_dqO59nI_R9A",
"aTf8HIwTibq",
"3S5KGk8DqV6",
"muPrk6LnXOr",
"ei2Z_7yLK6H",
"aTf8HIwTibq",
"zJ8LHibjjY1",
"nips_2022_dqO59nI_R9A",
"nips_2022_dqO59nI_R9A",
"nips_2022_dqO59n... |
nips_2022_lblv6NGI7un | Efficient Graph Similarity Computation with Alignment Regularization | We consider the graph similarity computation (GSC) task based on graph edit distance (GED) estimation. State-of-the-art methods treat GSC as a learning-based prediction task using Graph Neural Networks (GNNs). To capture fine-grained interactions between pair-wise graphs, these methods mostly contain a node-level matching module in the end-to-end learning pipeline, which causes high computational costs in both the training and inference stages. We show that the expensive node-to-node matching module is not necessary for GSC, and high-quality learning can be attained with a simple yet powerful regularization technique, which we call the Alignment Regularization (AReg). In the training stage, the AReg term imposes a node-graph correspondence constraint on the GNN encoder. In the inference stage, the graph-level representations learned by the GNN encoder are directly used to compute the similarity score without using AReg again to speed up inference. We further propose a multi-scale GED discriminator to enhance the expressive ability of the learned representations. Extensive experiments on real-world datasets demonstrate the effectiveness, efficiency and transferability of our approach.
| Accept | Computing similarity between two graphs based on the graph edit distance (GED) is expensive. This paper proposes a learnable way to infer GED quickly. The proposed method has two stages: in the first stage, the GNN encoder captures underlying alignment information between pair-wise graphs; in the second stage, it uses the learned encoder to reflect the exact GED value between two graphs.
Questions raised by reviewers, i.e., on motivations / types of discriminators / accuracy of this approximation method / sensitivity of lambda, have been addressed during the rebuttal. Authors are encouraged to reflect these changes in the final version. | train | [
"sTgWiIbcGFh",
"MjfxEHMXuA0",
"1CH8lwrijv",
"8XvsKsufyL-",
"Eg4G22GARRV",
"N_U6j2cxdRy",
"8ynf5QWxnMK",
"d1J9AAqafk",
"-Jy0MRUvaNE",
"YgAlw7uGUFw",
"l7gDJrPutEF",
"3HoP5q16xYz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the time to read our feedback. We will include all your suggestions in the final version of the paper. If you have any further comments and questions, we are glad to write a follow-up response.",
" Thank you for your detailed feedback. Generally, feedback to my questions is satisfactory, solving som... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"MjfxEHMXuA0",
"N_U6j2cxdRy",
"8ynf5QWxnMK",
"d1J9AAqafk",
"3HoP5q16xYz",
"l7gDJrPutEF",
"l7gDJrPutEF",
"3HoP5q16xYz",
"YgAlw7uGUFw",
"nips_2022_lblv6NGI7un",
"nips_2022_lblv6NGI7un",
"nips_2022_lblv6NGI7un"
] |
nips_2022_v7SFDrS44Cf | Neural Estimation of Submodular Functions with Applications to Differentiable Subset Selection | Submodular functions and variants, through their ability to characterize diversity and coverage, have emerged as a key tool for data selection and summarization. Many recent approaches to learn submodular functions suffer from limited expressiveness. In this work, we propose FlexSubNet, a family of flexible neural models for both monotone and non-monotone submodular functions. To fit a latent submodular function from (set, value) observations, our method applies a concave function on modular functions in a recursive manner. We do not draw the concave function from a restricted family, but rather learn from data using a highly expressive neural network that implements a differentiable quadrature procedure. Such an expressive neural model for concave functions may be of independent interest. Next, we extend this setup to provide a novel characterization of monotone $\alpha$-submodular functions, a recently introduced notion of approximate submodular functions. We then use this characterization to design a novel neural model for such functions. Finally, we consider learning submodular set functions under distant supervision in the form of (perimeter, high-value-subset) pairs. This yields a novel subset selection method based on an order-invariant, yet greedy sampler built around the above neural set functions. Our experiments on synthetic and real data show that FlexSubNet outperforms several baselines.
| Accept | All 3 knowledgeable reviewers recommended acceptance of the paper and had active discussions with the authors. I agree that the paper makes several interesting and relevant contributions and suggest acceptance of the paper. Please consider the reviewers' comments when preparing the final version of the paper. | train | [
"aRisSdQ-5Dw",
"SRMRQ3B7WB9",
"5J3yQYwCwRX",
"FOSSVypHiEA",
"eyWr73W-2KT8",
"yT5lU4TSDo2",
"6LLl1Pb9s_3",
"1EU7ZTR70Wg",
"yLayVBS2aIS",
"wBRC2tDrf5",
"XE4ysj_CTd",
"OaDgyY1ujHH",
"q_UsHTgpwxQ"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your suggestions. We have added a short discussion about sample complexity in Appendix B. lines 509–514. We plan to include more in the subsequent revision. If our paper gets accepted, we will get space for one more page and will bring the discussion in main.\n\n",
" Thank you for your suggestions... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"eyWr73W-2KT8",
"5J3yQYwCwRX",
"yT5lU4TSDo2",
"6LLl1Pb9s_3",
"yLayVBS2aIS",
"q_UsHTgpwxQ",
"1EU7ZTR70Wg",
"OaDgyY1ujHH",
"wBRC2tDrf5",
"XE4ysj_CTd",
"nips_2022_v7SFDrS44Cf",
"nips_2022_v7SFDrS44Cf",
"nips_2022_v7SFDrS44Cf"
] |
nips_2022_RF74aWLrvBp | Probabilistic Transformer: Modelling Ambiguities and Distributions for RNA Folding and Molecule Design | Our world is ambiguous and this is reflected in the data we use to train our algorithms. This is particularly true when we try to model natural processes where collected data is affected by noisy measurements and differences in measurement techniques. Sometimes, the process itself is ambiguous, such as in the case of RNA folding, where the same nucleotide sequence can fold into different structures. This suggests that a predictive model should have similar probabilistic characteristics to match the data it models. Therefore, we propose a hierarchical latent distribution to enhance one of the most successful deep learning models, the Transformer, to accommodate ambiguities and data distributions. We show the benefits of our approach (1) on a synthetic task that captures the ability to learn a hidden data distribution, (2) with state-of-the-art results in RNA folding that reveal advantages on highly ambiguous data, and (3) demonstrating its generative capabilities on property-based molecule design by implicitly learning the underlying distributions and outperforming existing work. | Accept | This is a nice application-motivated paper that introduces and tests a novel stochastic variant of a transformer architecture.
All three reviewers recommend acceptance (albeit one is borderline). The borderline review focuses on the closeness of this to existing work, and the relatively incremental nature of the contribution. While I do see the potential concern, I think all in all the consensus is clearly to accept.
Interaction between reviewers and authors led to a number of beneficial changes to the paper during the review process. | train | [
"3RKQEBgaRI",
"BpB_40gRs50",
"ocRDqMu35T9",
"be774n1HxfN",
"yg3sGfN-Ld",
"4-tLnQiXioq",
"itsV7Prail4V",
"o2vXBS6k9oQ",
"0uf_42lHMG",
"0xW6zGdfOjZ",
"TS82sYE2oMTl",
"yTH_7_0GM9",
"r536ihXbvgB",
"OUMP2LeKjpk",
"XKDm6s9Hw-",
"CHmNxGfk5C",
"WeRnmOqTnsS",
"fBlzvDq729",
"e2N1OxWzFyP",
... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" We thank the reviewer for this reply. We changed our manuscript accordingly now and will keep the changes for the final version.\nWe did the following changes to our manuscript\n- We add a detailed figure of the model in the Appendix\n- We remove all statements that refer to mutations and explain in more detail t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"be774n1HxfN",
"yg3sGfN-Ld",
"4-tLnQiXioq",
"E_QZCwCpJy",
"c5ijK-Znkmod",
"GjPXXef5eIzk",
"o2vXBS6k9oQ",
"e2N1OxWzFyP",
"fBlzvDq729",
"WeRnmOqTnsS",
"CHmNxGfk5C",
"XKDm6s9Hw-",
"eUobSjPER0f",
"eUobSjPER0f",
"dWAuM-dO3q-",
"dWAuM-dO3q-",
"dWAuM-dO3q-",
"dWAuM-dO3q-",
"dWAuM-dO3q-"... |
nips_2022_KSioDlJiUaz | Polynomial-Time Optimal Equilibria with a Mediator in Extensive-Form Games | For common notions of correlated equilibrium in extensive-form games, computing an optimal (e.g., welfare-maximizing) equilibrium is NP-hard. Other equilibrium notions---communication and certification equilibria---augment the game with a mediator that has the power to both send and receive messages to and from the players---and, in particular, to remember the messages. In this paper, we investigate both notions in extensive-form games from a computational lens. We show that optimal equilibria in both notions can be computed in polynomial time, the latter under a natural additional assumption known in the literature. Our proof works by constructing a {\em mediator-augmented game} of polynomial size that explicitly represents the mediator's decisions and actions. Our framework allows us to define an entire family of equilibria by varying the mediator's information partition, the players' ability to lie, and the players' ability to deviate. From this perspective, we show that other notions of equilibrium, such as extensive-form correlated equilibrium, correspond to the mediator having imperfect recall. This shows that, at least among all these equilibrium notions, the hardness of computation is driven by the mediator's imperfect recall. As special cases of our general construction, we recover the polynomial-time algorithm of Conitzer & Sandholm [2004] for automated mechanism design in Bayes-Nash equilibria, and the correlation DAG algorithm of Zhang et al [2022] for optimal correlation. Our algorithm is especially scalable when the equilibrium notion is what we define as the full-certification equilibrium, where players cannot lie about their information but they can be silent. We back up our theoretical claims with experiments on a suite of standard benchmark games. | Accept | Executive summary:
Motivated by the potential hardness of computing optimal correlated equilibria, this paper looks at variants of correlated equilibria -- communication and certification equilibria -- where the mediator has additional power. The main result of the paper is a poly-time algorithm for computing optimal such equilibria, which embeds the mediator into the original game in the fashion of a two-player Stackelberg game and shows that the resulting game can be solved optimally via linear programming. The paper implies certain existing poly-time algorithms as special cases.
Discussion:
Overall this paper struck me as rather original, and the main result is rather general and encouraging. I think it may motivate follow-up work, and variants of it may even have practical use cases.
Accept. | train | [
"9s37ezc4H3Q",
"R8gF6ml8bD7",
"gOPG516dMudw",
"RlDtmYokw37",
"ZQteZOtr4Lhw",
"VCJWAn2tU6Z",
"6OwjjRksfo6",
"nktFBm8R68Q",
"eMhr4beQk9A"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I would like to thank the authors for their detailed response to my concerns. After reading other reviews, I still think that the paper is somehow weak in terms of technical contribution/novelty. Thus, I will stick to my score (borderline accept). Nevertheless, I am open to engage in a discussion with other revie... | [
-1,
-1,
-1,
-1,
-1,
4,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
5,
3
] | [
"gOPG516dMudw",
"eMhr4beQk9A",
"nktFBm8R68Q",
"6OwjjRksfo6",
"VCJWAn2tU6Z",
"nips_2022_KSioDlJiUaz",
"nips_2022_KSioDlJiUaz",
"nips_2022_KSioDlJiUaz",
"nips_2022_KSioDlJiUaz"
] |
nips_2022_bVVIZjQ2AA | Discovered Policy Optimisation | Tremendous progress has been made in reinforcement learning (RL) over the past decade. Most of these advancements came through the continual development of new algorithms, which were designed using a combination of mathematical derivations, intuitions, and experimentation. Such an approach of creating algorithms manually is limited by human understanding and ingenuity. In contrast, meta-learning provides a toolkit for automatic machine learning method optimisation, potentially addressing this flaw. However, black-box approaches which attempt to discover RL algorithms with minimal prior structure have thus far not outperformed existing hand-crafted algorithms. Mirror Learning, which includes RL algorithms, such as PPO, offers a potential middle-ground starting point: while every method in this framework comes with theoretical guarantees, components that differentiate them are subject to design. In this paper we explore the Mirror Learning space by meta-learning a “drift” function. We refer to the immediate result as Learnt Policy Optimisation (LPO). By analysing LPO we gain original insights into policy optimisation which we use to formulate a novel, closed-form RL algorithm, Discovered Policy Optimisation (DPO). Our experiments in Brax environments confirm state-of-the-art performance of LPO and DPO, as well as their transfer to unseen settings. | Accept | This paper proposes a policy-search approach in which a "drift" function is meta-learned in the framework of Mirror Learning.
After reading each other's reviews and the authors' feedback, the reviewers have resolved most of their concerns and agree that this paper deserves publication.
The authors need to consider the reviewers' suggestions in preparing the final version of their paper. | train | [
"qsGzFnFqPO8",
"hyMn8ZYgfLo",
"6wlmxLAL5F",
"ZK5hPTDSU3e",
"9pn3MqCq6eO",
"9C3DuLGsW9Z",
"3N2vTEvTKAg",
"gNRoxyiNt3",
"Z8iBVrOKSbM",
"VKLSB8xOSWm",
"KFUmv9eHc1-",
"hh8AAJKhKq",
"NDi8F4vrfv7"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the thoughtful feedback!\n\n> On Input Parameterization\n\nThat makes sense! However, on our end, these experiments are quite costly (the results in the Appendix are not fully meta-trained to convergence, for example). We decided to go with the input we were most confident would work rather than the mo... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"hyMn8ZYgfLo",
"ZK5hPTDSU3e",
"9C3DuLGsW9Z",
"3N2vTEvTKAg",
"3N2vTEvTKAg",
"gNRoxyiNt3",
"Z8iBVrOKSbM",
"NDi8F4vrfv7",
"hh8AAJKhKq",
"KFUmv9eHc1-",
"nips_2022_bVVIZjQ2AA",
"nips_2022_bVVIZjQ2AA",
"nips_2022_bVVIZjQ2AA"
] |
nips_2022_zvNMzjOizmn | Langevin Autoencoders for Learning Deep Latent Variable Models | Markov chain Monte Carlo (MCMC), such as Langevin dynamics, is valid for approximating intractable distributions. However, its usage is limited in the context of deep latent variable models owing to costly datapoint-wise sampling iterations and slow convergence. This paper proposes the amortized Langevin dynamics (ALD), wherein datapoint-wise MCMC iterations are entirely replaced with updates of an encoder that maps observations into latent variables. This amortization enables efficient posterior sampling without datapoint-wise iterations. Despite its efficiency, we prove that ALD is valid as an MCMC algorithm, whose Markov chain has the target posterior as a stationary distribution under mild assumptions. Based on the ALD, we also present a new deep latent variable model named the Langevin autoencoder (LAE). Interestingly, the LAE can be implemented by slightly modifying the traditional autoencoder. Using multiple synthetic datasets, we first validate that ALD can properly obtain samples from target posteriors. We also evaluate the LAE on the image generation task, and show that our LAE can outperform existing methods based on variational inference, such as the variational autoencoder, and other MCMC-based methods in terms of the test likelihood. | Accept | The problem addressed in this paper is that of inference in deep generative latent variable models. The paper proposes a novel approach with an amortized approximation to the joint distribution of data and latent variables.
All three reviewers liked the paper, with one reviewer being a bit concerned about the soundness of the MCMC convergence proof. In this meta-reviewer's opinion the algorithm (Alg 2) is sound, although there might be subtleties here because it mixes posterior parameter updates with generative model parameter updates and latent samples.
The paper is original, clear and numerical evaluations sufficient so acceptance is recommended. | train | [
"vztU-usV-P",
"WTFC9X47eOf",
"MJ-NKbEESXt",
"oIUy-cuGsKY",
"a9TnqA2g2nP",
"8zfj_iyaa3",
"KsbFayYbKEI",
"6TXYNZVS7s",
"tedOAOnCXuY"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Your AC",
" > Is this about the ELBO calculation in Eq. (20) for the evaluation of LAEs? If so, resampling the encoder's parameters (i.e., and ) at test time is not required, because we fix those parameters after training.\n\nThank you this was not clear to me that you hold the parameters fixed after training.... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"a9TnqA2g2nP",
"8zfj_iyaa3",
"oIUy-cuGsKY",
"tedOAOnCXuY",
"6TXYNZVS7s",
"KsbFayYbKEI",
"nips_2022_zvNMzjOizmn",
"nips_2022_zvNMzjOizmn",
"nips_2022_zvNMzjOizmn"
] |
nips_2022_kI_kL5vq6Oa | Risk-Driven Design of Perception Systems | Modern autonomous systems rely on perception modules to process complex sensor measurements into state estimates. These estimates are then passed to a controller, which uses them to make safety-critical decisions. It is therefore important that we design perception systems to minimize errors that reduce the overall safety of the system. We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system. We formulate a risk function to quantify the effect of a given perceptual error on overall safety, and show how we can use it to design safer perception systems by including a risk-dependent term in the loss function and generating training data in risk-sensitive regions. We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system. | Accept | This paper proposes a technique that can evaluate the risk to a system (e.g., autonomous robot, aircraft, etc.) posed by errors in its perception system. While the idea of evaluating a perception system (or any other subsystem) as part of the overall pipeline is not new, this work introduces a novel method of encapsulating the risk in the cost-to-go values of the system under assumed dynamics, control policy and perception error model. The Q-values are then learnt via distributional RL. Overall, this work has been well received and should be interesting to anyone developing complicated systems which have machine learning components in them. The authors are encouraged to incorporate all the rich feedback during author-reviewer interaction period in the camera-ready version. | train | [
"JyhjxB0oiFw",
"3f7sQugIFvR",
"gCUhAwJ6hT",
"Rnx5MS45VPv",
"ZmZO5rh3XDE",
"pk5isHsC5gR",
"lK_BZGXaVvJ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Authors,\nI appreciate the clear feedback.\n\nThe model validation issue in real-world vs simulated data is a key step that is missing. \n\n2) You may wish add a reference to the Greiffenhagen et al paper since it is a longer version and addresses illustration of real-world validation more comprehensively ... | [
-1,
-1,
-1,
-1,
6,
9,
6
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Rnx5MS45VPv",
"lK_BZGXaVvJ",
"pk5isHsC5gR",
"ZmZO5rh3XDE",
"nips_2022_kI_kL5vq6Oa",
"nips_2022_kI_kL5vq6Oa",
"nips_2022_kI_kL5vq6Oa"
] |
nips_2022_edffTbw0Sws | Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes | A fundamental property of deep learning normalization techniques, such as batch normalization, is making the pre-normalization parameters scale invariant. The intrinsic domain of such parameters is the unit sphere, and therefore their gradient optimization dynamics can be represented via spherical optimization with varying effective learning rate (ELR), which was studied previously. However, the varying ELR may obscure certain characteristics of the intrinsic loss landscape structure. In this work, we investigate the properties of training scale-invariant neural networks directly on the sphere using a fixed ELR. We discover three regimes of such training depending on the ELR value: convergence, chaotic equilibrium, and divergence. We study these regimes in detail both on a theoretical examination of a toy example and on a thorough empirical analysis of real scale-invariant deep learning models. Each regime has unique features and reflects specific properties of the intrinsic loss landscape, some of which have strong parallels with previous research on both regular and scale-invariant neural networks training. Finally, we demonstrate how the discovered regimes are reflected in conventional training of normalized networks and how they can be leveraged to achieve better optima. | Accept | All reviewers find the paper's analysis of optimization regimes for scale-invariant networks interesting and the observations novel. The results are clearly presented and well supported by the (limited) analysis. However, the reviewers also highlight several drawbacks
1) the paper mainly presents analysis for a scalar function
2) experiments limited to CIFAR datasets with ConvNets.
While the results for the presented settings are convincing, I think it is hard to judge the universality/importance of the phenomenon from analysis of scalar functions and experiments only on CIFAR. Overall I think the paper is borderline and I suggest acceptance, as the phenomenon is well presented and can lead to further work building on this. I encourage the authors to include more experiments on different datasets and model architectures in the final version.
"PIWST0yz2vI",
"_DySpxp5b6c",
"UYs584vxXLW",
"opU-DQBqHpu",
"uGKvo03esGr",
"DxUl-a_k1dh",
"fk51RGK_6rP",
"sXx3M2yveS2",
"X9VYzTkvgMA",
"oK3-Hs-p9gk",
"1gBUD5axyrw",
"sCDIvW4Q_m",
"dtuNdCx-4h",
"YbW-RqefNb2"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to express our gratitude to the Reviewer for his/her decision to raise the score! We are sure that taking into account the comments received will significantly improve the positioning and clarity of our work.",
" Thanks to the detailed response from the authors. Now I would like to increase my sco... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"_DySpxp5b6c",
"uGKvo03esGr",
"YbW-RqefNb2",
"YbW-RqefNb2",
"dtuNdCx-4h",
"dtuNdCx-4h",
"sCDIvW4Q_m",
"1gBUD5axyrw",
"1gBUD5axyrw",
"nips_2022_edffTbw0Sws",
"nips_2022_edffTbw0Sws",
"nips_2022_edffTbw0Sws",
"nips_2022_edffTbw0Sws",
"nips_2022_edffTbw0Sws"
] |
nips_2022_IZXIfq0CuTa | Highly Parallel Deep Ensemble Learning | In this paper, we propose a novel highly parallel deep ensemble learning method, which leads to highly compact and parallel deep neural networks. The main idea is to first represent the data in tensor form, apply a linear transform along a certain dimension and split the transformed data into different independent spectral data sets; then the matrix product in conventional neural networks is replaced by a tensor product, which in effect imposes a certain transform-induced structure on the original weight matrices, e.g., a block-circulant structure. The key feature of the proposed spectral tensor network is that it consists of parallel branches, with each branch being an independent neural network trained using one spectral subset of the training data. Besides, the joint data/model parallelism is amenable to GPU implementation. The outputs of the parallel branches, which are trained on different independent spectral subsets, are combined for ensemble learning to produce an overall network with substantially stronger generalization capability than that of the individual parallel branches. Moreover, benefiting from the reduced size of the inputs, the proposed spectral tensor network exhibits an inherent network compression and, as a result, a reduction in computation complexity, which leads to an acceleration of the training process. The high parallelism from the massive independent operations of the parallel spectral subnetworks enables a further acceleration of the training and inference processes. We evaluate the proposed spectral tensor networks on the MNIST, CIFAR-10 and ImageNet data sets, to highlight that they simultaneously achieve network compression, reduction in computation and parallel speedup.
| Reject | Reviewers generally agreed that the parallelization idea described in the manuscript is interesting, but for the most part felt that it is not well positioned within existing literature (making it difficult to gauge the exact novelty), and that empirical results are not sufficiently convincing. I encourage the authors to address these two critiques and submit their work to a future venue. | train | [
"G9gua01raPe",
"7P-rewSuMrA",
"oegwljPBIby",
"XNmpHbMDrVA",
"H019rcE3495",
"dw6ALy6F-OR",
"VErCVVMfGZT",
"JEDIqBfgX8",
"CXfjHcR7qzZ",
"vtJkWncDppS",
"CJKH_Foss1j",
"F55Png4fp9s",
"syVPaW_9wjK",
"HeaCe6D9f2W"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your reply! For the first question, in our work, different model learns different spectrals of the data, rather than the different subsets of the data. It is the major difference with conventional MoE models. For the second question, we adopts the ResNet18 as the benchmark. Besides, we really appreciat... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
2
] | [
"oegwljPBIby",
"dw6ALy6F-OR",
"H019rcE3495",
"nips_2022_IZXIfq0CuTa",
"HeaCe6D9f2W",
"syVPaW_9wjK",
"F55Png4fp9s",
"CXfjHcR7qzZ",
"vtJkWncDppS",
"CJKH_Foss1j",
"nips_2022_IZXIfq0CuTa",
"nips_2022_IZXIfq0CuTa",
"nips_2022_IZXIfq0CuTa",
"nips_2022_IZXIfq0CuTa"
] |
nips_2022_88ubVLwWvGD | You Never Stop Dancing: Non-freezing Dance Generation via Bank-constrained Manifold Projection | One of the most overlooked challenges in dance generation is that the auto-regressive frameworks are prone to freezing motions due to noise accumulation. In this paper, we present two modules that can be plugged into the existing models to enable them to generate non-freezing and high fidelity dances. Since the high-dimensional motion data are easily swamped by noise, we propose to learn a low-dimensional manifold representation by an auto-encoder with a bank of latent codes, which can be used to reduce the noise in the predicted motions, thus preventing from freezing. We further extend the bank to provide explicit priors about the future motions to disambiguate motion prediction, which helps the predictors to generate motions with larger magnitude and higher fidelity than possible before. Extensive experiments on AIST++, a public large-scale 3D dance motion benchmark, demonstrate that our method notably outperforms the baselines in terms of quality, diversity and time length. | Accept | This paper proposes to fix common issues in motion generation by representing motion latents as combinations of discrete latent codewords learned by a VQ-VAE style approach. The idea is interesting and novel, and in the response period has been shown to potentially work for more than just dance generation, including a human trajectory prediction task.
While the idea might be quite well-motivated, reviewers all agreed that the writing does not quite do it justice in the submitted draft. I would recommend that the authors work on addressing those relevant reviewer comments for the next iteration (either camera-ready / future submission), aside from incorporating the new results into the draft. | test | [
"v9jrafMYnyg",
"68hQIgNah0V",
"Jzll3VkKsrb",
"8Uk_OQSoxnb",
"stm0h8G9z4-",
"OiU-ZWxEI7H",
"OPoHG6GpxMBQ",
"xext9w73oWg",
"5E3iWRQE1ul",
"yQaA57Ulszm",
"erYMWSZz7z",
"tKn3EtzizA8",
"dlAcVxqoyum",
"LIcIMHmtip1",
"mgoSNzHHRn",
"iOwUHrGhS0A",
"vctcErh9piq",
"TKYR2CJQyBW",
"b6PwI4Q34J... | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to all reviewers for appreciating our work and providing constructive comments. We will provide experimental results and related analysis discussed during rebuttal process in the revised supplementary material to show broader use and generalizability of our proposed modules.",
" **Q8. TransitBank is clea... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"nips_2022_88ubVLwWvGD",
"OiU-ZWxEI7H",
"OiU-ZWxEI7H",
"OiU-ZWxEI7H",
"b6PwI4Q34J5",
"dlAcVxqoyum",
"dlAcVxqoyum",
"b7eTCqPVp7",
"b7eTCqPVp7",
"b7eTCqPVp7",
"b6PwI4Q34J5",
"b6PwI4Q34J5",
"TKYR2CJQyBW",
"TKYR2CJQyBW",
"vctcErh9piq",
"vctcErh9piq",
"nips_2022_88ubVLwWvGD",
"nips_2022... |
nips_2022_7XCFxnG8nGS | Regularized Molecular Conformation Fields | Predicting energetically favorable 3-dimensional conformations of organic molecules from the molecular graph plays a fundamental role in computer-aided drug discovery research. However, effectively exploring the high-dimensional conformation space to identify (meta) stable conformers is anything but trivial. In this work, we introduce RMCF, a novel framework to generate a diverse set of low-energy molecular conformations through sampling from a regularized molecular conformation field. We develop a data-driven molecular segmentation algorithm to automatically partition each molecule into several structural building blocks to reduce the modeling degrees of freedom. Then, we employ a Markov Random Field to learn the joint probability distribution of fragment configurations and inter-fragment dihedral angles, which enables us to sample from different low-energy regions of a conformation space. Our model consistently outperforms state-of-the-art models for the conformation generation task on the GEOM-Drugs dataset. We attribute the success of RMCF to modeling in a regularized feature space and learning a global fragment configuration distribution for effective sampling.
The proposed method could be generalized to deal with larger biomolecular systems. | Accept | This work introduces a fragment-based data-driven model for molecular conformation generation. The method has a segmentation step that breaks each molecule into several fragments and then learns a joint probability distribution over the fragment configurations and dihedral angles between fragments with a Markov Random Field, which enables the method to sample from different low-energy regions of a conformation space.
The idea is novel, with convincing results. There were some initial questions about clarity and further discussions that were necessary, but these have been mostly addressed during the rebuttal, and reviewers have increased their scores once those were cleared up. Overall this is a solid paper and we recommend acceptance. | train | [
"dCgMVblJc7G",
"YIrqop8r5_x",
"SD09HdmEi-1",
"uYM-a7fbjS",
"Mstx0_u3Hhg",
"LmrSaXuOL9X",
"j3jHXHnN2tq",
"GO9J092hW4S",
"49YMzR9QZO-",
"wXV3W-6I3c",
"Ytw49HpInAU0",
"Pl5U8Fp3mIi",
"JF90MG17ejm",
"KkKQmpEzkJeG",
"ELYsL2HjFg",
"Wp77Z_yKk-",
"DCfAb0qqCjx",
"jnePPJsgB_q",
"zkwotHhHGSf... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
... | [
" I'm satisfied with the revision.\n\n## Minor (you do not need to do it now, perhaps add them in the next version)\n1. Give a detailed analysis of your conformation fragments, and discuss with chemical experts to see whether you can get more insights.\n\n2. How do you choose the cases in Figure 6? For what kind of... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"SD09HdmEi-1",
"ELYsL2HjFg",
"uYM-a7fbjS",
"DCfAb0qqCjx",
"Pl5U8Fp3mIi",
"j3jHXHnN2tq",
"jnePPJsgB_q",
"ELYsL2HjFg",
"Wp77Z_yKk-",
"DCfAb0qqCjx",
"TzQfUJThPLQ",
"B3TKA-7ZJes",
"lfM6A8VGy-",
"HRoW8uAJv0f",
"TzQfUJThPLQ",
"B3TKA-7ZJes",
"lfM6A8VGy-",
"zkwotHhHGSf",
"HRoW8uAJv0f",
... |
nips_2022_slKVqAflN5 | A gradient sampling method with complexity guarantees for Lipschitz functions in high and low dimensions | Zhang et al. (ICML 2020) introduced a novel modification of Goldstein's classical subgradient method, with an efficiency guarantee of $O(\varepsilon^{-4})$ for minimizing Lipschitz functions. Their work, however, makes use of an oracle that is not efficiently implementable. In this paper, we obtain the same efficiency guarantee with a standard subgradient oracle, thus making our algorithm efficiently implementable. Our resulting method works on any Lipschitz function whose value and gradient can be evaluated at points of differentiability. We additionally present a new cutting plane algorithm that achieves an efficiency of $O(d\varepsilon^{-2}\log S)$ for the class of $S$-smooth (and possibly non-convex) functions in low dimensions. Strikingly, this $\epsilon$-dependence matches the lower bounds for the convex setting. | Accept | The main contribution of this work is to extend and improve previous results regarding optimization complexity of Lipschitz functions.
In particular, it addresses issues that made previously proposed algorithms non-implementable. The newly proposed algorithm does not use the strong oracle that was required in previous work, and only uses first-order information, as is common in large-scale optimization problems in ML. There is also improved analysis of the complexity of the algorithms.
The reviewers had many questions for the authors, and there was a fruitful and constructive discussion between all parties. Their remarks have been addressed, and all reviewers recommend accepting this work, and so do I.
The comments I would personally add are: 1) for an implementable algorithm, an actual implementation would have been nice; 2) since the motivation is to train models in an ML setting, some discussion about the "quality" of these stationary points from the point of view of generalization abilities would also nicely complete this work. | test | [
"Y-i0ipayhM",
"o3_nhyw9Yva",
"BvC5WdUP75",
"m9XeFHzTnTd",
"ffL-QQfYuBk",
"OHxEOR8xx-F",
"cbTG65LGAsL",
"-i13A7CAyO2",
"Qn2v_jek1gq",
"hwVMkqybAW3",
"DyCIuKzLcHr",
"2teyn28kC8e",
"HB2q7IFkZ_P",
"mmFYMwpbaPI",
"I1k52lD1o1B",
"gCJYpJP2mb",
"DPY7FkhiEYQ",
"6hnohsfzCCR",
"KeNP9TRNZr",... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_rev... | [
" Dear AC, \n\nI just updated my review and increased my score to a 7.\n\n",
" Dear reviewer, \n\nI wanted to check if you already had the opportunity to update your review.\n\nBest,\nAC",
" This is indeed a convincing example!\nClarke sub-differential does not exhibit good properties.\n\nThis indeed motivates ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
9,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
4
] | [
"o3_nhyw9Yva",
"hwVMkqybAW3",
"m9XeFHzTnTd",
"ffL-QQfYuBk",
"OHxEOR8xx-F",
"Qn2v_jek1gq",
"DPY7FkhiEYQ",
"2teyn28kC8e",
"I1k52lD1o1B",
"HB2q7IFkZ_P",
"nips_2022_slKVqAflN5",
"WU1gOgXpBkJ",
"mmFYMwpbaPI",
"KeNP9TRNZr",
"6hnohsfzCCR",
"nips_2022_slKVqAflN5",
"sRiqkcJ_DI",
"nips_2022_... |
nips_2022_Bwh6XmDEDe | Decoupled Self-supervised Learning for Graphs | This paper studies the problem of conducting self-supervised learning for node representation learning on graphs. Most existing self-supervised learning methods assume the graph is homophilous, where linked nodes often belong to the same class or have similar features. However, such assumptions of homophily do not always hold in real-world graphs. We address this problem by developing a decoupled self-supervised learning (DSSL) framework for graph neural networks. DSSL imitates a generative process of nodes and links from latent variable modeling of the semantic structure, which decouples different underlying semantics between different neighborhoods into the self-supervised learning process. Our DSSL framework is agnostic to the encoders and does not need prefabricated augmentations, thus is flexible to different graphs. To effectively optimize the framework, we derive the evidence lower bound of the self-supervised objective and develop a scalable training algorithm with variational inference. We provide a theoretical analysis to justify that DSSL enjoys the better downstream performance. Extensive experiments on various types of graph benchmarks demonstrate that our proposed framework can achieve better performance compared with competitive baselines. | Accept | This paper proposes a self-supervised learning method for graph-structured data, which can better explore global and local semantic dependencies. The method uses a probabilistic framework based on the mutual information maximization principle. Theoretical analysis is also developed to show a tighter upper bound on the downstream Bayes error can be obtained by the proposed method. After the rebuttal, all reviewers generally appreciate contributions made by this submission.
However, it is suggested to address the following problems in the final version.
- As identified by the first three reviewers, this paper does not specifically target non-homophilous graphs. Instead, it seems to better capture semantic information on graphs at a more non-local level, as emphasized by the last reviewer. The connections between global semantics and non-homophilous graphs need to be clarified.
- The above problems make reviewers question the necessity of studying non-homophilous graphs, whether sufficient methods are compared on homophilous graphs, and whether enough ablation studies are performed.
- While the authors also offer theoretical analysis, it is also not clear how the obtained results are connected with non-homophilous graphs. | train | [
"PNdQk3ji-n",
"wuyCRAiDR7O",
"WawqZyQre9",
"DwIwD9IQSJ",
"8Nn6Q-1bGw4",
"rhmbPrBwjRG",
"90cJ-LJ32EA",
"vrLa73Rhpko",
"4HCjc12cfSo",
"lHcAjW0jC2PD",
"DPZtmXXr_Ss",
"TdMLz9ezAQE",
"LyqIkM4WAcG",
"3tVB9Q6kks4",
"xUEz4n6gVu8",
"ZLJDxQ0loRL",
"DdfvPxszxm0",
"p5TlLPR_1q-"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks the authors for the detailed feedback, which addressed most of my concerns. I have raised my rating accordingly.",
" We appreciate the reviewer's further insightful comments. We respond to the comments as follows.\n\n**Firstly, we would like to say that conducting self-supervised learning on non-homophil... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5,
4
] | [
"TdMLz9ezAQE",
"WawqZyQre9",
"DwIwD9IQSJ",
"8Nn6Q-1bGw4",
"4HCjc12cfSo",
"90cJ-LJ32EA",
"3tVB9Q6kks4",
"4HCjc12cfSo",
"xUEz4n6gVu8",
"DPZtmXXr_Ss",
"ZLJDxQ0loRL",
"LyqIkM4WAcG",
"DdfvPxszxm0",
"p5TlLPR_1q-",
"nips_2022_Bwh6XmDEDe",
"nips_2022_Bwh6XmDEDe",
"nips_2022_Bwh6XmDEDe",
"n... |
nips_2022_Y-sdZLIi9R9 | Meta Reinforcement Learning with Finite Training Tasks - a Density Estimation Approach | In meta reinforcement learning (meta RL), an agent learns from a set of training tasks how to quickly solve a new task, drawn from the same task distribution. The optimal meta RL policy, a.k.a.~the Bayes-optimal behavior, is well defined, and guarantees optimal reward in expectation, taken with respect to the task distribution. The question we explore in this work is how many training tasks are required to guarantee approximately optimal behavior with high probability. Recent work provided the first such PAC analysis for a model-free setting, where a history-dependent policy was learned from the training tasks. In this work, we propose a different approach: directly learn the task distribution, using density estimation techniques, and then train a policy on the learned task distribution. We show that our approach leads to bounds that depend on the dimension of the task distribution. In particular, in settings where the task distribution lies in a low-dimensional manifold, we extend our analysis to use dimensionality reduction techniques and account for such structure, obtaining significantly better bounds than previous work, which strictly depend on the number of states and actions. The key of our approach is the regularization implied by the kernel density estimation method. We further demonstrate that this regularization is useful in practice, when `plugged in' the state-of-the-art VariBAD meta RL algorithm. | Accept | The paper provides a theoretical PAC analysis of meta-learning when the task distribution is directly learned. Under the assumption that the task distribution can be approximated in a low-dimensional space, this approach leads to improved bounds with respect to the previous literature.
The authors further illustrate how the lessons learned in the theory can be translated into a practical algorithm by integrating their kernel density estimation into the VariBAD algorithm.
There is general consensus that the paper makes several interesting contributions. First the authors devise a new algorithmic approach for meta-learning and derive PAC bounds that clearly improve over current results. The results are rigorous and their interpretation is insightful. Furthermore, the authors made a considerable effort in translating their method into an actual algorithm with a non-trivial empirical validation. While more work may be needed to have a full picture of the empirical merits and limitations of the proposed method, I am confident the paper can be built upon by other researchers in the area.
I strongly suggest the authors to integrate their rebuttal into the final version of the paper, in particular the discussion on the dependency on the dimensionality. | test | [
"LadQYf7rgd7",
"NZTJL-ZRAEy",
"Fw51RLl3GaLD",
"1LTO1-CDJQz",
"2kTGabMgS4a",
"3XD6KdWOlyG",
"qbGGYuFDgn",
"qCsXpaMkjw",
"P3DgVykkdcN",
"JdnWxn1TAl8",
"l8wFJ4yGdXV",
"tkSV3QlBoN3",
"0s8dkmggk-"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The additional experiments and the author's comment satisfied most of my concerns. I agree the structure will benefit and will look more unified with this experiment that is closer to the theoretical context (even though I would have loved to see an experiment involving a set of discrete MDPs perfectly observed, ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
2,
5
] | [
"qbGGYuFDgn",
"2kTGabMgS4a",
"1LTO1-CDJQz",
"3XD6KdWOlyG",
"0s8dkmggk-",
"tkSV3QlBoN3",
"l8wFJ4yGdXV",
"JdnWxn1TAl8",
"nips_2022_Y-sdZLIi9R9",
"nips_2022_Y-sdZLIi9R9",
"nips_2022_Y-sdZLIi9R9",
"nips_2022_Y-sdZLIi9R9",
"nips_2022_Y-sdZLIi9R9"
] |
nips_2022_AyGJDpN2eR6 | Exponentially Improving the Complexity of Simulating the Weisfeiler-Lehman Test with Graph Neural Networks | Recent work shows that the expressive power of Graph Neural Networks (GNNs) in distinguishing non-isomorphic graphs is exactly the same as that of the Weisfeiler-Lehman (WL) graph test. In particular, they show that the WL test can be simulated by GNNs. However, those simulations involve neural networks for the “combine” function of size polynomial or even exponential in the number of graph nodes $n$, as well as feature vectors of length linear in $n$.
We present an improved simulation of the WL test on GNNs with {\em exponentially} lower complexity. In particular, the neural network implementing the combine function in each node has only $\mathrm{polylog}(n)$ parameters, and the feature vectors exchanged by the nodes of GNN consists of only $O(\log n)$ bits. We also give logarithmic lower bounds for the feature vector length and the size of the neural networks, showing the (near)-optimality of our construction. | Accept | Even though there were some concerns regarding the limited practical consequences of these results to the ML/GNN community, the consensus is that the work makes a meaningful theoretical contribution toward the understanding of the connection between the Weisfeiler-Lehman (WL) Isomorphism Test and message-passing graph neural networks (MPNN). The work is well-written, stands out from the typical ML paper, and brings forth novel complexity bounds for the simulation of the 1WL-test with MPNN. | train | [
"OdXT5gwfazq",
"ECu330q0uhSI",
"HXBPjkAPmX",
"kx8ij5Y9P4GP",
"1IZ9FR7m2XD",
"_RYfLT06WS_",
"PZwc7XmdaC",
"pn1Ilqt_b0v",
"giu4hYr0cFa",
"x5aAuSZiFm7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for taking the time to answer my questions. I like the paper and hope it will be accepted. I agree with the authors that the expressivity results are interesting and significant for the ML community. \nI still think the authors should connect their results to practical machine learning (perhaps in the a... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"kx8ij5Y9P4GP",
"x5aAuSZiFm7",
"1IZ9FR7m2XD",
"pn1Ilqt_b0v",
"giu4hYr0cFa",
"PZwc7XmdaC",
"x5aAuSZiFm7",
"nips_2022_AyGJDpN2eR6",
"nips_2022_AyGJDpN2eR6",
"nips_2022_AyGJDpN2eR6"
] |
nips_2022__jg6Sf6tuF7 | Post-hoc estimators for learning to defer to an expert | Many practical settings allow a learner to defer predictions to one or more costly experts. For example, the learning to defer paradigm allows a learner to defer to a human expert, at some monetary cost. Similarly, the adaptive inference paradigm allows a base model to defer to one or more large models, at some computational cost. The goal in these settings is to learn classification and deferral mechanisms to optimise a suitable accuracy-cost tradeoff. To achieve this, a central issue studied in prior work is the design of a coherent loss function for both mechanisms. In this work, we demonstrate that existing losses have two subtle limitations: they can encourage underfitting when there is a high cost of deferring, and the deferral function can have a weak dependence on the base model predictions. To resolve these issues, we propose a post-hoc training scheme: we train a deferral function on top of a base model, with the objective of predicting to defer when the base model's error probability exceeds the cost of the expert model. This may be viewed as applying a partial surrogate to the ideal deferral loss, which can lead to a tighter approximation and thus better performance. Empirically, we verify the efficacy of post-hoc training on benchmarks for learning to defer and adaptive inference. | Accept | This paper proposes a novel framework for “learning to defer” (L2D), which decides when to defer the decision on an instance to an expert based on an estimate of the expert's cost. The key results include identifying the “failure mode” of existing L2D approaches, a novel post-hoc estimation procedure for model calibration, and thorough experiments showing that the proposed algorithm works well when compared to SOTA.
During the rebuttal phase, the authors included additional experiments which make the empirical performance more convincing (e.g., baselines + post-hoc thresholding, and additional baseline results as suggested by Reviewer xmUy). Other than several clarity issues, there were no critical concerns in the reviews.
There are valuable suggestions in the reviews, including improving the clarity when introducing key concepts and notations in the main text, and providing details of the experimental setting and results. The authors are strongly encouraged to address the concerns raised in the reviews when preparing a revision of this paper.
| train | [
"4HLpV05Vku7",
"Lpk_j8WQCHH",
"HKXDODOY2IG",
"PsHorcIycKI",
"OlV_Y21jLW",
"NIqF2aIvZvV",
"sYvoDMeNMd",
"r8g3I0lnCLX",
"YqxX4eSubzs",
"-MpvnwbLXl"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I wish to thank the authors for their response and clarifications. I have read over the updated paper and supplemental material. The paper is much improved and I remain confident in my assessment.",
" Thanks again for the detailed feedback. As the discussion period is ending shortly, we wanted to check if you h... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"NIqF2aIvZvV",
"PsHorcIycKI",
"OlV_Y21jLW",
"YqxX4eSubzs",
"-MpvnwbLXl",
"r8g3I0lnCLX",
"nips_2022__jg6Sf6tuF7",
"nips_2022__jg6Sf6tuF7",
"nips_2022__jg6Sf6tuF7",
"nips_2022__jg6Sf6tuF7"
] |
nips_2022_wFymjzZEEkH | Personalized Federated Learning towards Communication Efficiency, Robustness and Fairness | Personalized Federated Learning faces many challenges such as expensive communication costs, training-time adversarial attacks, and performance unfairness across devices. Recent developments witness a trade-off between a reference model and local models to achieve personalization. We follow the avenue and propose a personalized FL method towards the three goals. When it is time to communicate, our method projects local models into a shared-and-fixed low-dimensional random subspace and uses infimal convolution to control the deviation between the reference model and projected local models. We theoretically show our method converges for smooth objectives with square regularizers and the convergence dependence on the projection dimension is mild. We also illustrate the benefits of robustness and fairness on a class of linear problems. Finally, we conduct a large number of experiments to show the empirical superiority of our method over several state-of-the-art methods on the three aspects. | Accept | The paper gives an approach to personalized federated learning, which incorporates client heterogeneity, robustness to attacks and fairness in performance. The proposed method is interesting, though the simultaneous incorporation of multiple concerns makes it somewhat difficult to disentangle the role of different choices. The authors present theoretical results and a fairly comprehensive empirical evaluation showing effectiveness of their method.
The accept decision is primarily based on the substantial empirical analysis. As mentioned above, various algorithmic considerations should ideally be studied somewhat in isolation, such as through ablations. For instance, it is completely reasonable to incorporate personalization through the infimal convolution without a subspace projection. Presumably the authors do not do this due to the computational cost, but the joint effect of P and the personalized formulation is a bit tricky for the reader.
For the theory, assumption 1(a) is effectively vacuous in most ML settings, where the loss \tilde{f}_k is typically based on one or a few examples only, and hence cannot be strongly convex without including additional regularization in the loss. I appreciate the addition of the non-convex result in the revision, and would advise changing this to the main result. The per example strong convexity assumed here is not common in the literature at all (typically it is average loss strong convexity), so this result adds little value to the paper.
The reviewers also highlighted multiple weaknesses in the evaluation and method, such as scalability in practice due to the projection step and the use of non-standard models in evaluation. These should be properly acknowledged in the revision. | train | [
"jO9nXubDObH",
"Lq6b1QyD_J",
"ROwtbCjXs2a",
"qlg_dNgTJ8Y",
"jY07bR5Ean",
"3MeeW0W667C",
"vOIi15N53s",
"v0Yrsbfpj0WZ",
"IL6qURPOZvQ",
"DoCEXUdjWJd",
"ZXf_zEFqrNQ",
"_hXheLgjqsX",
"AEcHXJYv0cX",
"uZQ5gKbCntT",
"MGt7UzU2dez",
"RKUB9lm9nal",
"IBBqmPL42E8",
"R26IfesZwl8",
"3PAaEhk-ohR... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"... | [
" Thank you for the complexity analysis, and I've read your new proof. I will keep my score as the complexity/cost issues seem to be a point worth improvement.",
" 1) Since the birth of ResNet, countless variants have been proposed in the literature. \nIt is impossible to test them all.\nMoreover, the key of Res... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
6,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4,
3,
3
] | [
"vOIi15N53s",
"3MeeW0W667C",
"DoCEXUdjWJd",
"jY07bR5Ean",
"ZXf_zEFqrNQ",
"AEcHXJYv0cX",
"v0Yrsbfpj0WZ",
"_hXheLgjqsX",
"3PAaEhk-ohR",
"R26IfesZwl8",
"IBBqmPL42E8",
"RKUB9lm9nal",
"MGt7UzU2dez",
"nips_2022_wFymjzZEEkH",
"nips_2022_wFymjzZEEkH",
"nips_2022_wFymjzZEEkH",
"nips_2022_wFym... |
nips_2022_GdHVClGh9N | Bayesian Optimistic Optimization: Optimistic Exploration for Model-based Reinforcement Learning | Reinforcement learning (RL) is a general framework for modeling sequential decision making problems, at the core of which lies the dilemma of exploitation and exploration. An agent failing to explore systematically will inevitably fail to learn efficiently. Optimism in the face of uncertainty (OFU) is a conventionally successful strategy for efficient exploration. An agent following the OFU principle explores actively and efficiently. However, when applied to model-based RL, it involves specifying a confidence set of the underlying model and solving a series of nonlinear constrained optimization, which can be computationally intractable. This paper proposes an algorithm, Bayesian optimistic optimization (BOO), which adopts a dynamic weighting technique for enforcing the constraint rather than explicitly solving a constrained optimization problem. BOO is a general algorithm proved to be sample-efficient for models in a finite-dimensional reproducing kernel Hilbert space. We also develop techniques for effective optimization and show through some simulation experiments that BOO is competitive with the existing algorithms. | Accept | The paper presents a new model-based approach based on optimism in the face of uncertainty.
After reading each other's reviews and the authors' feedback, most of the reviewers' concerns were solved and the reviewers agree that this paper deserves publication.
However, while preparing the final version of their paper, the authors have to consider the reviewers' suggestions and in particular, they are expected to extend the related work discussion and move some of the experimental results from the appendix to the main paper. | train | [
"UcbDOOsoFf3",
"eU53xMUQ_8m",
"o0-gWCzDz5X",
"twDnZvwFPtw",
"YWrWtHs1rrp",
"yTBrCQtbLt",
"_57Aagb8O9p",
"iyJojYahUAs",
"osliAmjtlbd"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks to the authors for their responses.\n\nI checked the revision, especially Section 5 (including Theorem 5.1).\n\nThe major concern (Theorem 5.1) has been addressed.\n\nI change the score from 4 to 5.",
" Dear Reviewer EVFD,\n\nThanks again for your comments. We hope our response clarifies your concerns. S... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
2,
2,
3
] | [
"YWrWtHs1rrp",
"_57Aagb8O9p",
"osliAmjtlbd",
"iyJojYahUAs",
"_57Aagb8O9p",
"nips_2022_GdHVClGh9N",
"nips_2022_GdHVClGh9N",
"nips_2022_GdHVClGh9N",
"nips_2022_GdHVClGh9N"
] |
nips_2022_ZLcwSgV-WKH | Pre-trained Adversarial Perturbations | Self-supervised pre-training has drawn increasing attention in recent years due to its superior performance on numerous downstream tasks after fine-tuning. However, it is well-known that deep learning models lack the robustness to adversarial examples, which can also invoke security issues to pre-trained models, despite being less explored. In this paper, we delve into the robustness of pre-trained models by introducing Pre-trained Adversarial Perturbations (PAPs), which are universal perturbations crafted for the pre-trained models to maintain the effectiveness when attacking fine-tuned ones without any knowledge of the downstream tasks. To this end, we propose a Low-Level Layer Lifting Attack (L4A) method to generate effective PAPs by lifting the neuron activations of low-level layers of the pre-trained models. Equipped with an enhanced noise augmentation strategy, L4A is effective at generating more transferable PAPs against the fine-tuned models. Extensive experiments on typical pre-trained vision models and ten downstream tasks demonstrate that our method improves the attack success rate by a large margin compared to the state-of-the-art methods. | Accept | This paper develops an attack method for the pre-trained models so the attack can remain effective even for downstream tasks. The authors introduced a Low-Level Layer Lifting Attack (L4A) method, which mainly perturbs the neurons lying at the low-level layers of the pre-trained models. Their method looks simple and effective, and multiple datasets and settings are reported to justify its effectiveness. During rebuttal, the authors also reported additional results on adversarial pre-training, which would be valuable to add to the final paper.
On the negative side, the proposed method is tested only on the image classification task. Besides, the authors should have provided the required social/ethical review statement and questionnaire responses - please make sure to add.
Overall, this paper passes the bar given the generally positive sentiment among reviewers.
| train | [
"nq2lEduNL0H",
"9N1RrsoZS00",
"aG1gARGKSSy",
"LOR1r17sXyN",
"yseqo92_6_m",
"XFO4DB-Z9t",
"JBb8fdg1Ea",
"DdV88buG6kj",
"-bhcc7fGYj",
"q6ZfBoM0_dKS",
"tGoXyZ1F9Kh",
"Q35Ahx6DzF",
"fngA8fQlhX",
"VjhMeT1zXYc",
"-pg0Od9g8Bq",
"_PSv6By7zv6",
"fmXu7PcwjQC",
"kLwbE6vwBXZ",
"foYP2q_ykLf",... | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
... | [
" The paper was flagged by a reviewer for \"Privacy and Security (e.g., consent)\" concerns, but upon reading the manuscript it is unclear to me what ethical concerns exist. The paper does not appear to be focusing on human faces or other issues that might raise issues of privacy or consent. (I am willing to be cor... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
2,
4
] | [
"nips_2022_ZLcwSgV-WKH",
"yseqo92_6_m",
"-bhcc7fGYj",
"-bhcc7fGYj",
"fmXu7PcwjQC",
"JBb8fdg1Ea",
"DdV88buG6kj",
"q6ZfBoM0_dKS",
"fngA8fQlhX",
"-pg0Od9g8Bq",
"nips_2022_ZLcwSgV-WKH",
"vyPyf_CxS9d",
"vyPyf_CxS9d",
"foYP2q_ykLf",
"kLwbE6vwBXZ",
"fmXu7PcwjQC",
"nips_2022_ZLcwSgV-WKH",
... |
nips_2022_NIJFp_n4MXt | A contrastive rule for meta-learning | Humans and other animals are capable of improving their learning performance as they solve related tasks from a given problem domain, to the point of being able to learn from extremely limited data. While synaptic plasticity is generically thought to underlie learning in the brain, the precise neural and synaptic mechanisms by which learning processes improve through experience are not well understood. Here, we present a general-purpose, biologically-plausible meta-learning rule which estimates gradients with respect to the parameters of an underlying learning algorithm by simply running it twice. Our rule may be understood as a generalization of contrastive Hebbian learning to meta-learning and notably, it neither requires computing second derivatives nor going backwards in time, two characteristic features of previous gradient-based methods that are hard to conceive in physical neural circuits. We demonstrate the generality of our rule by applying it to two distinct models: a complex synapse with internal states which consolidate task-shared information, and a dual-system architecture in which a primary network is rapidly modulated by another one to learn the specifics of each task. For both models, our meta-learning rule matches or outperforms reference algorithms on a wide range of benchmark problems, while only using information presumed to be locally available at neurons and synapses. We corroborate these findings with a theoretical analysis of the gradient estimation error incurred by our rule. 
| Accept | The Reviewers appreciated the novelty factor of the contrastive meta-learning algorithm proposed in the paper, the theoretical analysis establishing a formal connection with equilibrium propagation, and the appealing features of the resulting meta-learning procedure, which include memory and computation efficiency, as well as the fact that the algorithm affords a biologically-plausible implementation that only requires locally available information for parameter updates.
To concretely showcase these properties the paper demonstrates two instantiations of the proposed algorithm that are mechanistically realized through synaptic consolidation and top-down neuronal modulation, respectively.
Finally, the paper validates the algorithm on standard few-shot learning benchmarks.
The main weaknesses of the paper identified by the Reviewers are the empirical evaluation, which would benefit from a more extensive comparison between methods and more experiments on more challenging meta-learning datasets, and the discussion on the candidate neurobiological substrate for a brain implementation of contrastive meta-learning, which would benefit from a more detailed and systematic description.
These limitations however do not substantially detract from the overall quality, relevance and interest of the paper, which Reviewers unanimously recommend for acceptance. | val | [
"0zoYGjTo8o7",
"Fz6Ivm7gtzU",
"O03eYJ3YqP",
"0-fkoy5d1S",
"8y567eNKMF6",
"sLEjTCmtiO3",
"-IrLJrxe61p",
"Dhn3YdEndaf",
"Z0LHcqALoV8",
"mY-CK-ktiMB",
"7SnnAwUGMkK",
"J9rxKfdwRZN",
"an8gJf69pNi",
"mzcqQkW3gO4",
"zQtKF3dgz2w",
"KE9GJ9uqcni"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for their response and regret that our reply has not convincingly addressed all of their concerns. We believe that the remaining concerns are exciting future research directions, but that they go beyond the scope of the current paper.\n\nIn its current scope, our paper joins an active line o... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"Fz6Ivm7gtzU",
"Z0LHcqALoV8",
"nips_2022_NIJFp_n4MXt",
"nips_2022_NIJFp_n4MXt",
"sLEjTCmtiO3",
"an8gJf69pNi",
"mzcqQkW3gO4",
"J9rxKfdwRZN",
"mY-CK-ktiMB",
"7SnnAwUGMkK",
"KE9GJ9uqcni",
"zQtKF3dgz2w",
"nips_2022_NIJFp_n4MXt",
"nips_2022_NIJFp_n4MXt",
"nips_2022_NIJFp_n4MXt",
"nips_2022_... |