| paper_id (string, len 19-21) | paper_title (string, len 8-170) | paper_abstract (string, len 8-5.01k) | paper_acceptance (string, 18 classes) | meta_review (string, len 29-10k) | label (string, 3 classes) | review_ids (list) | review_writers (list) | review_contents (list) | review_ratings (list) | review_confidences (list) | review_reply_tos (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|
nips_2021_gXdOOeqRN8g | On the Cryptographic Hardness of Learning Single Periodic Neurons | Min Jae Song, Ilias Zadik, Joan Bruna | accept | This work establishes computational hardness for the problem of learning periodic functions with a small amount of adversarial additive noise added to each example. This problem was recently considered in a work by (Song et al., 2017) who established hardness in the Statistical Query model. The current work provides a reduction from the computationally hard problem of Continuous Learning with Errors (CLWE) -- a continuous analogue of LWE that was introduced in a recent work (where its computational hardness was established). A second contribution of the paper is an efficient algebraic algorithm solving the underlying learning problem when the additive noise is very small.
The reviewers uniformly agreed that this is an interesting contribution that should appear in NeurIPS. | val | [
"WY2gN-bQhkL",
"Zh_qJRHT9MI",
"_6TyqFr2QfL",
"1t4xK58Ugx",
"ZBnkj5wfXz-",
"UmCTvR_hQqL",
"N1aoT6SvsJ",
"AWb5UslVVN",
"4HOdUGr9Dth",
"BloeAt3w7Hj",
"uun3-WRkB5n"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The first line of the abstract nicely summarises the work and I cannot summarise this any better. \n\"We show a simple reduction which demonstrates the cryptographic hardness of learning a single periodic neuron over isotropic Gaussian distributions in the presence of noise\".\n\nThe goal is to learn w using sampl... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
7,
8
] | [
3,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_gXdOOeqRN8g",
"AWb5UslVVN",
"nips_2021_gXdOOeqRN8g",
"UmCTvR_hQqL",
"N1aoT6SvsJ",
"_6TyqFr2QfL",
"BloeAt3w7Hj",
"uun3-WRkB5n",
"WY2gN-bQhkL",
"nips_2021_gXdOOeqRN8g",
"nips_2021_gXdOOeqRN8g"
] |
nips_2021_FEIFFzmq_V_ | PCA Initialization for Approximate Message Passing in Rotationally Invariant Models | We study the problem of estimating a rank-1 signal in the presence of rotationally invariant noise--a class of perturbations more general than Gaussian noise. Principal Component Analysis (PCA) provides a natural estimator, and sharp results on its performance have been obtained in the high-dimensional regime. Recently, an Approximate Message Passing (AMP) algorithm has been proposed as an alternative estimator with the potential to improve the accuracy of PCA. However, the existing analysis of AMP requires an initialization that is both correlated with the signal and independent of the noise, which is often unrealistic in practice. In this work, we combine the two methods, and propose to initialize AMP with PCA. Our main result is a rigorous asymptotic characterization of the performance of this estimator. Both the AMP algorithm and its analysis differ from those previously derived in the Gaussian setting: at every iteration, our AMP algorithm requires a specific term to account for PCA initialization, while in the Gaussian case, PCA initialization affects only the first iteration of AMP. The proof is based on a two-phase artificial AMP that first approximates the PCA estimator and then mimics the true AMP. Our numerical simulations show an excellent agreement between AMP results and theoretical predictions, and suggest an interesting open direction on achieving Bayes-optimal performance.
| accept | This paper received 5 reviews. The scores/confidences were 6/3, 6/2, 8/5, 6/3, and 10/4, implying that all the reviewers evaluated this paper positively. We can notice that there is a relatively large spread among these review scores, but I think that the review contents are more or less coherent across all the reviews: The major strength of this paper is certainly its high technical quality, as commented by most reviewers. On the other hand, the major weakness is that it does not succeed in demonstrating significance of the theoretical contribution in real applications, so that it would be difficult for this paper to attract a wide audience beyond those who have specific interest in AMP and related iterative inference methods. The five reviews weighed these points differently on the basis of their own expertise, resulting in the spread of the scores. I am thus happy to recommend acceptance of this paper for presentation at the NeurIPS conference. At the same time, I would like to encourage the authors to consider demonstrating the significance of their contribution in real-world settings.
Very minor points:
- Lines 107-109: One would have to assume that $O$ is independent of $\Lambda$.
- Lines 127-129: Similarly, one would have to assume that $O$ and $Q$ are independent and independent of $\Lambda$. | train | [
"YHR93pOVjFl",
"PeljzHZDNQC",
"zD33zZ8dsG",
"VK9McPna58v",
"JxdrnrdzBd1",
"mN1Rgq0rC_s",
"1BMxQsA1vxO",
"h8Of575C3oD",
"s9iv8wf9H6P",
"t_Z68bom-nh",
"xK6nZ4WFws",
"PV6ToV_xM6S",
"VvOyMJBDsTN",
"TrJv4RmEwLp",
"-diK-HaE_vh"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the response. I still have concerns about what are and how many applications can be benefited by this framework. Yet, based on the other reviewers and the potential theoretical contribution to AMP, I am increasing my rating to 6.",
"The paper studies the asymptotics of the AMP algorithms with PCA ini... | [
-1,
6,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
10
] | [
-1,
3,
2,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"VvOyMJBDsTN",
"nips_2021_FEIFFzmq_V_",
"nips_2021_FEIFFzmq_V_",
"PV6ToV_xM6S",
"nips_2021_FEIFFzmq_V_",
"t_Z68bom-nh",
"xK6nZ4WFws",
"s9iv8wf9H6P",
"-diK-HaE_vh",
"JxdrnrdzBd1",
"TrJv4RmEwLp",
"zD33zZ8dsG",
"PeljzHZDNQC",
"nips_2021_FEIFFzmq_V_",
"nips_2021_FEIFFzmq_V_"
] |
nips_2021_817F5yuNAf1 | Automatic and Harmless Regularization with Constrained and Lexicographic Optimization: A Dynamic Barrier Approach | Many machine learning tasks have to make a trade-off between two loss functions, typically the main data-fitness loss and an auxiliary loss. The most widely used approach is to optimize the linear combination of the objectives, which, however, requires manual tuning of the combination coefficient and is theoretically unsuitable for non-convex functions. In this work, we consider constrained optimization as a more principled approach for trading off two losses, with a special emphasis on lexicographic optimization, a degenerated limit of constrained optimization which optimizes a secondary loss inside the optimal set of the main loss. We propose a dynamic barrier gradient descent algorithm which provides a unified solution of both constrained and lexicographic optimization. We establish the convergence of the method for general non-convex functions.
| accept | This paper unfortunately only received three reviews.
One reviewer was negative due to concerns with some proofs but I think the authors have mostly addressed them.
Two reviewers were very positive and ready to back up the paper.
I would therefore like to accept the paper, assuming that the authors comply to the following two requests:
- Carefully proofread the paper and fix the numerous small typos
- Change the title, as the current title is awkward and uninformative (honestly with such a title, the authors are shooting themselves in the foot). I would suggest "Bi-objective optimization: a dynamic barrier approach" but other choices are of course possible. | train | [
"BQFYFSpC03o",
"Ozwic7MoPYq",
"6Km3OHhUAXV",
"iqfpvp89DGn",
"5aHf9oEhUHj",
"6Hvq0Wjotwo",
"LStTv8cdreC",
"SwuWteax_VX",
"hPVYDgBVotL",
"sglzluAJ3la",
"D3UG5OsRL8R"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Dear Reviewer, \n\nThanks a lot for you consideration. We will further elaborate the points in your review in the revision of the draft. ",
" \nDear reviewer, \nthanks for considering our response. We have been working on improving the paper and proofs based on your comments. \n\nWe want to further clarify the ... | [
-1,
-1,
6,
5,
-1,
-1,
-1,
-1,
-1,
-1,
7
] | [
-1,
-1,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"6Hvq0Wjotwo",
"5aHf9oEhUHj",
"nips_2021_817F5yuNAf1",
"nips_2021_817F5yuNAf1",
"sglzluAJ3la",
"hPVYDgBVotL",
"iqfpvp89DGn",
"6Km3OHhUAXV",
"D3UG5OsRL8R",
"iqfpvp89DGn",
"nips_2021_817F5yuNAf1"
] |
nips_2021_Ruw3MHL9jAO | Corruption Robust Active Learning | Yifang Chen, Simon S. Du, Kevin G. Jamieson | accept | The paper studies active learning under label corruptions and presents a robust version of the classical CAL algorithm. All the reviewers agreed that the paper studies an interesting direction and presents good theoretical results. While the results in the paper do not present a complete picture of the problem setting, there is enough technical novelty to make the paper slightly above the bar for publication. The authors are encourage to carefully take into account the reviewers' comments when preparing the final version. | train | [
"p1bCnl_TEl",
"7z7pVito8cT",
"ZU20-1W0wDa",
"9C-oBlCyRvk",
"cjPh6XB7Fg_",
"RiHm6ZhZa_R",
"0IOnCVizxRN",
"ZaKzk7Dq7Dm",
"5iHOl8CbsjF",
"hiEyJfmgAO",
"8Chm437xTp3"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper discusses binary classification when the labels can be corrupted by noise in an active learning scenario. In this direction the authors show that in a benign corruption setting the, a known algorithm, which the authors call RobustCAL, can achieve pretty-much the same label complexity as in a non-corrupte... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"nips_2021_Ruw3MHL9jAO",
"ZU20-1W0wDa",
"RiHm6ZhZa_R",
"p1bCnl_TEl",
"8Chm437xTp3",
"5iHOl8CbsjF",
"hiEyJfmgAO",
"nips_2021_Ruw3MHL9jAO",
"nips_2021_Ruw3MHL9jAO",
"nips_2021_Ruw3MHL9jAO",
"nips_2021_Ruw3MHL9jAO"
] |
nips_2021_nW4xl2CjcVg | Metadata-based Multi-Task Bandits with Bayesian Hierarchical Models | How to explore efficiently is a central problem in multi-armed bandits. In this paper, we introduce the metadata-based multi-task bandit problem, where the agent needs to solve a large number of related multi-armed bandit tasks and can leverage some task-specific features (i.e., metadata) to share knowledge across tasks. As a general framework, we propose to capture task relations through the lens of Bayesian hierarchical models, upon which a Thompson sampling algorithm is designed to efficiently learn task relations, share information, and minimize the cumulative regrets. Two concrete examples for Gaussian bandits and Bernoulli bandits are carefully analyzed. The Bayes regret for Gaussian bandits clearly demonstrates the benefits of information sharing with our algorithm. The proposed method is further supported by extensive experiments.
| accept | Thank you to the authors and the reviewers for their contributions to the conference! This paper proposes a new bandit formulation with metadata, which they solve using a Bayesian hierarchical framework. They then propose a meta Thompson sampling algorithm and show regret guarantees using prior alignment. The reviewers uniformly appreciated the paper, so my recommendation is to accept the paper. A few concerns were raised regarding the lack of a lower bound and experiments on real data. For the former, I would suggest that the authors add some discussion on why this is challenging to do. For the latter, please add the new MovieLens experiments. Additional clarifications based on the rebuttal response are of course welcome as well. | train | [
"4DQEo0cCC23",
"91U1TadqPwW",
"gNrWAe-gOI4",
"Cx8MvqXCRti",
"cfVeJrE1Ew",
"wLGN1lCXi4E",
"2Q1lnyOeAF6",
"jgtUfr1brF7",
"1ndv6aWmIHM"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We would like to appreciate all reviewers for your time spent and valuable comments, and also our AC for handling this paper! We hope our point-by-point responses clarified/addressed your questions about our paper, and please feel free to let us know any other feedbacks/questions/comments before the end of the re... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
4
] | [
"nips_2021_nW4xl2CjcVg",
"jgtUfr1brF7",
"1ndv6aWmIHM",
"2Q1lnyOeAF6",
"wLGN1lCXi4E",
"nips_2021_nW4xl2CjcVg",
"nips_2021_nW4xl2CjcVg",
"nips_2021_nW4xl2CjcVg",
"nips_2021_nW4xl2CjcVg"
] |
nips_2021_43fmQ-db-yJ | Program Synthesis Guided Reinforcement Learning for Partially Observed Environments | A key challenge for reinforcement learning is solving long-horizon planning problems. Recent work has leveraged programs to guide reinforcement learning in these settings. However, these approaches impose a high manual burden on the user since they must provide a guiding program for every new task. Partially observed environments further complicate the programming task because the program must implement a strategy that correctly, and ideally optimally, handles every possible configuration of the hidden regions of the environment. We propose a new approach, model predictive program synthesis (MPPS), that uses program synthesis to automatically generate the guiding programs. It trains a generative model to predict the unobserved portions of the world, and then synthesizes a program based on samples from this model in a way that is robust to its uncertainty. In our experiments, we show that our approach significantly outperforms non-program-guided approaches on a set of challenging benchmarks, including a 2D Minecraft-inspired environment where the agent must complete a complex sequence of subtasks to achieve its goal, and achieves a similar performance as using handcrafted programs to guide the agent. Our results demonstrate that our approach can obtain the benefits of program-guided reinforcement learning without requiring the user to provide a new guiding program for every new task.
| accept | This is an easy meta-review to write, as the during discussion, the consensus clearly shifted towards a mean of 7 amongst those reviewers who actively responded, with the outlier not having replied to the authors rebuttal. This paper is pretty cool — it proposes a method for program-synthesis guided RL, but does not depend on privileged information in the form of ground truth programs. It performs better than comparable approaches that do not have the inductive bias of program-synthesis within their agents. I'm of the opinion we should see more work in this area at NeurIPS, and the quality of this work definitely warrants it being an exemplar thereof. I recommend acceptance. | train | [
"Gvu4ZLghDXv",
"QRYj0vRFZDg",
"JZT--jA6QW",
"Y4WxKMZesBP",
"wX7qyIHyp-C",
"LVMhGZsqDde",
"VoY_w6FFPOM",
"UJrcWXvmK74",
"ndAq9qpZLHU",
"1vWEik742l_",
"mujnpZK4gsH",
"qZ2TaPQfHz",
"7YECKidkA6t",
"FWTVpfSF7Yv"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper builds upon previous work using a guiding program over options trained with Reinforcement Learning to solve new tasks in an environment that requires planning.\nIt adds capabilities to deal with partially observable environments and reduces the burden in the user by introducing a world model (hallucinat... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_43fmQ-db-yJ",
"mujnpZK4gsH",
"wX7qyIHyp-C",
"wX7qyIHyp-C",
"LVMhGZsqDde",
"VoY_w6FFPOM",
"UJrcWXvmK74",
"FWTVpfSF7Yv",
"7YECKidkA6t",
"Gvu4ZLghDXv",
"qZ2TaPQfHz",
"nips_2021_43fmQ-db-yJ",
"nips_2021_43fmQ-db-yJ",
"nips_2021_43fmQ-db-yJ"
] |
nips_2021_xVZx1SXb_IU | Robust Allocations with Diversity Constraints | We consider the problem of allocating divisible items among multiple agents, and consider the setting where any agent is allowed to introduce {\emph diversity constraints} on the items they are allocated. We motivate this via settings where the items themselves correspond to user ad slots or task workers with attributes such as race and gender on which the principal seeks to achieve demographic parity. We consider the following question: When an agent expresses diversity constraints into an allocation rule, is the allocation of other agents hurt significantly? If this happens, the cost of introducing such constraints is disproportionately borne by agents who do not benefit from diversity. We codify this via two desiderata capturing {\em robustness}. These are {\emph no negative externality} -- other agents are not hurt -- and {\emph monotonicity} -- the agent enforcing the constraint does not see a large increase in value. We show in a formal sense that the Nash Welfare rule that maximizes product of agent values is {\emph uniquely} positioned to be robust when diversity constraints are introduced, while almost all other natural allocation rules fail this criterion. We also show that the guarantees achieved by Nash Welfare are nearly optimal within a widely studied class of allocation rules. We finally perform an empirical simulation on real-world data that models ad allocations to show that this gap between Nash Welfare and other rules persists in the wild.
| accept | Overall the reviewers are quite positive about this paper: every reviewer thought the paper studies an interesting topic, and the results are interesting. A weakness of the paper is that it does not quite fully explore the space. This makes it hard to really compare the different allocation rules, since various subsets of results are shown for each rule. The paper would be substantially strengthened if it were able to give a more complete accounting of the properties of each allocation rule. It is suggest that the authors partially address this shortcoming by giving some sort of overview of the results that can better facilitate comparison. One reviewer brought up a deficiency in comparing to the existing concept of "bossiness," which seems highly related to the proposed methods. The authors are strongly encouraged to add some comparison to this literature.
| train | [
"x9a3Sc5ovz-",
"vx2h-88EmUx",
"8MkNnvt79FT",
"yKiszI6JOS",
"AtKCHYoCPIw",
"JW9MeWD2EG-",
"i0PF4G-1lW",
"sfFedmpYsl"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a framework for and and analysis of algorithms that allocate divisible resources to a set of agents when the agents can specify both utilities as well as constraints over the items as well. This is motivated by, e.g., computational advertising settings where we want to allocate users to adverti... | [
5,
-1,
-1,
-1,
-1,
6,
6,
8
] | [
4,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2021_xVZx1SXb_IU",
"x9a3Sc5ovz-",
"sfFedmpYsl",
"JW9MeWD2EG-",
"i0PF4G-1lW",
"nips_2021_xVZx1SXb_IU",
"nips_2021_xVZx1SXb_IU",
"nips_2021_xVZx1SXb_IU"
] |
nips_2021_8CaXYuLlZ6o | Activation Sharing with Asymmetric Paths Solves Weight Transport Problem without Bidirectional Connection | One of the reasons why it is difficult for the brain to perform backpropagation (BP) is the weight transport problem, which argues forward and feedback neurons cannot share the same synaptic weights during learning in biological neural networks. Recently proposed algorithms address the weight transport problem while providing good performance similar to BP in large-scale networks. However, they require bidirectional connections between the forward and feedback neurons to train their weights, which is observed to be rare in the biological brain. In this work, we propose an Activation Sharing algorithm that removes the need for bidirectional connections between the two types of neurons. In this algorithm, hidden layer outputs (activations) are shared across multiple layers during weight updates. By applying this learning rule to both forward and feedback networks, we solve the weight transport problem without the constraint of bidirectional connections, also achieving good performance even on deep convolutional neural networks for various datasets. In addition, our algorithm could significantly reduce memory access overhead when implemented in hardware.
| accept | In this paper the authors present a solution to the problem of bidirectional connections that exist in some solutions to the weight transport problem. More specifically, Akrout et al. (2019) (https://arxiv.org/abs/1904.05391) proposed two algorithmic solutions to the weight transport problem that involved training a separate "error pathway" of neurons, wherein each neuron in the forward processing pathway had a paired error neuron that it was bidirectionally connected to. This is potentially problematic from a biological perspective, as there is no evidence for such a paired error pathway in the neocortex, and moreover, there is no guarantee of bidirectional connections in real neural networks. The present paper solves the bidirectional connection problem (but not the paired error pathway problem) by allowing feedforward neurons to project to multiple error neurons as long as they are equal to or higher in the hierarchy. The authors show that weight alignment and learning can still proceed fairly well under these conditions.
This paper received mixed reviews on the borderline. One of the reviewers was fairly positive, two were borderline accepts, and one felt it was a clear reject. The authors' responses were thorough and clear enough that one of the borderline reviewers updated their score from a 6 to a 7, leading to a final average score of 6 (i.e. a borderline accept). As such, this is a borderline case as an AC. This is especially so because biological plausibility is a tricky concept and sometimes hard to pin down. Generally, it is hard to justify rejecting a paper on biological plausibility alone. And, this paper appears to be technically sound. But, biological plausibility is also not a concept without any meaning, and there are some basic aspects of neurophysiology that we can generally employ when asking about biological plausibility. Moreover, when a paper is framed as having its central contribution be that it is more biologically plausible than previous algorithms the question of biological plausibility necessarily becomes the focus.
Nonetheless, after all these considerations, and after a lot of discussion, an accept decision was reached. Ultimately, the paper is a sound contribution to the field and deserves to be published.
However, given the reviews, the authors may want to consider the following issues for when they prepare the camera ready version though:
1) The paper is focused 100% on solving a problem that exists for the algorithms proposed in the paper of Akrout et al. (2019). Only this paper assumes bidirectional connections between the backward and forward pathway neurons. Other papers on weight alignment (such as: https://www.nature.com/articles/s41593-021-00857-x, https://openreview.net/forum?id=rJxWxxSYvB, http://proceedings.mlr.press/v119/kunin20a.html and https://arxiv.org/abs/2005.04168) don't use a separate error pathway with paired neurons and bidirectional connections. Thus, at face value, this paper's contribution is really just to address a gap in biological plausibility introduced by a specific paper (Akrout et al. 2019). That's fine, but it makes the potential impact much more limited. Can the authors articulate how their approach may be used more broadly than to just solve the bidirectional connection issue in Akrout et al.?
2) The proposed solution is not clearly all that more biologically plausible than that proposed by Akrout et al., based on well-known facts of neurophysiology. Specifically, it appears that this model still mandates that an error pathway with paired neurons for the feedforward pathway exists (i.e. each feedforward neuron has an error neuron partner), and this was arguably the most problematic aspect of the Akrout et al. proposal, since there is zero empirical evidence for such paired pathways in the brain. Moreover, there are other biological plausibility issues that this paper introduces, as pointed out by the critical reviewers, such as the errors now being synapses specific and the need for additional feedforward pathways with the existing (and as noted) biologically implausible paired error pathway neurons. It would be good for the authors to address these concerns more in the final version.
In summary, this paper is technically sound, and potentially interesting, so it is an accept. But, per some of the reviews, at face value it is focused on solving a problem in a specific model, and there are still issues with biological plausibility, so the authors should consider these issues when preparing the camera ready version.
| train | [
"AxTTDJTMmlu",
"AmNX0bZJ-o0",
"ZIQ0QKA2vl",
"8G-rjGXFmkE",
"WCOC9UeUSLa",
"8IOqQq1u8yH",
"HKsiWYy2xbz",
"EfOrA0Qsy0P",
"gljGEv_N8aa",
"TysjP4XGeEY",
"yb_SYBGLytU",
"OVzg2ucp8D2",
"ByvWbDpt33O",
"ictzNfBm5du",
"WNb14A8tqti"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a novel approximation to backpropagation that solves the need for bidirectional connections between the forward pass neurons, and the backwards pass circuitry. This is done via activation sharing, meaning that the weight update is computed by the backward pass circuitry and activations of some l... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"nips_2021_8CaXYuLlZ6o",
"ZIQ0QKA2vl",
"8G-rjGXFmkE",
"8IOqQq1u8yH",
"WNb14A8tqti",
"AxTTDJTMmlu",
"WNb14A8tqti",
"ictzNfBm5du",
"ictzNfBm5du",
"AxTTDJTMmlu",
"ByvWbDpt33O",
"ByvWbDpt33O",
"nips_2021_8CaXYuLlZ6o",
"nips_2021_8CaXYuLlZ6o",
"nips_2021_8CaXYuLlZ6o"
] |
nips_2021_gISH-80g05u | BlendGAN: Implicitly GAN Blending for Arbitrary Stylized Face Generation | Generative Adversarial Networks (GANs) have made a dramatic leap in high-fidelity image synthesis and stylized face generation. Recently, a layer-swapping mechanism has been developed to improve the stylization performance. However, this method is incapable of fitting arbitrary styles in a single model and requires hundreds of style-consistent training images for each style. To address the above issues, we propose BlendGAN for arbitrary stylized face generation by leveraging a flexible blending strategy and a generic artistic dataset. Specifically, we first train a self-supervised style encoder on the generic artistic dataset to extract the representations of arbitrary styles. In addition, a weighted blending module (WBM) is proposed to blend face and style representations implicitly and control the arbitrary stylization effect. By doing so, BlendGAN can gracefully fit arbitrary styles in a unified model while avoiding case-by-case preparation of style-consistent training images. To this end, we also present a novel large-scale artistic face dataset AAHQ. Extensive experiments demonstrate that BlendGAN outperforms state-of-the-art methods in terms of visual quality and style diversity for both latent-guided and reference-guided stylized face synthesis.
| accept | UPDATE: The revision of this paper has been reviewed and the paper has been accepted. However, the following additional minor changes are required for the camera-ready:
- Acknowledge in the main text the issue that the new model has the same challenges as PULSE (i.e., input images with darker skin are now stylized as lighter skinned faces in the output).
- Update the "Broader Impacts" section to mention the fact that face data is biometric data and thus needs to be sourced and distributed more carefully, with attention to potential privacy, consent and copyright issues.
----
It came to the attention of the program chairs and ethics review chairs very late in the review process that this paper deals with face generation, a sensitive application, but was not flagged for ethics review. An emergency ethics review was obtained and the program chairs and ethics review chairs then discussed the paper in detail.
Based on this discussion, we have decided to conditionally accept the paper, given the ethical concerns related to face generation more broadly. In order for this paper to be fully accepted, authors need to address the following concerns:
- Provide examples beyond light-skinned faces and other potentially biased results or demonstrate some mitigation and reflection on any biased outcomes of the model. Include an analysis or disaggregated evaluation acknowledging any limitations in how the model handles faces of differing demographics.
- Communicate any face data distribution restrictions or limitations to model distribution in light of privacy or malicious use concerns.
- Acknowledge any potential harmful applications or depictions that could arise from the use of this technology in an expanded “broader impacts and ethical considerations” section.
We appreciate the cooperation of the authors in this process, and we hope that these adjustments and further reflection will improve the overall quality of the work. We hope authors can commit to these improvements as a requirement for inclusion at this conference.
The original meta-review from the AC follows.
----
This paper proposes a model for synthesizing faces with diverse styles. The model can generate arbitrary stylized faces and their natural photo versions at the same time. In general, many reviewers find the idea interesting and the results encouraging. There are concerns regarding the technical novelty and comparisons against Swapping Autoencoder and Toonify. The rebuttal addressed most of the concerns and clarified the difference between the proposed work vs. Toonify and Swapping Autoencoder. The AC agreed with the reviewers’ consensus and recommended accepting the paper.
| val | [
"qM8BfvJQOm",
"sQzV1zUyOYO",
"F4FCpJMeRJj",
"izOoupLB3g5",
"M49E0cmmSN",
"C-gAMPv5lR",
"pqgQ0XcUV-Z",
"-0KkEYNS6W",
"I_6mgBO1IC",
"CxsHe909yIy",
"sv3Zcz8_j9",
"MelmV-dqPDY"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes a method to synthesize faces with arbitrary artistic styles, where the style can be obtained from random samples or a reference style of a real face image. To achieve this, the authors introduce a style encoder to predict the style latent of a reference image. A face latent which controls the st... | [
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6
] | [
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_gISH-80g05u",
"nips_2021_gISH-80g05u",
"M49E0cmmSN",
"I_6mgBO1IC",
"CxsHe909yIy",
"-0KkEYNS6W",
"MelmV-dqPDY",
"sv3Zcz8_j9",
"sQzV1zUyOYO",
"qM8BfvJQOm",
"nips_2021_gISH-80g05u",
"nips_2021_gISH-80g05u"
] |
nips_2021_fOaks7LY5R | Differentially Private Model Personalization | Prateek Jain, John Rush, Adam Smith, Shuang Song, Abhradeep Guha Thakurta | accept | This paper analyzes a setting in which multiple users, each with their own dataset, wish to collaborate so as to learn better personalized model, while maintaining the privacy of their data. In the framework analyzed, all users share a common, low-dimensional embedding model, the output of which is used as the input for each personalized model. The paper presents two algorithms, based on two DP mechanisms, and proves that they are $(\epsilon, \delta)$-DP while also providing bounds on the excess population risk w.r.t. non-private learning -- i.e., the price paid for private learning.
This is a very interesting and timely topic at the intersection of federated learning, on-device personalization and differential privacy. The theoretical results are compelling and clearly presented. The writing is OK, although I noticed a few typos in the intro -- mostly missing articles. The reviews ended up being unanimously positive. One reviewer was originally against acceptance, but was later convinced to change their score by the authors' response.
I am recommending this paper for a spotlight because I believe that DP personalization is an important emerging topic that has not yet received much attention in the community. However, I recognize that the paper did not receive very high scores, so I am comfortable with the SAC/PCs downgrading it to a poster if need be.
Note that the submitted manuscript does not contain experimental results, but the authors provided some during discussion at the request of a reviewer. I encourage the authors to include these in the paper. (Experiments may strengthen the case for a spotlight -- or even a full oral.)
I also encourage the authors to give the paper a close read for grammar, syntax and clarity, as there were multiple complaints about the paper's clarity, and I myself think the writing could be polished. | val | [
"22gLdqg1rVd",
"gLBmeFbi8lK",
"ptpJmiaqbLl",
"jAZuDL26D_h",
"qddyXKo2KgR",
"U1LcUxzrqxh",
"0-fz36wixBS",
"Y02sSR9TvqG",
"tNJmszQSOk",
"wLIkTaAjfh7",
"KFHYXOSqq5w"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the clarifications",
" Thank you for the response and clarification. The response clarifies all of my concerns. ",
"\nThe paper presents an algorithm for differentially private (DP) learning of personalized models. A set of users with their own private data collaborate to privately learn a good emb... | [
-1,
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
-1,
2,
4,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"Y02sSR9TvqG",
"0-fz36wixBS",
"nips_2021_fOaks7LY5R",
"nips_2021_fOaks7LY5R",
"U1LcUxzrqxh",
"jAZuDL26D_h",
"wLIkTaAjfh7",
"ptpJmiaqbLl",
"KFHYXOSqq5w",
"nips_2021_fOaks7LY5R",
"nips_2021_fOaks7LY5R"
] |
nips_2021_fqfHJqNy_uY | Rates of Estimation of Optimal Transport Maps using Plug-in Estimators via Barycentric Projections | NABARUN DEB, Promit Ghosal, Bodhisattva Sen | accept | All the reviewers insisted on the high quality of the theoretical results, even if some numerical simulations would have been welcome (these could have been provided in the supplementary material). They also appreciated the quality of the rebuttal, which clarified some important questions. Some raised the issue that the part of the proof in the supplementary was lacking clarity and details, and this should be improved in the final version. I thus strongly urge the authors to take into account the feedback of the reviewers to improve the quality of the paper. For these reasons (lacks of numerical simulation and clarity of the proofs), after a discussion with the reviewers and in agreement with them, I decided to support acceptance but not for a spotlight. | train | [
"iYjCuPs0pO4",
"zvi6Q2spw-c",
"QZniXXKPl3P",
"sxJhRSBcqS",
"xHo3FX7as0",
"ND5BT5Sq09M",
"7Uqy8OVgvEu",
"Nwh2lvwrn1v",
"Nwy-26viXxX",
"QK2grX-xDhZ",
"8VriIeQtae",
"U1e2JdBLCs",
"PhURqAS_fhf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the detailed rebuttal. All my main concerns were addressed here. This paper is a valuable contribution to the field nonetheless I invite the authors to go make their proofs more accessible and self contained. A detailed sketch of proof could also help the reader grasp the reasoning behind the demonstra... | [
-1,
-1,
5,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
-1,
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"Nwh2lvwrn1v",
"7Uqy8OVgvEu",
"nips_2021_fqfHJqNy_uY",
"QK2grX-xDhZ",
"nips_2021_fqfHJqNy_uY",
"QZniXXKPl3P",
"PhURqAS_fhf",
"8VriIeQtae",
"U1e2JdBLCs",
"xHo3FX7as0",
"nips_2021_fqfHJqNy_uY",
"nips_2021_fqfHJqNy_uY",
"nips_2021_fqfHJqNy_uY"
] |
nips_2021_kiWRlrbVzSM | Robust Generalization despite Distribution Shift via Minimum Discriminating Information | Training models that perform well under distribution shifts is a central challenge in machine learning. In this paper, we introduce a modeling framework where, in addition to training data, we have partial structural knowledge of the shifted test distribution. We employ the principle of minimum discriminating information to embed the available prior knowledge, and use distributionally robust optimization to account for uncertainty due to the limited samples. By leveraging large deviation results, we obtain explicit generalization bounds with respect to the unknown shifted distribution. Lastly, we demonstrate the versatility of our framework by demonstrating it on two rather distinct applications: (1) training classifiers on systematically biased data and (2) off-policy evaluation in Markov Decision Processes.
| accept | Authors propose a DRO formulation that incorporates prior information in the form of moments of functionals, using minimum discriminating information. I agree with the reviewers that the new formulation is a meaningful contribution.
One remaining concern is that prior information studied in the experiments is somewhat contrived. The narrative of the paper will be substantially stronger if authors can demonstrate their method on a realistic example where the prior information presents naturally. | train | [
"kx4LrAHk5Uw",
"sWu8VqD16w_",
"71TRS3wVQJR",
"8rquDrw3Zoi",
"M4va3Q2AjTF",
"EBQaswqoIxW",
"DxRgfKLFZcM",
"iwx7ptOnHla",
"3B2NsymnFoX",
"36PRoAtcmVu",
"qdFRah6InpQ",
"TUgN_dO-rq4"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a distributionally robust optimization (DRO) framework for learning models that can generalize well under distribution shifts, when partial knowledge about the nature of the shift is available. As opposed to classical domain adaptation methods, the proposed method does not require data from the... | [
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_kiWRlrbVzSM",
"nips_2021_kiWRlrbVzSM",
"8rquDrw3Zoi",
"3B2NsymnFoX",
"EBQaswqoIxW",
"DxRgfKLFZcM",
"TUgN_dO-rq4",
"kx4LrAHk5Uw",
"sWu8VqD16w_",
"qdFRah6InpQ",
"nips_2021_kiWRlrbVzSM",
"nips_2021_kiWRlrbVzSM"
] |
nips_2021_-tVD13hOsQ3 | Soft Calibration Objectives for Neural Networks | Optimal decision making requires that classifiers produce uncertainty estimates consistent with their empirical accuracy. However, deep neural networks are often under- or over-confident in their predictions. Consequently, methods have been developed to improve the calibration of their predictive uncertainty both during training and post-hoc. In this work, we propose differentiable losses to improve calibration based on a soft (continuous) version of the binning operation underlying popular calibration-error estimators. When incorporated into training, these soft calibration losses achieve state-of-the-art single-model ECE across multiple datasets with less than 1% decrease in accuracy. For instance, we observe an 82% reduction in ECE (70% relative to the post-hoc rescaled ECE) in exchange for a 0.7% relative decrease in accuracy relative to the cross entropy baseline on CIFAR-100.When incorporated post-training, the soft-binning-based calibration error objective improves upon temperature scaling, a popular recalibration method. Overall, experiments across losses and datasets demonstrate that using calibration-sensitive procedures yield better uncertainty estimates under dataset shift than the standard practice of using a cross entropy loss and post-hoc recalibration methods.
| accept | The authors' response has addressed all the reviewers' concerns, and all the reviewers' final score was "weak accept", which is borderline. It seems that main issue was that they were not completely sure how the different design choices are related to the final performance, which would make this method somewhat less useful to practitioners. For example, Reviewer CL78 summarized:
It seems incorporating the soft-calibration objectives helps to improve the calibration during training. I am not getting a higher score because most of the paper is devoted to SB-ECE, while the most successful secondary loss is S-AvUC. However, there is not much theoretical support (except the previous work under the SVI setting) for different design choices made by the authors (I am not much familiar with SVI). Furthermore, there are some combinations of primary losses that have a great impact on the final results. Although the soft-calibration objectives can improve the calibration in most cases, it might not be easy for a practitioner to summarize which combinations of primary and secondary loss should be selected in advance. | train | [
"NJ9h_EuWok2",
"D5ooYfTzZt8",
"o08JOpH3Tp9",
"zlHGgw5MLGe",
"TAk8sLKuWJm",
"tYJrfPYDqZr",
"8c3ay89vfK6",
"wPsx54q9oJ",
"RF5jNY_H_O3",
"37XZbXziydX",
"omn97a8a5Am",
"PX-9JpmB0og",
"J9ZehOtgpV",
"HdxEgv_ntY_"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thanks for your review and for the follow-up response. This helped us improve the paper greatly.",
" Thanks for your review and for your responses to our rebuttal. This helped us improve the paper greatly. We will make sure to address all your suggestions in the paper.\n\nYou make a good point about the cost co... | [
-1,
-1,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
-1,
4,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"o08JOpH3Tp9",
"tYJrfPYDqZr",
"PX-9JpmB0og",
"wPsx54q9oJ",
"nips_2021_-tVD13hOsQ3",
"omn97a8a5Am",
"nips_2021_-tVD13hOsQ3",
"RF5jNY_H_O3",
"37XZbXziydX",
"J9ZehOtgpV",
"TAk8sLKuWJm",
"HdxEgv_ntY_",
"8c3ay89vfK6",
"nips_2021_-tVD13hOsQ3"
] |
nips_2021_Kc2529RIhJV | Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models | Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification. While many deterministic learning algorithms have been designed based on numerical integration via the adjoint method, many downstream tasks such as active learning, exploration in reinforcement learning, robust control, or filtering require accurate estimates of predictive uncertainties. In this work, we propose a novel approach towards estimating epistemically uncertain neural ODEs, avoiding the numerical integration bottleneck. Instead of modeling uncertainty in the ODE parameters, we directly model uncertainties in the state space. Our algorithm distributional gradient matching (DGM) jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss. Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
| accept | Even if there was some spread in the review scores which called for discussion among the reviewers, in the end, the reviewer consensus was in favour of accepting this paper. Please, make sure to address their concerns in the reviews in the camera-ready. The concerns related to clarity, presentation, and notation (which you also comment on in your response) are of particular importance as these help ensure impact of your work.
| train | [
"Ym8-p6-lqXA",
"3HyfSG7xGf",
"dlf0NaEeor",
"SgIdBbjVkQy",
"YyW1-iUIhVY",
"cbpdx4lq3KQ",
"LBEjbSuSNwa",
"5api66R50Mw",
"hh-1VwTihb",
"-hy9ErvAeJ",
"u36WDdRciKa",
"Jy17v_BmgyP",
"3p4Y79XNBB",
"qvc-IDiHqYf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a novel approach to model and capture the uncertainty of dynamical systems by learning from trajectory data. The method is composed of a Gaussian process smoother model, used for predictions, and a neural dynamics model, used for training. The novelty in the method comes from the use of the dis... | [
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5
] | [
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3
] | [
"nips_2021_Kc2529RIhJV",
"u36WDdRciKa",
"nips_2021_Kc2529RIhJV",
"YyW1-iUIhVY",
"5api66R50Mw",
"LBEjbSuSNwa",
"hh-1VwTihb",
"qvc-IDiHqYf",
"dlf0NaEeor",
"3p4Y79XNBB",
"Ym8-p6-lqXA",
"nips_2021_Kc2529RIhJV",
"nips_2021_Kc2529RIhJV",
"nips_2021_Kc2529RIhJV"
] |
nips_2021_c34cJa9uZn | Shaping embodied agent behavior with activity-context priors from egocentric video | Complex physical tasks entail a sequence of object interactions, each with its own preconditions -- which can be difficult for robotic agents to learn efficiently solely through their own experience. We introduce an approach to discover activity-context priors from in-the-wild egocentric video captured with human worn cameras. For a given object, an activity-context prior represents the set of other compatible objects that are required for activities to succeed (e.g., a knife and cutting board brought together with a tomato are conducive to cutting). We encode our video-based prior as an auxiliary reward function that encourages an agent to bring compatible objects together before attempting an interaction. In this way, our model translates everyday human experience into embodied agent skills. We demonstrate our idea using egocentric EPIC-Kitchens video of people performing unscripted kitchen activities to benefit virtual household robotic agents performing various complex tasks in AI2-iTHOR, significantly accelerating agent learning.
| accept | This paper considers an interesting problem of aiming to use egocentric human videos as a prior for guiding the behavior of an embodied agent completing tasks in AI2-iTHOR virtual environment. All of the reviews are positive about the paper, and the author response also helped address several points of feedback from the reviewers. The authors are strongly encouraged to incorporate the feedback and the changes mentioned in the author response into the revised paper. | train | [
"uKLvIwdMwoa",
"iBhfKGm6k91",
"LpSqScWLFQZ",
"Ha--D4Y4vK",
"H7EsL3dxSqH",
"WN4enXoIpG",
"0iYuhZVP1g",
"8nnEFdSRbQ-",
"kgcWO8nk8h",
"rIrP1VZJ5j",
"EUleQILh30A",
"3UAbA12kqUf",
"ADaaSIWzrFh",
"NhxiFVyBZqr",
"N4fDtJFVIxJ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" Thank you for addressing my comments.",
"The paper presented a novel method that extracts object priors from egocentric videos to guide the learning of a robotic agent to interact with objects. The key idea is to model the presence of objects and their co-occurrence from naturalistic activities in first person ... | [
-1,
8,
8,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"EUleQILh30A",
"nips_2021_c34cJa9uZn",
"nips_2021_c34cJa9uZn",
"8nnEFdSRbQ-",
"nips_2021_c34cJa9uZn",
"kgcWO8nk8h",
"NhxiFVyBZqr",
"rIrP1VZJ5j",
"H7EsL3dxSqH",
"NhxiFVyBZqr",
"N4fDtJFVIxJ",
"LpSqScWLFQZ",
"iBhfKGm6k91",
"LpSqScWLFQZ",
"nips_2021_c34cJa9uZn"
] |
nips_2021_tJ_CO8orSI | Adjusting for Autocorrelated Errors in Neural Networks for Time Series | An increasing body of research focuses on using neural networks to model time series. A common assumption in training neural networks via maximum likelihood estimation on time series is that the errors across time steps are uncorrelated. However, errors are actually autocorrelated in many cases due to the temporality of the data, which makes such maximum likelihood estimations inaccurate. In this paper, in order to adjust for autocorrelated errors, we propose to learn the autocorrelation coefficient jointly with the model parameters. In our experiments, we verify the effectiveness of our approach on time series forecasting. Results across a wide range of real-world datasets with various state-of-the-art models show that our method enhances performance in almost all cases. Based on these results, we suggest empirical critical values to determine the severity of autocorrelated errors. We also analyze several aspects of our method to demonstrate its advantages. Finally, other time series tasks are also considered to validate that our method is not restricted to only forecasting.
| accept | All reviewers are overall positive, the paper addresses a problem that is important, and the authors have provided thoughtful and useful replies to the reviewers.
The paper proposes an elegant way to deal with autocorrelated errors in time series usable with a variety of existing neural nets. The idea is to learn the net’s parameters jointly with an autocorrelation coefficient parameter. The paper combines insights from how autocorrelation error is dealt with in econometrics with deep learning for time series analysis. The method may be a major advance, since it can improve many existing neural time series models. | train | [
"OJ_Pmajp7G",
"67q4Gz3psWm",
"j6NtXQ2TNKU",
"-OD_nkVp6k9",
"QR7cnCYsiSR",
"wHkJVGkODqA",
"RRW7GrP3Mm7",
"g0sy9s1Q7Yz",
"G3Jk8647ByW",
"EIRt730O-pn"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for raising the score. We very much appreciate your recognition. Here, we provide additional responses to your latest concerns.\n\nResponse to Q1: \n\nIn general, yes, we knew that likelihood in DeepAR is flexible. But the DeepAR we refer to in the original rebuttal is the one in the original ... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
2,
5,
2
] | [
"j6NtXQ2TNKU",
"nips_2021_tJ_CO8orSI",
"RRW7GrP3Mm7",
"EIRt730O-pn",
"G3Jk8647ByW",
"g0sy9s1Q7Yz",
"67q4Gz3psWm",
"nips_2021_tJ_CO8orSI",
"nips_2021_tJ_CO8orSI",
"nips_2021_tJ_CO8orSI"
] |
nips_2021_KRODJAa6pzE | A Geometric Analysis of Neural Collapse with Unconstrained Features | We provide the first global optimization landscape analysis of Neural Collapse -- an intriguing empirical phenomenon that arises in the last-layer classifiers and features of neural networks during the terminal phase of training. As recently reported by Papyan et al., this phenomenon implies that (i) the class means and the last-layer classifiers all collapse to the vertices of a Simplex Equiangular Tight Frame (ETF) up to scaling, and (ii) cross-example within-class variability of last-layer activations collapses to zero. We study the problem based on a simplified unconstrained feature model, which isolates the topmost layers from the classifier of the neural network. In this context, we show that the classical cross-entropy loss with weight decay has a benign global landscape, in the sense that the only global minimizers are the Simplex ETFs while all other critical points are strict saddles whose Hessian exhibit negative curvature directions. Our analysis of the simplified model not only explains what kind of features are learned in the last layer, but also shows why they can be efficiently optimized, matching the empirical observations in practical deep network architectures. These findings provide important practical implications. As an example, our experiments demonstrate that one may set the feature dimension equal to the number of classes and fix the last-layer classifier to be a Simplex ETF for network training, which reduces memory cost by over 20% on ResNet18 without sacrificing the generalization performance. The source code is available at https://github.com/tding1/Neural-Collapse.
| accept | This paper provides an interesting well presented study of neural collapse, mostly focusing on solid theoretical results, but also demonstrating their empirical implication. All reviewers agree the paper should be accepted, commending the high quality of the presentation in the paper. Further, it is clear from the reviews and the paper that this is an important problem, and the results are sufficiently significant to be of interest to a wide audience at NeurIPS (as evidence, two reviewers marked the paper as top 50% of accepted NeurIPS papers with high or absolute confidence). Therefore, I recommend acceptance as a spotlight.
To the authors: please read carefully the reviewer comments and follow up on your responses with appropriate revisions to the paper. | train | [
"Uf_75S93nOw",
"mtT2vUUkWD",
"FRFksYoVT3z",
"odmBjvx0po4",
"GvhChxzOLJG",
"0PJENwZ_Cji",
"oeevZN8two4",
"s1pdZ6mFxUh",
"aV9iX8eLw0f",
"9CQssFBlHv_",
"0S0lwuSkxiy",
"ZD6ItlULhN3",
"DfCChIobKu1",
"hKo58swZZzW",
"Nq-qQAjB-ge"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Neural collapse is an intriguing phenomenon that has been observed in neural network classifiers as the training error goes to 0 where the last layer of the neural network has collapsed input points to corners of a simplex, thereby minimizing variability, and maximizing separability. While this phenomenon seems we... | [
8,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
8
] | [
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
5
] | [
"nips_2021_KRODJAa6pzE",
"odmBjvx0po4",
"nips_2021_KRODJAa6pzE",
"GvhChxzOLJG",
"0PJENwZ_Cji",
"DfCChIobKu1",
"0S0lwuSkxiy",
"Nq-qQAjB-ge",
"9CQssFBlHv_",
"oeevZN8two4",
"nips_2021_KRODJAa6pzE",
"FRFksYoVT3z",
"FRFksYoVT3z",
"Uf_75S93nOw",
"nips_2021_KRODJAa6pzE"
] |
nips_2021_MudT1U2eIY | NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild | Recent history has seen a tremendous growth of work exploring implicit representations of geometry and radiance, popularized through Neural Radiance Fields (NeRF). Such works are fundamentally based on a (implicit) {\em volumetric} representation of occupancy, allowing them to model diverse scene structure including translucent objects and atmospheric obscurants. But because the vast majority of real-world scenes are composed of well-defined surfaces, we introduce a {\em surface} analog of such implicit models called Neural Reflectance Surfaces (NeRS). NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions. Even more importantly, surface parameterizations allow NeRS to learn (neural) bidirectional surface reflectance functions (BRDFs) that factorize view-dependent appearance into environmental illumination, diffuse color (albedo), and specular “shininess.” Finally, rather than illustrating our results on synthetic scenes or controlled in-the-lab capture, we assemble a novel dataset of multi-view images from online marketplaces for selling goods. Such “in-the-wild” multi-view image sets pose a number of challenges, including a small number of views with unknown/rough camera estimates. We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions. We hope that NeRS serves as a first step toward building scalable, high-quality libraries of real-world shape, materials, and illumination.
| accept | This paper received borderline reject initial reviews, with the reviewers asking for comparisons with IDR etc. and more discussion on the environment maps. The authors provided significant new results which addressed most of the reviewers' concerns. Limitations remain both on the accuracy of the illumination/reflectance decomposition and on the range of shapes that the method handles.
However, based on the current interest in the area, the novel experimental set-up and dataset, and the fact that the authors have demonstrated better results in the case of a challenging sparse-view set-up, the meta-reviewer recommends acceptance. The authors are urged to include the comparisons from the rebuttal in the final paper. | train | [
"14lQhu5ZZ7",
"Vcj8bjcuIUq",
"F4pfqiFaU4p",
"BoIi-bI2pcG",
"g-UCm35M6-W",
"4T_Az_7ksZ1",
"Yk07t51qwHo",
"_Buy1xQabuu",
"kMAlQZZt6wp",
"768ItGd_1Ja",
"-X7TCPssTxa",
"F5luCZpi0MA",
"JIphCx4wEm4",
"LZu-rRdCA_4",
"wPonYCeF9Ao",
"hrD_0FC0ti",
"lLoVQcWX33D",
"SlOxFYiJna2"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate the authors' effort in providing the detailed responses. My rating remains borderline accept. The additional comparison against IDR shows significantly better shape and texture reconstruction. Other additional results are also very insightful.\n\nThe novelty of the technical components is quite limit... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"lLoVQcWX33D",
"LZu-rRdCA_4",
"BoIi-bI2pcG",
"g-UCm35M6-W",
"SlOxFYiJna2",
"kMAlQZZt6wp",
"lLoVQcWX33D",
"hrD_0FC0ti",
"nips_2021_MudT1U2eIY",
"kMAlQZZt6wp",
"SlOxFYiJna2",
"kMAlQZZt6wp",
"lLoVQcWX33D",
"hrD_0FC0ti",
"nips_2021_MudT1U2eIY",
"nips_2021_MudT1U2eIY",
"nips_2021_MudT1U2e... |
nips_2021_LY6qkvd71Td | Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning | Contrastive self-supervised learning (CSL) has attracted increasing attention for model pre-training via unlabeled data. The resulting CSL models provide instance-discriminative visual features that are uniformly scattered in the feature space. During deployment, the common practice is to directly fine-tune CSL models with cross-entropy, which, however, may not be the best strategy in practice. Although cross-entropy tends to separate inter-class features, the resulting models still have limited capability for reducing intra-class feature scattering that exists in CSL models. In this paper, we investigate whether applying contrastive learning to fine-tuning would bring further benefits, and analytically find that optimizing the contrastive loss benefits both discriminative representation learning and model optimization during fine-tuning. Inspired by these findings, we propose Contrast-regularized tuning (Core-tuning), a new approach for fine-tuning CSL models. Instead of simply adding the contrastive loss to the objective of fine-tuning, Core-tuning further applies a novel hard pair mining strategy for more effective contrastive fine-tuning, as well as smoothing the decision boundary to better exploit the learned discriminative feature space. Extensive experiments on image classification and semantic segmentation verify the effectiveness of Core-tuning.
| accept | This paper investigates ways to improve the training of self-supervised models on downstream tasks by adding a contrastive loss to the fine-tuning process as well as augmenting this loss with methods for hard negative and hard positive pair mining. The authors show theoretically that a supervised contrastive loss benefits performance and show empirically that this method results in improved performance on a number of downstream datasets. Reviewers agreed that the clarity and quality of the work was high, though there were some concerns regarding the novelty of the proposed approach. While I agree with reviewers that this paper combines several previously known innovations rather than introducing a completely novel approach, I do not think this will limit the impact of the paper, as there is substantial value in combining previously independent observations. As such, I recommend the paper be accepted as a poster. | train | [
"2262b8MRzHH",
"JvM_k4ntE6E",
"bxUl6heSYzD",
"08lVQdF6KLJ",
"KSC2lnnVh-s",
"HNXh0RvLWf"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your constructive review. We are glad to see that the overall quality, extensive experiments and good writing of this paper are appreciated. We address the main concerns point by point below.\n\n***\n**Q1. The novelty of this work and its difference to existing studies [1,2,3].**\n\nA1. In the submi... | [
-1,
-1,
-1,
6,
6,
8
] | [
-1,
-1,
-1,
4,
5,
5
] | [
"KSC2lnnVh-s",
"08lVQdF6KLJ",
"HNXh0RvLWf",
"nips_2021_LY6qkvd71Td",
"nips_2021_LY6qkvd71Td",
"nips_2021_LY6qkvd71Td"
] |
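The record above is about adding a supervised contrastive term to cross-entropy during fine-tuning. Below is a generic NumPy sketch of that combination, in the spirit of a standard supervised contrastive loss; it omits Core-tuning's hard-pair mining and boundary smoothing, and the toy data, weights, and names are all ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, C, tau, lam = 8, 5, 2, 0.1, 0.5
z = rng.normal(size=(n, d))                  # backbone features for a mini-batch
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])       # labels (each class needs >= 2 samples)
W = rng.normal(size=(d, C))                  # linear classifier head

def log_softmax(x):
    m = x.max(axis=1, keepdims=True)
    return x - m - np.log(np.exp(x - m).sum(axis=1, keepdims=True))

ce = -log_softmax(z @ W)[np.arange(n), y].mean()          # usual cross-entropy

# Supervised contrastive term on L2-normalized features: pull same-class
# pairs together, push different-class pairs apart.
zn = z / np.linalg.norm(z, axis=1, keepdims=True)
sim = zn @ zn.T / tau
np.fill_diagonal(sim, -np.inf)               # never contrast a sample with itself
logp = log_softmax(sim)                      # row-wise log-probabilities
pos = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
con = -(np.where(pos, logp, 0.0).sum(axis=1) / pos.sum(axis=1)).mean()

print(f"CE = {ce:.3f}, contrastive = {con:.3f}, total = {ce + lam * con:.3f}")
```

In actual fine-tuning, the two terms would be backpropagated jointly through the backbone rather than evaluated once as above.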
nips_2021_AADxnPG-PR | Discovery of Options via Meta-Learned Subgoals | Temporal abstractions in the form of options have been shown to help reinforcement learning (RL) agents learn faster. However, despite prior work on this topic, the problem of discovering options through interaction with an environment remains a challenge. In this paper, we introduce a novel meta-gradient approach for discovering useful options in multi-task RL environments. Our approach is based on a manager-worker decomposition of the RL agent, in which a manager maximises rewards from the environment by learning a task-dependent policy over both a set of task-independent discovered-options and primitive actions. The option-reward and termination functions that define a subgoal for each option are parameterised as neural networks and trained via meta-gradients to maximise their usefulness. Empirical analysis on gridworld and DeepMind Lab tasks shows that: (1) our approach can discover meaningful and diverse temporally-extended options in multi-task RL domains, (2) the discovered options are frequently used by the agent while learning to solve the training tasks, and (3) the discovered options help a randomly initialised manager learn faster in completely new tasks.
| accept |
The reviewers thought this was an interesting paper considering how to combine meta-gradient methods with HRL, and the empirical experiments were convincing. In the internal discussions, two reviewers suggested the paper would be further strengthened by a more detailed discussion on the theoretical side. The authors are encouraged to consider the reviewers’ feedback in their revision for the camera-ready. | test | [
"foj-fcc2Z0",
"K7oq7Z67LHj",
"bJxukHngV8C",
"M6i8rOGK0o",
"tpIv7g-Khg",
"08kps4X4Et5",
"lRqlEiIGyY",
"Jpycb6nDPES",
"OkeTMNFXoBd",
"C6qM1XCyt7h",
"F4pn4uxCDSd",
"gzNzubtE6Xh",
"hOM_l5mbOkK",
"0SDB7RaxQqv",
"NwSOlDXmyJO",
"D9YyDpoVrTY",
"lKtSbABIUr",
"JEkpTmTV3IA",
"v5l6Wdwzt_U",
... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"... | [
"Authors propose a meta-learning algorithm for learning temporarily extended actions within the options framework. In addition to the commonly used option modules defined in the original options framework (high-level policy, terminations, sub-policies) the algorithm also uses an aditional learned option-rewards mod... | [
4,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
8
] | [
4,
5,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"nips_2021_AADxnPG-PR",
"nips_2021_AADxnPG-PR",
"0SDB7RaxQqv",
"nips_2021_AADxnPG-PR",
"F4pn4uxCDSd",
"lRqlEiIGyY",
"0SDB7RaxQqv",
"0SDB7RaxQqv",
"hOM_l5mbOkK",
"hOM_l5mbOkK",
"D9YyDpoVrTY",
"EsSIxeiwU_",
"v5l6Wdwzt_U",
"lKtSbABIUr",
"D9YyDpoVrTY",
"FUa9TgMeYzM",
"K7oq7Z67LHj",
"7W... |
nips_2021_prVxS4W_ds | Near-Optimal Lower Bounds For Convex Optimization For All Orders of Smoothness | Ankit Garg, Robin Kothari, Praneeth Netrapalli, Suhail Sherif | accept | Thank you for submitting your paper to NeurIPS'21.
Despite some minor concerns that should be addressed in the camera-ready version, the reviewers agreed that the paper is worthy of acceptance and presentation at NeurIPS'21. Congratulations! | train | [
"Mopt7P3BfsE",
"dz3TCwF_tOZ",
"BC5WijWo_rT",
"aOn7oh3tzTD",
"99zIJELzsi4",
"qSop2F1R9aR",
"mPqj457fIAD",
"OWrZnQI_8Wb"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" After the response phase, my opinion remains that this is a strong paper clearly deserving of acceptance.\n\nMinor note: In the abstract, \"PLMR\" should be \"PMLR\".",
" I have read the authors’ rebuttals and other reviewers' comments, and keep my score.",
" Thanks for your comments. The motivation for our w... | [
-1,
-1,
-1,
-1,
-1,
6,
9,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"aOn7oh3tzTD",
"99zIJELzsi4",
"qSop2F1R9aR",
"mPqj457fIAD",
"OWrZnQI_8Wb",
"nips_2021_prVxS4W_ds",
"nips_2021_prVxS4W_ds",
"nips_2021_prVxS4W_ds"
] |
nips_2021_w3x8K0M6sAz | Topology-Imbalance Learning for Semi-Supervised Node Classification | The class imbalance problem, as an important issue in learning node representations, has drawn increasing attention from the community. Although the imbalance considered by existing studies stems from the unequal quantity of labeled examples in different classes (quantity imbalance), we argue that graph data expose a unique source of imbalance from the asymmetric topological properties of the labeled nodes, i.e., labeled nodes are not equal in terms of their structural role in the graph (topology imbalance). In this work, we first probe the previously unknown topology-imbalance issue, including its characteristics, causes, and threats to semi-supervised node classification learning. We then provide a unified view for jointly analyzing the quantity- and topology-imbalance issues by considering the node influence shift phenomenon with the Label Propagation algorithm. In light of our analysis, we devise an influence conflict detection–based metric Totoro to measure the degree of graph topology imbalance and propose a model-agnostic method ReNode to address the topology-imbalance issue by re-weighting the influence of labeled nodes adaptively based on their relative positions to class boundaries. Systematic experiments demonstrate the effectiveness and generalizability of our method in relieving the topology-imbalance issue and promoting semi-supervised node classification. Further analysis unveils the varied sensitivity of different graph neural networks (GNNs) to topology imbalance, which may serve as a new perspective in evaluating GNN architectures.
| accept | This paper investigates the issue that nodes in a graph may have different local topology. This imbalance of topology may induce bias in the node classification task. The paper proposes a solution inspired by classic label propagation. Nodes far from the decision boundary are assigned heavier weights in learning, whereas nodes near the decision boundary are considered less important.
While reviewers generally thought the paper does bring up an important and interesting problem, they did have many concerns that were not addressed after rebuttal.
1) the problem statement (topology imbalance) lacks a rigorous / formal definition. This makes it harder to appreciate the proposed loss.
2) the assumption of homophily may be too strong.
3) the proposed approach is too simplistic: nodes near the decision boundary should be used smartly, not discarded.
| train | [
"dooxHte5qip",
"kjZT25Yxkc",
"efs67R0rGpj",
"4v2R7-3HNSh",
"CtufgUEe4q_",
"EXLKQCLB9Ny",
"f51DM7zcQ6K",
"qtP-eFDf87S",
"QWDREPkpAg0"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your clarification. The response clarify part of my concerns. However, the 'analysis' for loss function provided by the authors remain heuristics, lacking a systematic and convincing explanation. An advanced theoretical analysis may improve the overall quality of this work. I do like the experiment... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4
] | [
"CtufgUEe4q_",
"efs67R0rGpj",
"4v2R7-3HNSh",
"QWDREPkpAg0",
"f51DM7zcQ6K",
"qtP-eFDf87S",
"nips_2021_w3x8K0M6sAz",
"nips_2021_w3x8K0M6sAz",
"nips_2021_w3x8K0M6sAz"
] |
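To make the mechanics behind this record concrete: both the Totoro metric and the ReNode weighting are built on Label Propagation. The sketch below uses the Personalized-PageRank form of label propagation on a tiny homophilous graph, scores each labeled node by how much of its received influence comes from differently-labeled nodes, and turns that into a loss weight. This is our simplified reading of the idea, not the paper's exact formulas; the graph and names are ours.

```python
import numpy as np

# Two triangles of different classes, bridged by the edge (2, 3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))          # symmetric normalization
alpha = 0.85
# Personalized PageRank: P[v, u] is the influence node u exerts on node v.
P = (1 - alpha) * np.linalg.inv(np.eye(6) - alpha * A_hat)

labeled = [0, 1, 3, 5]
label = {0: 0, 1: 0, 3: 1, 5: 1}

for v in labeled:
    infl = P[v, labeled]
    other = np.array([label[u] != label[v] for u in labeled])
    conflict = infl[other].sum() / infl.sum()    # influence from the other class
    weight = 1.0 - conflict                      # down-weight boundary-ish nodes
    print(f"node {v}: conflict = {conflict:.3f}, training weight = {weight:.3f}")
```

These weights would multiply each labeled node's supervised loss term, so nodes closer to the topological class boundary count less.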
nips_2021_x9jS8pX3dkx | Gradient Inversion with Generative Image Prior | Federated Learning (FL) is a distributed learning framework, in which the local data never leaves clients’ devices to preserve privacy, and the server trains models on the data via accessing only the gradients of those local data. Without further privacy mechanisms such as differential privacy, this leaves the system vulnerable against an attacker who inverts those gradients to reveal clients’ sensitive data. However, a gradient is often insufficient to reconstruct the user data without any prior knowledge. By exploiting a generative model pretrained on the data distribution, we demonstrate that data privacy can be easily breached. Further, when such prior knowledge is unavailable, we investigate the possibility of learning the prior from a sequence of gradients seen in the process of FL training. We experimentally show that the prior in a form of generative model is learnable from iterative interactions in FL. Our findings demonstrate that additional mechanisms are necessary to prevent privacy leakage in FL.
| accept | The reviewers all agreed that this is an interesting paper and a worthy improvement over existing gradient inversion attacks against federated learning. I agree with the reviewers: the paper is clearly written and the many experimental results are worthy of publication. The paper will, at the very least, serve as an important motivation for further research on privacy in federated learning. I do caution the authors to take the suggestions made by the reviewers and the changes proposed in their rebuttal seriously and improve the manuscript in terms of related work and additional experiments.
It is also essential for the authors to implement what they promised in their rebuttal to the ethics reviewers in the final version of the paper. Critically, adding an experiment to demonstrate a potential fundamental defense mechanism and replacing human-face images in the paper's quantitative evaluation should be considered "mandatory" revisions. I agree with the ethics reviewers that the authors' response helps assuage the ethical concerns (assuming that changes will be made to the final version). | test | [
"EcVi90VepzF",
"NelWLIqSClT",
"L8Xx-s-neTO",
"FVEQRqRmIDz",
"uho59NR1Irp",
"MUz6aYUtwGC",
"tVe7w6Ax68W",
"Ya95wY357Hc",
"YQIDejELhyW",
"QbKH39aPdY",
"nn30qhTNgem",
"Grr7yfwYTRf",
"pVqs3MmSwf",
"lwzfc_4iBfr",
"H0kGzOMDsYS",
"DnUPoNSL-rs"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The contributions of this paper are mainly two-fold:\n1) Gradient inversion in alternative spaces (GIAS) that improves reconstruction with the help of the prior from generative models;\n2) Gradient inversion to meta-learn (GIML) which learns a generative model via inverting multiple gradients computed on the data.... | [
6,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
3,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_x9jS8pX3dkx",
"QbKH39aPdY",
"tVe7w6Ax68W",
"nips_2021_x9jS8pX3dkx",
"pVqs3MmSwf",
"YQIDejELhyW",
"QbKH39aPdY",
"FVEQRqRmIDz",
"nips_2021_x9jS8pX3dkx",
"nips_2021_x9jS8pX3dkx",
"H0kGzOMDsYS",
"DnUPoNSL-rs",
"FVEQRqRmIDz",
"EcVi90VepzF",
"nips_2021_x9jS8pX3dkx",
"nips_2021_x9j... |
nips_2021_ahYIlRBeCFw | Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification | Shiqi Wang, Huan Zhang, Kaidi Xu, Xue Lin, Suman Jana, Cho-Jui Hsieh, J. Zico Kolter | accept | The paper was well-received. The main idea is fairly simple, but the problem is important and the writing and empirical evaluation are solid. Based on the reviewers' advice, I am recommending acceptance. Please make sure to add to the final version the new results and the text on limitations and societal impact that you described in your author response. | train | [
"OpK7PrEnYkc",
"Ys61TbYJp9d",
"5Y-DiHBg4bP",
"hJP8CO2lxz",
"8aYi9tPBVq",
"jjB71Dx5jwQ",
"xD9OwpRwD4L",
"0Hm197le9MX",
"wz1e47LgQL6",
"rMhpFrBreus",
"C-7RlQwWgn3",
"db33Hmd_DuO"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer qX8M,\n\nWe thank you very much for updating your review and voting for accepting our paper. We really appreciate your support. We will make sure to add the paragraph discussing limitations to our paper.\n\nWe do want to respectfully point out that **our technical contributions are not incremental**... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"Ys61TbYJp9d",
"nips_2021_ahYIlRBeCFw",
"Ys61TbYJp9d",
"nips_2021_ahYIlRBeCFw",
"C-7RlQwWgn3",
"nips_2021_ahYIlRBeCFw",
"db33Hmd_DuO",
"rMhpFrBreus",
"Ys61TbYJp9d",
"nips_2021_ahYIlRBeCFw",
"nips_2021_ahYIlRBeCFw",
"nips_2021_ahYIlRBeCFw"
] |
nips_2021_NU69dglcsS | Autobahn: Automorphism-based Graph Neural Nets | We introduce Automorphism-based graph neural networks (Autobahn), a new family of graph neural networks. In an Autobahn, we decompose the graph into a collection of subgraphs and apply local convolutions that are equivariant to each subgraph's automorphism group. Specific choices of local neighborhoods and subgraphs recover existing architectures such as message passing neural networks. Our formalism also encompasses novel architectures: as an example, we introduce a graph neural network that decomposes the graph into paths and cycles. The resulting convolutions reflect the natural way that parts of the graph can transform, preserving the intuitive meaning of convolution without sacrificing global permutation equivariance. We validate our approach by applying Autobahn to molecular graphs, where it achieves results competitive with state-of-the-art message passing algorithms.
| accept | The paper proposes a novel structure-aware GNN architecture based on partitioning the graph into subgraphs.
The authors provided extensive responses to the reviewers' comments in the rebuttal and follow-up discussion. There was also an extensive discussion among the reviewers. The reviewers recognize the novelty of the paper. While after rebuttal and discussion some reviewers believe the paper still requires significant editing to improve its clarity, the AC is of the opinion that the novelty outweighs these drawbacks and would like to give the authors the benefit of the doubt, recommending acceptance in the hope that the authors will improve the presentation and introduce edits according to the reviewers' suggestions. | train | [
"zRM6H3dMOoy",
"hAzDtlSS3h",
"MlMEnZ7Thq",
"k7jBkx4IVjz",
"_pMwBKuFADF",
"R5fHLKHfE4G",
"Vzizdpr1sBa",
"Aez4XNOmtVK",
"dArnasRhe5",
"ofj_ulJvQnL",
"SWbynG2VWt6",
"kzeO_htSj1F",
"ZfWRfGY_b6",
"lYmK1vCaNV",
"vYVnOXTGWWz",
"pndht2pLLIp",
"nV5bFiBo7Sm",
"zEEVgz25oON",
"eDZvC2tfWqF"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
" We appreciate that the reviewer agrees with us about the novelty and significance of this work. \n\nHowever, we disagree with the criticism that our paper should be rejected because it is hard to access for those who are not familiar with this topic. Like any research article, we draw from a well-established bo... | [
-1,
-1,
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
2,
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"_pMwBKuFADF",
"k7jBkx4IVjz",
"nips_2021_NU69dglcsS",
"nV5bFiBo7Sm",
"nips_2021_NU69dglcsS",
"Vzizdpr1sBa",
"Aez4XNOmtVK",
"dArnasRhe5",
"eDZvC2tfWqF",
"SWbynG2VWt6",
"pndht2pLLIp",
"nips_2021_NU69dglcsS",
"lYmK1vCaNV",
"vYVnOXTGWWz",
"zEEVgz25oON",
"kzeO_htSj1F",
"MlMEnZ7Thq",
"ni... |
nips_2021_kgVJBBThdSZ | Data Augmentation Can Improve Robustness | Sylvestre-Alvise Rebuffi, Sven Gowal, Dan Andrei Calian, Florian Stimberg, Olivia Wiles, Timothy A. Mann | accept | Despite the different scores, all four reviewers agree on two main points:
A) The empirical results, in particular the non-trivial robustness gains, are interesting.
B) The explanations given for the robustness gains are only preliminary and do not account for all observed phenomena (e.g., why certain augmentations work better than others).
The reviewers then arrive at different scores based on how much importance they assign to each point.
Overall I agree with both points and see this as a borderline paper. The authors could have written a stronger paper by avoiding tenuous explanations and instead focusing on a thorough experimental evaluation. A paper with strong empirical results does not have to offer a comprehensive explanation for each observed phenomenon. Instead, it suffices to discuss possible hypotheses as directions for future work. If the authors want to provide explanations, they should be rigorously investigated with experiments designed specifically to test the explanations. Presenting explanations without corresponding experiments undermines the validity of the paper.
Nevertheless, the experimental results still outweigh the lack of rigor in the provided explanations for me and I recommend accepting the paper. For the final version, I suggest that the authors clearly separate experimental findings from potential explanations. The paper could benefit from de-emphasizing the latter, e.g., by moving some of the explanations to the appendix and instead moving more of the adversarial evaluation details to the main text. | train | [
"UnJPoFEAmum",
"8ZWp3MuMdxP",
"F4JPG923xoA",
"4k--SglHGb5",
"Mt2v3Y_XNm",
"3tdtTC9Z9C",
"eR9rAMX-WR-",
"To0wdONhXiX",
"CFGzM0zOCn",
"VolrZ89UH_A",
"O2k3rHHQpaq",
"c5RMtsRJ46U",
"AdIWwtz5Ubw",
"DzeMbNtdUB"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors combine adversarial training with popular data augmentation techniques. They discuss how model averaging and data augmentation can be combined to prevent robust overfitting. The authors evaluate their method on several datasets and show state-of-the-art results on CIFAR10. ### Quick justification of r... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
5
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
5
] | [
"nips_2021_kgVJBBThdSZ",
"F4JPG923xoA",
"CFGzM0zOCn",
"Mt2v3Y_XNm",
"VolrZ89UH_A",
"nips_2021_kgVJBBThdSZ",
"DzeMbNtdUB",
"UnJPoFEAmum",
"DzeMbNtdUB",
"AdIWwtz5Ubw",
"c5RMtsRJ46U",
"nips_2021_kgVJBBThdSZ",
"nips_2021_kgVJBBThdSZ",
"nips_2021_kgVJBBThdSZ"
] |
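One ingredient this record leans on, model weight averaging, is usually implemented as an exponential moving average (EMA) of the parameters that is evaluated in place of the raw weights. A generic sketch follows (not the paper's training loop; the SGD update here is a random-noise placeholder of our own):

```python
import numpy as np

rng = np.random.default_rng(0)
decay = 0.999
w = np.zeros(10)                   # live training weights (stand-in)
w_ema = w.copy()                   # smoothed copy used at evaluation time

for step in range(2000):
    w -= 0.01 * rng.normal(size=w.shape)          # placeholder for an SGD step
    w_ema = decay * w_ema + (1 - decay) * w       # EMA lags the noisy trajectory

print("||w - w_ema|| =", np.linalg.norm(w - w_ema).round(4))
```

The averaged weights change slowly, which is what damps the late-training robust overfitting discussed above.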
nips_2021_LaM6G4yrMy0 | Deep Explicit Duration Switching Models for Time Series | Many complex time series can be effectively subdivided into distinct regimes that exhibit persistent dynamics. Discovering the switching behavior and the statistical patterns in these regimes is important for understanding the underlying dynamical system. We propose the Recurrent Explicit Duration Switching Dynamical System (RED-SDS), a flexible model that is capable of identifying both state- and time-dependent switching dynamics. State-dependent switching is enabled by a recurrent state-to-switch connection and an explicit duration count variable is used to improve the time-dependent switching behavior. We demonstrate how to perform efficient inference using a hybrid algorithm that approximates the posterior of the continuous states via an inference network and performs exact inference for the discrete switches and counts. The model is trained by maximizing a Monte Carlo lower bound of the marginal log-likelihood that can be computed efficiently as a byproduct of the inference routine. Empirical results on multiple datasets demonstrate that RED-SDS achieves considerable improvement in time series segmentation and competitive forecasting performance against the state of the art.
| accept | This paper takes a nice idea (combining recurrent state-to-switch dependencies with semi-Markov duration distributions), pairs it with an amortized inference algorithm, describes the approach very well, and does a thorough set of experiments with reasonable baseline comparisons to recent methods. While it is a relatively simple idea, it is well executed and I think it will be a very nice contribution to the field.
In private discussion, Reviewers Naxb and GyB2 agreed that they would increase their scores to a 6, which would yield two 6s and a 7, for an average of 6.33. I strongly encourage the authors to make the promised changes in the final publication. | train | [
"fsHApPtKHS",
"hPwWzmmX7yH",
"u47aS5lBEeh",
"9TIwMvAavJ",
"5gSQbt_pl6H",
"EMQYFN16kqs",
"9FXC_qi8epO",
"w8nT5YfpSS",
"H1BafkAgda",
"AXKDg2I5un",
"Pn9doLfX2nX",
"UWinAh9C_Nx"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This is great to see! Thank you!",
" Preliminary results on the bouncing ball dataset with `x_dim=2` for RED-SDS are tabulated below along with the results reported in the paper (with `x_dim=4`).\n\n```\n+----------+-------------------------+-------------------------+\n| -- | Bouncing Ball (x_dim=4) | Bou... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"hPwWzmmX7yH",
"5gSQbt_pl6H",
"9TIwMvAavJ",
"5gSQbt_pl6H",
"w8nT5YfpSS",
"nips_2021_LaM6G4yrMy0",
"UWinAh9C_Nx",
"Pn9doLfX2nX",
"AXKDg2I5un",
"nips_2021_LaM6G4yrMy0",
"nips_2021_LaM6G4yrMy0",
"nips_2021_LaM6G4yrMy0"
] |
nips_2021_yRTebElmilN | Shared Independent Component Analysis for Multi-Subject Neuroimaging | We consider shared response modeling, a multi-view learning problem where one wants to identify common components from multiple datasets or views. We introduce Shared Independent Component Analysis (ShICA) that models each view as a linear transform of shared independent components contaminated by additive Gaussian noise. We show that this model is identifiable if the components are either non-Gaussian or have enough diversity in noise variances. We then show that in some cases multi-set canonical correlation analysis can recover the correct unmixing matrices, but that even a small amount of sampling noise makes Multiset CCA fail. To solve this problem, we propose to use joint diagonalization after Multiset CCA, leading to a new approach called ShICA-J. We show via simulations that ShICA-J leads to improved results while being very fast to fit. While ShICA-J is based on second-order statistics, we further propose to leverage non-Gaussianity of the components using a maximum-likelihood method, ShICA-ML, that is both more accurate and more costly. Further, ShICA comes with a principled method for shared components estimation. Finally, we provide empirical evidence on fMRI and MEG datasets that ShICA yields more accurate estimation of the components than alternatives.
| accept | Two reviewers recommend rejection and two reviewers recommend acceptance. After reading the reviews, the rebuttal, the internal discussion among reviewers and after my own reading of the paper, I believe this work provides a genuine contribution to ICA for multiple views and therefore I recommend Accept. The method provides identifiable components (under certain restrictions) which makes it robust, and the empirical section shows this advantage of the method and its usefulness in downstream tasks, e.g. reconstructions of left-out subjects in fMRI data. Two reviewers that recommend rejection have mainly asked for comparisons against other baselines. However, I believe that the authors have included suitable baselines when testing their model. I recommend the authors follow the recommendations of the reviewers regarding the presentation and fixing any typos. | val | [
"yFGU1vSDGAR",
"YMIPM7vd7T",
"A8CG5erjAPp",
"nfcwsY_VKn",
"s6hRF73jlxf",
"ZRjjgP4AxKE",
"lEfzT7JeQUH",
"LNwBh9PGHV"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose the shared independent component analysis which can account for shared components $\\mathbf{s}$ and individual differences through the noise signal $\\mathbf{n}$. Notably the noise can vary across subjects and views in magnitude whereas the shared components can both be Gaussian or non-Gaussian... | [
6,
6,
-1,
-1,
-1,
-1,
4,
5
] | [
4,
3,
-1,
-1,
-1,
-1,
5,
4
] | [
"nips_2021_yRTebElmilN",
"nips_2021_yRTebElmilN",
"yFGU1vSDGAR",
"LNwBh9PGHV",
"lEfzT7JeQUH",
"YMIPM7vd7T",
"nips_2021_yRTebElmilN",
"nips_2021_yRTebElmilN"
] |
nips_2021_CyZF4CLnQ8D | Shape from Blur: Recovering Textured 3D Shape and Motion of Fast Moving Objects | We address the novel task of jointly reconstructing the 3D shape, texture, and motion of an object from a single motion-blurred image. While previous approaches address the deblurring problem only in the 2D image domain, our proposed rigorous modeling of all object properties in the 3D domain enables the correct description of arbitrary object motion. This leads to significantly better image decomposition and sharper deblurring results. We model the observed appearance of a motion-blurred object as a combination of the background and a 3D object with constant translation and rotation. Our method minimizes a loss on reconstructing the input image via differentiable rendering with suitable regularizers. This enables estimating the textured 3D mesh of the blurred object with high fidelity. Our method substantially outperforms competing approaches on several benchmarks for fast moving objects deblurring. Qualitative results show that the reconstructed 3D mesh generates high-quality temporal super-resolution and novel views of the deblurred object.
| accept | The main task of this work is fast motion deblurring, and yet the paper claims to jointly estimate 3D shape and texture along with the motion of the object. The paper received positive reviews: 7, 9, 6, 6. The paper is written well, with impressive results compared to previous works. However, the main limitation of the paper is the lack of novelty. As pointed out by a reviewer, the proposed approach is an extension of the DeFMO paper. The authors use DeFMO for the silhouette consistency prediction, and the differentiable interpolation-based renderer (DIB-R) and Kaolin [39] for the image formation check. The approach combines these modules to solve the problem differently. While DeFMO renders the appearance and silhouette directly from the given input image and estimated background, this work instead estimates a 3D shape from the given input image (and background) and then renders the appearance and silhouette.
It is also worth noting that estimating 3D shape and differentiable rendering from an image have been previously studied [23, 24, 25, 26]. The fact that this work estimates it from a motion-blurred image doesn't necessarily make it significantly different from previous works. Moreover, the 3D estimation from the motion-blurred image is also constrained to specific types of prototypes, which makes it less generalizable to all types of fast-moving objects.
The approach demonstrates a new pipeline for jointly estimating shape, texture, and motion from challenging blurred images of moving objects. I am not sure the NeurIPS venue would be the best fit for this problem. Based on the reviews and the author rebuttals, I lean toward rejecting the paper.
| test | [
"4IdhvOhTLL",
"KKZnvj0gljk",
"iNcjFEUfxa",
"YRVUeubzack",
"WNTWsRSgktw",
"vS9GSqM2QoC",
"_8MROYax7Qx",
"r1oxDmseef",
"9-7MfMACkmq",
"RObEND-sAr",
"29yzIkVBRb4"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for your suggestions for improving the paper quality. To summarize, we will include examples with the failure cases and on images with noise and compression artifacts. Discussions on the runtime and the selection of prototypes will also be added. From the points raised by other reviewers, we will incorpora... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
9,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
4,
4
] | [
"KKZnvj0gljk",
"_8MROYax7Qx",
"WNTWsRSgktw",
"29yzIkVBRb4",
"RObEND-sAr",
"9-7MfMACkmq",
"r1oxDmseef",
"nips_2021_CyZF4CLnQ8D",
"nips_2021_CyZF4CLnQ8D",
"nips_2021_CyZF4CLnQ8D",
"nips_2021_CyZF4CLnQ8D"
] |
nips_2021_h596lT4RAH4 | Batched Thompson Sampling | Cem Kalkanli, Ayfer Ozgur | accept | In this paper, the authors propose an interesting batched Thompson Sampling algorithm that achieves optimal regret with $O(\log T)$ and $O(\log\log T)$ batches in the instance-dependent and instance-independent settings, respectively. I very much like the results and propose acceptance.
I would like to bring the authors' attention to two concurrent work on the exact same topic:
"Parallelizing Thompson Sampling", https://arxiv.org/abs/2106.01420
"Batched Thompson Sampling for Multi-Armed Bandits", https://arxiv.org/abs/2108.06812
I propose the authors discuss the abovementioned work in the final version of their paper to highlight the differences in algorithm design and the corresponding results. I believe it adds to the value of their work.
| train | [
"vVxqz8nMeoj",
"7b1L7v68u4",
"v8unSBonRAz",
"HNNmcgjhFp",
"eVMF23lzmfV",
"T_UaEQacmtp",
"3SSqwqEtfs7",
"O1Nll2875zc",
"J1ArZsuO3IW"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper tackles the batched multi-armed bandit problem, where arms are played in batches and feedback is obtained by the end of each batch. The authors suggest the Batched Thompson-Sampling algorithm, an anytime TS algorithm with Gaussian priors that are updated at the end of adaptive batches. The adaptive mecha... | [
6,
-1,
-1,
-1,
-1,
-1,
6,
6,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"nips_2021_h596lT4RAH4",
"T_UaEQacmtp",
"vVxqz8nMeoj",
"O1Nll2875zc",
"J1ArZsuO3IW",
"3SSqwqEtfs7",
"nips_2021_h596lT4RAH4",
"nips_2021_h596lT4RAH4",
"nips_2021_h596lT4RAH4"
] |
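For a concrete picture of the object this record discusses, here is a minimal batched Gaussian Thompson Sampling loop: posteriors are refreshed only at batch boundaries. The boundary rule used here (close a batch when some arm's pull count doubles) is a common doubling heuristic of our own choosing, not necessarily the paper's exact condition, and all the numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.3, 0.8])        # true arm means, unit-variance Gaussian noise
K, T = len(mu), 5000

n = np.zeros(K)                       # pull counts as of the last batch boundary
s = np.zeros(K)                       # reward sums as of the last batch boundary
n_live, s_live = n.copy(), s.copy()   # statistics accumulated inside the batch
batches = 0

for t in range(T):
    # N(0, 1) prior + unit observation noise => posterior N(s/(n+1), 1/(n+1)),
    # computed only from information frozen at the last batch boundary.
    theta = s / (n + 1) + rng.normal(size=K) / np.sqrt(n + 1)
    a = int(np.argmax(theta))
    r = mu[a] + rng.normal()
    n_live[a] += 1
    s_live[a] += r
    if np.any(n_live >= 2 * np.maximum(n, 1)):    # doubling rule: close the batch
        n, s = n_live.copy(), s_live.copy()
        batches += 1

print("batches:", batches, " pulls per arm:", n_live.astype(int))
```

With a doubling-style rule, the number of batches grows only roughly logarithmically in the horizon, which is the regime the $O(\log T)$ figure above refers to.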
nips_2021_ACFHNxVNvfk | Delayed Gradient Averaging: Tolerate the Communication Latency for Federated Learning | Ligeng Zhu, Hongzhou Lin, Yao Lu, Yujun Lin, Song Han | accept | This paper proposes an approach to make federated training more efficient by pipelining communication and computation. In particular, clients continue performing local updates on a previous model while they are communicating with the server. In addition to some theoretical convergence analysis for the smooth non-convex setting, the paper provides empirical evidence supporting the promise of this approach both in simulation and in an implementation using Raspberry Pis. The approach seems especially well-suited for cross-silo federated learning when clients have unreliable communication channels or communication latency is otherwise a bottleneck.
While the reviewers raised some significant concerns in their initial reviews, the author responses largely addressed these in a satisfactory way, leading multiple reviewers to raise their scores. As a result, I’m happy to recommend that we accept this paper. When preparing the camera-ready version, the authors are expected to implement the suggestions and corrections that were discussed during the rebuttal period. In particular, it is important to:
* Include the additional experiments (LEAF, CIFAR with 1k devices) either in supplementary material, or by moving other material to supplementary material, as you see fit,
* Clarify terminology and concepts (e.g., using “pipelining”, differentiating from asynchronous methods)
* Include clarifications about the implementation using a centralized parameter server vs. peer-to-peer synchronization
* Clarify the limitations around privacy guarantees, additional storage, and challenges around supporting other optimizers
* Add additional references suggested by reviewers
"tz-P3xi3q58",
"Pg13q9YELt",
"fPDLcbCQJA4",
"2gOz9uzkE_2",
"BjLGl8XwzHY",
"RGASBq1OBb8",
"DiKNmLX7c8p",
"AvaGsUybh1O",
"CC4sd692k9A",
"RsR4xRL8s2",
"iIes9NiPC4c",
"W9b6AMof-ty",
"PVcmKDwDJiA",
"dN0dD-PgVDR",
"FempfNdQQ3n",
"AZXN7h7MlO",
"9UoHY5u3xKG",
"T8nOP75Di5U",
"Jr88OU62rCa"... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
... | [
" Dear all reviewers and ACs,\n\nWe sincerely thank you for the time, efforts and valuable suggestions to further improve our work during the rebuttal. We genuiely appreciate the positive **7-7-6-5** scores from **ZCJE**, **e8gp**, **XyoU** and **9p8v**.\n\nSpecifically, \n* We thank reviewer **ZCJE** for apprecia... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_ACFHNxVNvfk",
"2gOz9uzkE_2",
"nips_2021_ACFHNxVNvfk",
"RGASBq1OBb8",
"JiYKnt4IOzb",
"DiKNmLX7c8p",
"W9b6AMof-ty",
"RsR4xRL8s2",
"dN0dD-PgVDR",
"Jr88OU62rCa",
"nips_2021_ACFHNxVNvfk",
"9UoHY5u3xKG",
"nips_2021_ACFHNxVNvfk",
"T8nOP75Di5U",
"nips_2021_ACFHNxVNvfk",
"nips_2021_A... |
nips_2021_2zCRcTafea | Focal Attention for Long-Range Interactions in Vision Transformers | Recently, Vision Transformer and its variants have shown great promise on various computer vision tasks. The ability to capture local and global visual dependencies through self-attention is the key to its success. But it also brings challenges due to quadratic computational overhead, especially for the high-resolution vision tasks (e.g., object detection). Many recent works have attempted to reduce the cost and improve model performance by applying either coarse-grained global attention or fine-grained local attention. However, both approaches cripple the modeling power of the original self-attention mechanism of multi-layer Transformers, leading to sub-optimal solutions. In this paper, we present focal attention, a new attention mechanism that incorporates both fine-grained local and coarse-grained global interactions. In this new mechanism, each token attends its closest surrounding tokens at the fine granularity and the tokens far away at a coarse granularity and thus can capture both short- and long-range visual dependencies efficiently and effectively. With focal attention, we propose a new variant of Vision Transformer models, called Focal Transformers, which achieve superior performance over the state-of-the-art (SoTA) Vision Transformers on a range of public image classification and object detection benchmarks. In particular, our Focal Transformer models with a moderate size of 51.1M and a large size of 89.8M achieve 83.6% and 84.0% Top-1 accuracy, respectively, on ImageNet classification at 224×224. When employed as the backbones, Focal Transformers achieve consistent and substantial improvements over the current SoTA Swin Transformers [44] across 6 different object detection methods. Our largest Focal Transformer yields 58.7/59.0 box mAPs and 50.9/51.3 mask mAPs on COCO mini-val/test-dev, and 55.4 mIoU on ADE20K for semantic segmentation, creating new SoTA on three of the most challenging computer vision tasks.
| accept | There is unanimous agreement on the novelty and solid results of the paper, and the AC agrees with the reviewers. The proposed idea in this paper is interesting, well-motivated, and could make good contributions to the general vision and representation learning communities. In this regard, the AC recommends acceptance as a spotlight. | train | [
"ecpEKMw12k",
"XJ7w5SJ2_Mc",
"mzzHtLNuiEC",
"0F5ZSya7wLT",
"2U0RbQScYdk",
"DkutifARsjP",
"1Q2FgfyjqUL",
"MVMaqLkPec",
"tdi1e2bnUm8",
"OaEv1g7ri8",
"GskYaL9dv2F"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a focal version of self-attention module for vision transformers. The query is kept at high resolution, but the number of keys is reduced depending on the relative position difference. This proposed attention is implemented within multi-scale windows. It shows good performance on top of recent ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"nips_2021_2zCRcTafea",
"GskYaL9dv2F",
"1Q2FgfyjqUL",
"nips_2021_2zCRcTafea",
"GskYaL9dv2F",
"OaEv1g7ri8",
"ecpEKMw12k",
"tdi1e2bnUm8",
"nips_2021_2zCRcTafea",
"nips_2021_2zCRcTafea",
"nips_2021_2zCRcTafea"
] |
nips_2021_Eyy4Tb1SY94 | Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints | We investigate how fairness relaxations scale to flexible classifiers like deep neural networks for images and text. We analyze an easy-to-use and robust way of imposing fairness constraints when training, and through this framework prove that some prior fairness surrogates exhibit degeneracies for non-convex models. We resolve these problems via three new surrogates: an adaptive data re-weighting, and two smooth upper-bounds that are provably more robust than some previous methods. Our surrogates perform comparably to the state-of-the-art on low-dimensional fairness benchmarks, while achieving superior accuracy and stability for more complex computer vision and natural language processing tasks.
| accept | The paper studies the effect of different fairness relaxations in the training of "fair" classifiers. The reviewers point out some limitations of the work, but think that the paper has ideas that could be interesting to the NeurIPS community, and therefore recommend acceptance. The authors are strongly encouraged to incorporate the feedback provided and, in particular, to work on improving the writing as suggested by the reviewers.
**Additional comments from AC:**
To Reviewer okm9's comment on comparison to Cotter et al., my understanding of their work is that they propose a Lagrangian-based solver, in which the classifier is optimized using surrogate relaxations, whereas the Lagrange multipliers or costs $\lambda$ are updated using the *unrelaxed fairness constraint*. I think this is reminiscent of how you perform grid search, although when there are multiple constraints (e.g. over large number of protected groups), a simple grid search may not be feasible, and methods such as theirs may be needed. I think it's important to at least have a discussion on this in the paper, and to be explicit about the difficulty of applying grid search with non-binary attributes.
To Reviewer C1h7's question on why the method of Agarwal et al. may *fail* with non-negative costs, I don't think the authors provided a response. I can see why the surrogate relaxations would become non-convex with negative costs, but given that training of deep networks would anyway result in a non-convex problem, would the surrogate being non-convex be all that problematic? As the reviewer suggested, I think it's important to elaborate on why their method may *fail* (which I think is a strong claim).
Finally, a couple of minor comments on the writing from a quick glance:
- In the abstract, you mention that "We propose an easy-to-use and robust way of imposing fairness constraints when training": are you referring to the Lagrangian formulation, which I think has already been well-studied in the literature and not necessarily a contribution of this paper, or specifically to the use of grid search to tune $\lambda$, which unfortunately is not scalable to multiple constraints / protected groups? It might be good to be a bit more specific about what the paper claims to be its proposal.
- In the intro, you mention that "We also provide some of the first empirical results on fair relaxations for large-scale image and text classification." Is this statement entirely accurate? I do remember CelebA being used commonly in prior fairness papers, e.g. https://arxiv.org/pdf/2004.01355.pdf.
I trust that the authors will put in the time and effort to make a thorough pass over the paper to polish up the writing, and to ensure that the statements made are all precise and accurate.
| train | [
"y6KrXf85xD2",
"t1JVJHupz-",
"SYJitTZwXO",
"F3yxF4LmuBp",
"01pqVbVbc3i",
"vaGX6JjrmY",
"YGeURBUo5Fa",
"QZZSd-fZxcr",
"RjLY5okdtL",
"5a8Cx6HYX4m",
"sl2Ff2Gvl0",
"lVlMVv4_JL",
"98e4QzGueeR"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
" I thank the authors for their comments and clarifications. The point regarding the constraint on $\\Delta_g$ being an inequality is convincing. For the point on generalizability of the proposed relaxation techniques, it would be helpful to provide more evidence theoretical or empirical in the updated version of t... | [
-1,
7,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
4,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"QZZSd-fZxcr",
"nips_2021_Eyy4Tb1SY94",
"YGeURBUo5Fa",
"nips_2021_Eyy4Tb1SY94",
"sl2Ff2Gvl0",
"nips_2021_Eyy4Tb1SY94",
"RjLY5okdtL",
"t1JVJHupz-",
"vaGX6JjrmY",
"98e4QzGueeR",
"F3yxF4LmuBp",
"nips_2021_Eyy4Tb1SY94",
"nips_2021_Eyy4Tb1SY94"
] |
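To make the Lagrangian-with-surrogate setup discussed in this record concrete: below is a generic logistic-regression objective with an absolute-value relaxation of the demographic-parity gap added at a fixed multiplier λ (the grid-searched knob mentioned in the meta-review). It is a textbook-style relaxation of our own, not any of the paper's specific surrogates, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 2.0
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(float)       # binary labels
grp = rng.random(n) < 0.5                     # binary protected attribute

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w):
    p = sigmoid(X @ w)
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[grp].mean() - p[~grp].mean()      # soft demographic-parity gap
    return ce + lam * abs(gap)                # fixed-multiplier penalty

# Crude numerical gradient descent; fine for a 5-dimensional toy.
w = np.zeros(d)
for _ in range(300):
    g = np.array([(objective(w + 1e-5 * e) - objective(w - 1e-5 * e)) / 2e-5
                  for e in np.eye(d)])
    w -= 0.5 * g

p = sigmoid(X @ w)
print("final DP gap:", round(abs(p[grp].mean() - p[~grp].mean()), 4))
```

In the deep-network setting that the paper targets, the same penalized objective would be minimized with autodiff rather than numerical gradients.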
nips_2021_k505ekjMzww | Residual Pathway Priors for Soft Equivariance Constraints | Models such as convolutional neural networks restrict the hypothesis space to a set of functions satisfying equivariance constraints, and improve generalization in problems by capturing relevant symmetries. However, symmetries are often only partially respected, preventing models with restriction biases from fitting the data. We introduce Residual Pathway Priors (RPPs) as a method for converting hard architectural constraints into soft priors, guiding models towards structured solutions while retaining the ability to capture additional complexity. RPPs are resilient to approximate or misspecified symmetries, and are as effective as fully constrained models even when symmetries are exact. We show that RPPs provide compelling performance on both model-free and model-based reinforcement learning problems, where contact forces and directional rewards violate the assumptions of equivariant networks. Finally, we demonstrate that RPPs have broad applicability, including dynamical systems, regression, and classification.
| accept | This paper presents a simple solution to an important problem - deciding what kind of inductive biases to use when we don’t have full confidence about the type of symmetries that exist in the domain. The solution is to combine a flexible model with one that has strong equivariance inductive biases.
All reviewers recognize the importance of the problem, clarity of the presentation and the simplicity of the approach. The proposed method being simple does not diminish its value, and simple methods are typically more applicable to a wide range of problems and more easily adopted by many.
Reviewer wXcB, who gave a 5 rating, does have a point about the baselines, though: this paper mostly contrasts the proposed approach with the two extremes - the MLP without any explicit symmetry built in and models like convnets that have strong equivariance built in. It would make the paper stronger to have a stronger baseline that is, for example, trained with data augmentation as a soft way to build some inductive bias into the model. The advantage of the proposed approach, as the authors explained in the discussion, is that it will be much more data-efficient and generalize much better in data-sparse regions.
After discussion among the reviewers and calibrating across a range of borderline papers we decided to accept this paper based on its clarity and elegance, and believe it can be a useful contribution to the community.
| train | [
"xfbtnCOpqD_",
"vi9DCbe4kb",
"_BomHqpcEPF",
"L5Yy2NuODhL",
"B-ZkeLlTvl1",
"5uITYGTKaiD",
"EMcNcL8tO5i",
"YVVy-P1xLo8",
"55da-t9RSoZ",
"lJ8YSgDtbIn"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We appreciate the response, but we respectfully disagree with the assessment. Although you may not find the strong performance of RPP surprising, we don’t see this as a drawback. Existing equivariant architectures can’t accommodate approximate symmetries and RPP can, and it benefits from doing so. Additionally, d... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"vi9DCbe4kb",
"5uITYGTKaiD",
"L5Yy2NuODhL",
"lJ8YSgDtbIn",
"YVVy-P1xLo8",
"55da-t9RSoZ",
"nips_2021_k505ekjMzww",
"nips_2021_k505ekjMzww",
"nips_2021_k505ekjMzww",
"nips_2021_k505ekjMzww"
] |
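The layer construction this record describes reduces, in the linear case, to summing a symmetry-constrained path and an unconstrained residual path while putting a much tighter Gaussian prior (a larger weight penalty) on the latter. A minimal sketch with a circular convolution as the exactly translation-equivariant path; the construction, sizes, and names are ours, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
k = rng.normal(size=n)                         # kernel of the equivariant path
R = 0.1 * rng.normal(size=(n, n))              # flexible residual path

# Circulant matrix: C @ x is circular convolution, which commutes with shifts.
C = np.array([np.roll(k, i) for i in range(n)]).T
layer = lambda x: C @ x + R @ x
shift = lambda v: np.roll(v, 1)

x = rng.normal(size=n)
print("equivariance gap, constrained path:",
      np.linalg.norm(C @ shift(x) - shift(C @ x)).round(12))
print("equivariance gap, full layer:      ",
      np.linalg.norm(layer(shift(x)) - shift(layer(x))).round(3))

# RPP-style prior: penalize the residual path far more heavily, so the layer
# stays near-equivariant unless the data pays for the deviation.
sigma_eq, sigma_res = 1.0, 0.1
prior_penalty = (k @ k) / sigma_eq**2 + (R * R).sum() / sigma_res**2
print("prior penalty:", round(prior_penalty, 2))
```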
nips_2021_1lCZrXJBpM | Optimal Algorithms for Stochastic Contextual Preference Bandits | Aadirupa Saha | accept | This paper studies the problem of k-armed contextual preference bandits and gives optimal efficient algorithms matching lower bound regret guarantees. Empirical studies are also included. The results appear interesting and novel. Despite some concerns about the motivation, the paper makes a sufficient contribution overall to warrant acceptance. | test | [
"9WVJGcdeV9T",
"Rr_R30X4zXv",
"9pdJrSefvvV",
"o2BdIhEGDBv",
"y_Uje4SAZCv",
"hy7AASUFYuu",
"6ZjgRjMtSn"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer TXKE,\n\nWe hope that our responses have adequately addressed your concerns. We would greatly appreciate it if you please reconsider the scores based on that. Of course, we would be happy to clarify if there is any other concern.\n\nThanks \nAuthors",
" Thanks for your careful reviews and insightf... | [
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"Rr_R30X4zXv",
"6ZjgRjMtSn",
"hy7AASUFYuu",
"y_Uje4SAZCv",
"nips_2021_1lCZrXJBpM",
"nips_2021_1lCZrXJBpM",
"nips_2021_1lCZrXJBpM"
] |
nips_2021_7nWS_1Gkqt | Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize | Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Kevin Scaman, Hoi-To Wai | accept | The paper provides a non-asymptotic analysis of linear stochastic approximation algorithms. It establishes interesting concentration bounds and shows that certain bounds are not achievable. The majority of the reviewers agree that the analysis is interesting and provides new results for the literature. I have read the paper carefully and also the discussions between the authors and the reviewers, and agree with the majority of the reviewers. One reviewer pointed out that its connections to other papers are not made clear and suggested some future directions to work on, and the authors have responded to these particular points. I do agree that the current form of the paper has not fully addressed these interesting new directions, but it has crossed the threshold.
"9YTGoltgiAL",
"SAf_1-UJWvQ",
"XmBCTX5tpht",
"pX8NU1-TTB-",
"cRvWTvw3KYt",
"mp3qWheUpfm",
"b_OZUHjcqC",
"4pfY6ZEOhN",
"SMzSxRJg7WU",
"6Kj4X5V6b74",
"t4BK0X4Tdmq",
"goDCYqq4zRO",
"mQmaFWHgaT",
"_MaDF-Wx1Du",
"7-0zc7r4tXt",
"56wgRMQRLdq",
"iaGBErUcT_M",
"I1nEUSVDsXG",
"7JvombQzWK",... | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_r... | [
" The authors have address most of my concerns. And based on my read of other reviews, I have updated score. The authors are suggested to have a comprehensive review of recent works on finite-sample analysis stochastic approximation schemes (including TD and Q-learning algorithms), to highlight the added value and ... | [
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
7
] | [
-1,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
1,
3,
3
] | [
"mQmaFWHgaT",
"nips_2021_7nWS_1Gkqt",
"nips_2021_7nWS_1Gkqt",
"goDCYqq4zRO",
"SMzSxRJg7WU",
"SMzSxRJg7WU",
"SMzSxRJg7WU",
"SMzSxRJg7WU",
"6Kj4X5V6b74",
"_MaDF-Wx1Du",
"nips_2021_7nWS_1Gkqt",
"XmBCTX5tpht",
"SAf_1-UJWvQ",
"ZN_nPMY8rtM",
"YdwpIWiPXV",
"t4BK0X4Tdmq",
"lgJlllCNAOu",
"7... |
nips_2021_IaM7U4J-w3c | Learning Large Neighborhood Search Policy for Integer Programming | We propose a deep reinforcement learning (RL) method to learn large neighborhood search (LNS) policy for integer programming (IP). The RL policy is trained as the destroy operator to select a subset of variables at each step, which is reoptimized by an IP solver as the repair operator. However, the combinatorial number of variable subsets prevents direct application of typical RL algorithms. To tackle this challenge, we represent all subsets by factorizing them into binary decisions on each variable. We then design a neural network to learn policies for each variable in parallel, trained by a customized actor-critic algorithm. We evaluate the proposed method on four representative IP problems. Results show that it can find better solutions than SCIP in much less time, and significantly outperform other LNS baselines with the same runtime. Moreover, these advantages notably persist when the policies generalize to larger problems. Further experiments with Gurobi also reveal that our method can outperform this state-of-the-art commercial solver within the same time limit.
| accept | All reviewers agree that this paper applies RL (Q-actor-critic method) to learn Large Neighborhood Search policy for integer linear programming (ILP) and achieves strong performance compared to default policies in popular ILP solvers (SCIP and Gurobi) in multiple domains (SC, MIS, CA and MC) that have been widely used in previous works. Some of its proposed algorithm is novel (e.g., training an independent policy for each variable, new network architecture design), the presentation is clear, and experiments are well-designed and convincing, showing the final performance the community cares (i.e., the quality of the solution given fixed wall clock time). The authors also address many of the reviewers' concerns in the rebuttal, by including comparison with Gasse et al and different setting of the ILP solvers, further solidifying the work.
Overall, I happily accept the work. It can be a strong contribution to the learning-to-optimize community and I would highly encourage the authors to open source the code. | test | [
"QBSCOtF6Uu0",
"5GXsRa3vjE-",
"R3XgzNPTzPG",
"Mf_KswgQ6Ng",
"ktpU0ePo9tG",
"XTZll_Cc3gQ",
"fwGYOIv2Mmf",
"SRfhdQEOgxR",
"VvsMbDc1U2c",
"rhwMjydZWh",
"5hZy9M7a0vY"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes to augment large neighborhood search (LNS) method in integer programming solver (e.g., SCIP) with a learning component. in particular, the paper provides a MDP formulation, parameterization and RL training pipeline for the problem. The paper also shows significant empirical performance gains com... | [
6,
-1,
7,
-1,
7,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
-1,
2,
-1,
4,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_IaM7U4J-w3c",
"Mf_KswgQ6Ng",
"nips_2021_IaM7U4J-w3c",
"XTZll_Cc3gQ",
"nips_2021_IaM7U4J-w3c",
"R3XgzNPTzPG",
"QBSCOtF6Uu0",
"5hZy9M7a0vY",
"nips_2021_IaM7U4J-w3c",
"ktpU0ePo9tG",
"nips_2021_IaM7U4J-w3c"
] |
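The destroy-repair loop at the heart of this record, in its most stripped-down form: a "destroy" step frees a subset of variables, and a solver re-optimizes them with the others fixed. Below the problem is a toy 0-1 knapsack, the destroy policy is uniform random (the paper learns it with RL), and the repair step is brute force instead of an IP solver; all of these substitutions are ours, chosen only to keep the sketch self-contained and runnable.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, k = 20, 8                                   # variables; neighborhood size
val = rng.integers(1, 20, size=n)              # item values
wgt = rng.integers(1, 20, size=n)              # item weights
cap = int(wgt.sum() // 3)

def score(x):
    return int(val @ x) if wgt @ x <= cap else -1   # infeasible solutions rejected

x = np.zeros(n, dtype=int)                     # trivial feasible incumbent
for it in range(30):
    free = rng.choice(n, size=k, replace=False)     # destroy: pick variables to redo
    best, best_s = x, score(x)
    for bits in product((0, 1), repeat=k):          # repair: exact re-optimization
        cand = x.copy()
        cand[free] = bits
        if score(cand) > best_s:
            best, best_s = cand, score(cand)
    x = best
print("incumbent value after LNS:", score(x))
```

In the paper's setting, the destroy step is exactly where the learned policy sits, making a binary keep-or-reoptimize decision for each variable in parallel.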
nips_2021_vLvsnP64VC0 | Dynamic Trace Estimation | We study a dynamic version of the implicit trace estimation problem. Given access to an oracle for computing matrix-vector multiplications with a dynamically changing matrix A, our goal is to maintain an accurate approximation to A's trace using as few multiplications as possible. We present a practical algorithm for solving this problem and prove that, in a natural setting, its complexity is quadratically better than the standard solution of repeatedly applying Hutchinson's stochastic trace estimator. We also provide an improved algorithm assuming additional common assumptions on A's dynamic updates. We support our theory with empirical results, showing significant computational improvements on three applications in machine learning and network science: tracking moments of the Hessian spectral density during neural network optimization, counting triangles and estimating natural connectivity in a dynamically changing graph.
| accept | This paper considers estimating the trace of a sequence of matrices $A_1, A_2, \dots, A_t$ under the constraint that $\|A_i - A_{i+1}\|_2$ is small. The proposed algorithm outperforms the baseline which repeatedly applies Hutchinson's stochastic trace estimator in terms of sample complexity. This paper receives unanimous support from the reviewers. Thus, I recommend acceptance. | train | [
"dNYr16jozAd",
"fetpcVuu5lH",
"tEoV3H1EiJM",
"c2XgyCpb374",
"-JpZGgApmee",
"na1C1-O46_C",
"8RtVXGiYr6s",
"Nk5eArFRjD",
"Ak7nVhi3RZ1",
"kprcik18Ryu",
"9mcUOPC7C6p",
"qlhrOYonj_H"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This work considers the problem of dynamic trace estimation. Specifically, if there is a sequence of matrices $A_1, \\dots, A_n$, and the norm of $A_i - A_{i-1}$ is small relative to the norm of $A_i$, we can estimate the trace of each matrix efficiently. The paper gives an efficient algorithm, the convergence ana... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_vLvsnP64VC0",
"8RtVXGiYr6s",
"c2XgyCpb374",
"na1C1-O46_C",
"Nk5eArFRjD",
"dNYr16jozAd",
"qlhrOYonj_H",
"9mcUOPC7C6p",
"kprcik18Ryu",
"nips_2021_vLvsnP64VC0",
"nips_2021_vLvsnP64VC0",
"nips_2021_vLvsnP64VC0"
] |
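For the Dynamic Trace Estimation record above, a minimal sketch of Hutchinson's stochastic trace estimator, the static baseline that the paper's dynamic algorithm improves on. The function name, the Rademacher query vectors, and the test matrix are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def hutchinson_trace(matvec, dim, num_queries, seed=None):
    """Estimate tr(A) from matrix-vector products alone: for Rademacher
    vectors g (i.i.d. +/-1 entries), E[g^T A g] = tr(A), so averaging
    g^T (A g) over num_queries draws gives an unbiased estimate."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_queries):
        g = rng.choice([-1.0, 1.0], size=dim)  # Rademacher query vector
        total += g @ matvec(g)
    return total / num_queries

# Usage: estimate the trace of a random symmetric matrix.
A = np.random.default_rng(0).standard_normal((100, 100))
A = (A + A.T) / 2
print(hutchinson_trace(lambda v: A @ v, dim=100, num_queries=500), np.trace(A))
```

The paper's dynamic algorithm amortizes such queries across the sequence by exploiting that consecutive matrices differ only slightly; the sketch above is the per-matrix baseline it is compared against.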
nips_2021_Xv7rBttjWFT | Provable Representation Learning for Imitation with Contrastive Fourier Features | In imitation learning, it is common to learn a behavior policy to match an unknown target policy via max-likelihood training on a collected set of target demonstrations. In this work, we consider using offline experience datasets -- potentially far from the target distribution -- to learn low-dimensional state representations that provably accelerate the sample-efficiency of downstream imitation learning. A central challenge in this setting is that the unknown target policy itself may not exhibit low-dimensional behavior, and so there is a potential for the representation learning objective to alias states in which the target policy acts differently. Circumventing this challenge, we derive a representation learning objective that provides an upper bound on the performance difference between the target policy and a low-dimensional policy trained with max-likelihood, and this bound is tight regardless of whether the target policy itself exhibits low-dimensional structure. Moving to the practicality of our method, we show that our objective can be implemented as contrastive learning, in which the transition dynamics are approximated by either an implicit energy-based model or, in some special cases, an implicit linear model with representations given by random Fourier features. Experiments on both tabular environments and high-dimensional Atari games provide quantitative evidence for the practical benefits of our proposed objective.
| accept | After discussion, all reviewers believe that the rebuttal has addressed their concerns and agree on acceptance. | train | [
"fPQLJKHGGHW",
"3XeFvw0bszp",
"4JbjR4Yvm17",
"9GjRJDcZVcI",
"1cZR5HHzlIp",
"_phOPr7RReT",
"Z7G2wD_DGso",
"kwlwN45Yqm6",
"eB-na0jRz3C",
"pxQ19eigZQv",
"xDU3eW-nSoe",
"Gmz0W0w3E5Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors study the impact of representation learning when applied to behavioral cloning (BC), and present a series of interesting theoretical results that establish its utility in this context. In particular, the authors show that there is a direct relationship between the quality of the state representation an... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"nips_2021_Xv7rBttjWFT",
"kwlwN45Yqm6",
"9GjRJDcZVcI",
"1cZR5HHzlIp",
"_phOPr7RReT",
"Gmz0W0w3E5Q",
"xDU3eW-nSoe",
"fPQLJKHGGHW",
"pxQ19eigZQv",
"nips_2021_Xv7rBttjWFT",
"nips_2021_Xv7rBttjWFT",
"nips_2021_Xv7rBttjWFT"
] |
nips_2021_wFp6kmQELgu | MICo: Improved representations via sampling-based state similarity for Markov decision processes | We present a new behavioural distance over the state space of a Markov decision process, and demonstrate the use of this distance as an effective means of shaping the learnt representations of deep reinforcement learning agents. While existing notions of state similarity are typically difficult to learn at scale due to high computational cost and lack of sample-based algorithms, our newly-proposed distance addresses both of these issues. In addition to providing detailed theoretical analyses, we provide empirical evidence that learning this distance alongside the value function yields structured and informative representations, including strong results on the Arcade Learning Environment benchmark.
| accept | This seems like a solid paper that develops a new approach to the interesting problem of inducing bias in learning through a notion of state similarity. The notion overcomes some limitations of previous notions that were based on bisimulation concepts. The reviewers are aligned in their ratings of the paper. | train | [
"-Bu4d77s7p4",
"8y7BMY42FNR",
"YQYZG2W-7Az",
"53EL-6Z7W5_",
"sstMgDk4l-T",
"fkV0PmG9Fh_",
"XL_ng537d3i",
"E6EtfN_w8uG",
"Cx9h0Gy7KDc",
"R6LZRI5zP57",
"pKlJuMWcxAb",
"RW_0jHjhhg9",
"ISCDZabY8g6",
"iPrc3F2LtGH",
"_zaui0QOtaK",
"h4HllVVvF4",
"mtOllgztuR",
"wu2pF40ADzv",
"qwBecbx7GKe... | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_re... | [
"The paper proposes a new behavioral distance over the state space of an MDP, which can be used to learn representations in deep RL tasks. The proposed distance has three advantages over the existing bisimulation metric: (1) it has better computational complexity; (2) it allows for online approximation; and (3) it ... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nips_2021_wFp6kmQELgu",
"fkV0PmG9Fh_",
"sstMgDk4l-T",
"nips_2021_wFp6kmQELgu",
"fkV0PmG9Fh_",
"pKlJuMWcxAb",
"Cx9h0Gy7KDc",
"mtOllgztuR",
"h4HllVVvF4",
"RW_0jHjhhg9",
"ISCDZabY8g6",
"iPrc3F2LtGH",
"_zaui0QOtaK",
"P3eANmUxIiy",
"-Bu4d77s7p4",
"qwBecbx7GKe",
"wu2pF40ADzv",
"nips_202... |
nips_2021_M5h1l1SldlF | Counterfactual Explanations in Sequential Decision Making Under Uncertainty | Methods to find counterfactual explanations have predominantly focused on one-step decision making processes. In this work, we initiate the development of methods to find counterfactual explanations for decision making processes in which multiple, dependent actions are taken sequentially over time. We start by formally characterizing a sequence of actions and states using finite horizon Markov decision processes and the Gumbel-Max structural causal model. Building upon this characterization, we formally state the problem of finding counterfactual explanations for sequential decision making processes. In our problem formulation, the counterfactual explanation specifies an alternative sequence of actions differing in at most k actions from the observed sequence that could have led the observed process realization to a better outcome. Then, we introduce a polynomial time algorithm based on dynamic programming to build a counterfactual policy that is guaranteed to always provide the optimal counterfactual explanation on every possible realization of the counterfactual environment dynamics. We validate our algorithm using both synthetic and real data from cognitive behavioral therapy and show that the counterfactual explanations our algorithm finds can provide valuable insights to enhance sequential decision making under uncertainty.
| accept | Based on the reviews, ratings, and discussion, I recommend acceptance of this work.
* there are no fundamental flaws in this work
* while the required background for reading the paper is heavy, the overall paper is well written.
* the significance of the contribution is high | train | [
"KJJWCEW-5bQ",
"6WJ0bkWgw9k",
"LWxXW_jpC8e",
"8TxTpm8w4bK",
"922XBWTH6Nh",
"g2tpXVGayb",
"tL7Tg54Qlbo",
"xDaB88sS-yh",
"f6gfnylA0S",
"BLxx0fPh94"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a novel technique to generate counterfactual explanations for Markov decision processes (MDPs). Technically, it first remodels the state transition of MDPs with a Gumbel-Max structural causal model. Then, it formalizes the counterfactual explanation generation as an optimization problem and pro... | [
4,
-1,
-1,
7,
-1,
-1,
-1,
-1,
6,
8
] | [
3,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_M5h1l1SldlF",
"LWxXW_jpC8e",
"922XBWTH6Nh",
"nips_2021_M5h1l1SldlF",
"8TxTpm8w4bK",
"KJJWCEW-5bQ",
"BLxx0fPh94",
"f6gfnylA0S",
"nips_2021_M5h1l1SldlF",
"nips_2021_M5h1l1SldlF"
] |
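For the counterfactual-explanations record above, a minimal sketch of the Gumbel-Max structural causal model mechanism that the abstract uses to characterize state transitions. Fixing the exogenous Gumbel noise and swapping in an alternative action's transition probabilities yields a counterfactual next state. In actual counterfactual inference the noise would be sampled from its posterior given the observed transition; here it is simply drawn and reused to show the mechanism, and all probabilities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def gumbel_max_sample(log_probs, gumbels):
    # Gumbel-max trick: argmax(log p + G) with i.i.d. Gumbel noise G
    # is distributed as Categorical(p).
    return int(np.argmax(log_probs + gumbels))

# Factual transition under the observed action's distribution p.
p = np.array([0.7, 0.2, 0.1])
gumbels = rng.gumbel(size=p.size)        # exogenous noise of the SCM
factual_next = gumbel_max_sample(np.log(p), gumbels)

# Counterfactual: keep the SAME noise, swap in the alternative action's q.
q = np.array([0.1, 0.3, 0.6])
counterfactual_next = gumbel_max_sample(np.log(q), gumbels)
print(factual_next, counterfactual_next)
```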
nips_2021_b5ybNM1d5O | Streaming Linear System Identification with Reverse Experience Replay | We consider the problem of estimating a linear time-invariant (LTI) dynamical system from a single trajectory via streaming algorithms, which is encountered in several applications including reinforcement learning (RL) and time-series analysis. While the LTI system estimation problem is well-studied in the offline setting, the practically important streaming/online setting has received little attention. Standard streaming methods like stochastic gradient descent (SGD) are unlikely to work since streaming points can be highly correlated. In this work, we propose a novel streaming algorithm, SGD with Reverse Experience Replay (SGD-RER), that is inspired by the experience replay (ER) technique popular in the RL literature. SGD-RER divides data into small buffers and runs SGD backwards on the data stored in the individual buffers. We show that this algorithm exactly deconstructs the dependency structure and obtains information theoretically optimal guarantees for both parameter error and prediction error. Thus, we provide the first -- to the best of our knowledge -- optimal SGD-style algorithm for the classical problem of linear system identification with a first order oracle. Furthermore, SGD-RER can be applied to more general settings like sparse LTI identification with known sparsity pattern, and non-linear dynamical systems. Our work demonstrates that the knowledge of data dependency structure can aid us in designing statistically and computationally efficient algorithms which can "decorrelate" streaming samples.
| accept | This paper considers the system identification problem in a vector autoregressive process using a single-pass SGD type algorithm with reverse experience replay. The analysis is interesting, and compared to existing work, it has improved the dependence on the mixing time and dimension. Although it has not completely resolved the dependence on the mixing time, I agree with the reviewers that it is already an interesting contribution and a concrete step towards a better understanding of system identifications in linear systems. I am happy to recommend acceptance. | train | [
"ksuhsYUarjI",
"WgC-wtpOzAW",
"-k0T6-6yNj",
"Ej-c6GrGH8t",
"TQ1nllHEChM",
"ebx7QCzYEej",
"vO3VXAz_aum",
"7tlEyB-JNZw",
"vBtwqegCfTF",
"qZvgjzOOzq3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for responding to my comments. I still lean towards acceptance, and will maintain my score.",
"This paper considers systems identification in vectorized auto-regressive model with stochastic noise. The paper focuses on the practicality of system identification with nearly optimal computation and sample c... | [
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"ebx7QCzYEej",
"nips_2021_b5ybNM1d5O",
"TQ1nllHEChM",
"7tlEyB-JNZw",
"7tlEyB-JNZw",
"vBtwqegCfTF",
"WgC-wtpOzAW",
"qZvgjzOOzq3",
"nips_2021_b5ybNM1d5O",
"nips_2021_b5ybNM1d5O"
] |
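For the streaming system-identification record above, a minimal sketch of the reverse-experience-replay idea the abstract describes: stream the trajectory into small buffers and run SGD over each buffer in reverse order. The dimensions, step size, and buffer size are illustrative assumptions, and details of the actual SGD-RER algorithm (e.g., the small inter-buffer gaps used for decorrelation, and iterate averaging) are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, B, lr = 5, 20000, 50, 0.01
N = rng.standard_normal((d, d))
A_true = 0.7 * N / np.linalg.norm(N, 2)   # stable LTI system (spectral norm 0.7)

# One long trajectory x_{t+1} = A* x_t + noise; samples are highly correlated.
X = np.zeros((T + 1, d))
for t in range(T):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(d)

A_hat = np.zeros((d, d))
for start in range(0, T, B):              # fill one small buffer at a time
    buf = [(X[t], X[t + 1]) for t in range(start, min(start + B, T))]
    for x, y in reversed(buf):            # replay the buffer in REVERSE order
        A_hat -= lr * np.outer(A_hat @ x - y, x)  # SGD step on 0.5*||A x - y||^2
print(np.linalg.norm(A_hat - A_true))
```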
nips_2021_nlEQMVBD359 | SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness | Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, Do-Guk Kim, Jinwoo Shin | accept | All reviewers praised the simplicity of the idea of combining MixUp and adversarial training to boost certified accuracy and its significance.
At the same time, reviewers questioned the significance of the performance boost in terms of the metric used (ACR) and noted potentially different behaviors across datasets. Furthermore, the presentation and writing were deemed improvable.
Authors were quite responsive during the rebuttal and provided additional explanations and experimental results that answered the major concerns raised by the reviewers.
Fixing the presentation as suggested is expected for the camera-ready version.
| train | [
"u2ZbliDay8c",
"9wFd50x8gH",
"EH1Q9CmIYwI",
"gWt4J_NkFWL",
"gcKF8jWmN92",
"TvDS3jAKF0s",
"OorrRyPgYg",
"9_tdGv3Vu9t",
"YfWeBigznNX",
"Ibm4tkg6fCH",
"utd3JXVR25",
"r-J6xPGHLyJ",
"lZ6WZj7ErA",
"nuoMcs_pNl1",
"F3sYK3s_772",
"PiVWRwEva-Q",
"ibmuagIiXCE"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper looks to improve upon SmoothAdv - a methodology that employs adversarial training on a smoothed classifier to further improve certified robustness. The methodology employed looks to tackling overconfident examples rather than adversarial examples by utilising MixUp. They demonstrate that we can combine ... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"nips_2021_nlEQMVBD359",
"gcKF8jWmN92",
"gWt4J_NkFWL",
"r-J6xPGHLyJ",
"utd3JXVR25",
"lZ6WZj7ErA",
"nips_2021_nlEQMVBD359",
"YfWeBigznNX",
"nuoMcs_pNl1",
"nips_2021_nlEQMVBD359",
"PiVWRwEva-Q",
"ibmuagIiXCE",
"u2ZbliDay8c",
"F3sYK3s_772",
"nips_2021_nlEQMVBD359",
"nips_2021_nlEQMVBD359"... |
nips_2021_ffj1YFEMqvn | Action-guided 3D Human Motion Prediction | The ability to forecast future human motion is important for human-machine interaction systems to understand human behaviors and interact accordingly. In this work, we focus on developing models to predict future human motion from past observed video frames. Motivated by the observation that human motion is closely related to the action being performed, we propose to explore action context to guide motion prediction. Specifically, we construct an action-specific memory bank to store representative motion dynamics for each action category, and design a query-read process to retrieve some motion dynamics from the memory bank. The retrieved dynamics are consistent with the action depicted in the observed video frames and serve as strong prior knowledge to guide motion prediction. We further formulate an action constraint loss to ensure the global semantic consistency of the predicted motion. Extensive experiments demonstrate the effectiveness of the proposed approach, and we achieve state-of-the-art performance on 3D human motion prediction.
| accept | All reviewers recommend accepting this paper.
The author response was well received and cleared up some of the minor points about the work.
This paper examines the idea of creating an action-specific memory bank to store motion dynamics for different action categories.
The work explores a query-read process to retrieve information from the memory bank. The work provides a good set of experiments to demonstrate the effectiveness of the proposed approach, and asserts that they have achieved state-of-the-art performance on 3D human motion prediction.
The AC recommends accepting this paper. | train | [
"LcVGauwUYen",
"0RKM_v-UoRd",
"6Qxb7CM7akm",
"6r6F3dzS_V7",
"7IfaApHLafR",
"GmI_lbTYW0O",
"0U38vJd4Vps",
"nbKP6EbRX-Q",
"Bgqc20JMiwx",
"iEwIjJEsmGT",
"IDne3ymPb15",
"TlM1y_DtH1c",
"WBjzj2VI4wH"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper introduces an external memory component to the baseline method PHD on the task of human action prediction. This external memory component is designed so that it clusters common actions across videos conditioned on the action class. A query-read mechanism is designed to train and retrieve similar actions ... | [
7,
6,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
8
] | [
4,
4,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
"nips_2021_ffj1YFEMqvn",
"nips_2021_ffj1YFEMqvn",
"0U38vJd4Vps",
"TlM1y_DtH1c",
"iEwIjJEsmGT",
"nips_2021_ffj1YFEMqvn",
"0RKM_v-UoRd",
"nips_2021_ffj1YFEMqvn",
"0RKM_v-UoRd",
"GmI_lbTYW0O",
"WBjzj2VI4wH",
"LcVGauwUYen",
"nips_2021_ffj1YFEMqvn"
] |
nips_2021_a9WXj5XV5mK | Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks | Adversarial attacks based on randomized search schemes have obtained state-of-the-art results in black-box robustness evaluation recently. However, as we demonstrate in this work, their efficiency in different query budget regimes depends on manual design and heuristic tuning of the underlying proposal distributions. We study how this issue can be addressed by adapting the proposal distribution online based on the information obtained during the attack. We consider Square Attack, which is a state-of-the-art score-based black-box attack, and demonstrate how its performance can be improved by a learned controller that adjusts the parameters of the proposal distribution online during the attack. We train the controller using gradient-based end-to-end training on a CIFAR10 model with white box access. We demonstrate that plugging the learned controller into the attack consistently improves its black-box robustness estimate in different query regimes by up to 20% for a wide range of different models with black-box access. We further show that the learned adaptation principle transfers well to the other data distributions such as CIFAR100 or ImageNet and to the targeted attack setting.
| accept | First, we would like to commend the authors on an interesting submission and a heroic effort in the response period, including several experimental findings to help address reviewer concerns. It is extremely difficult to make a decision for this paper --- it is on the borderline. The scores are all borderline, tending on average towards reject, without a clear champion. However, the authors do make a reasonably good case that they are proposing a query-efficient black-box attack. In the rebuttal they also attempt to address ethical concerns. On the other hand, there are remaining concerns about the technical contribution, particularly how specific the approach is to the square attack. The questions are mostly around how applicable the ideas are to other attacks, and how much effort would be required to design controllers for other attacks. In any case, the effort the authors have made in responding to reviewers should be highly useful in subsequent revisions.
"ND8LPTgjIm",
"uzKf_jVJY46",
"S_8e6GyEsI_",
"md8ZhzktyzB",
"g3yinm0NCr1",
"uSfxbFav5Zr",
"FYhoOvoX9-3",
"8UU4CStZ_Mv",
"4P2KD2NAKyC",
"DelWua6hzv0",
"GkfaoH9hOZk",
"j2SaJDm9yR",
"-E9InKgnbtL",
"mvZfMkeCksK",
"wcGZhBX4qx",
"EBs2Jdygl3U"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes meta-learning a black box adversarial attack strategy. Specifically, the authors propose meta-learning controllers that output the parameters of a square attack. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of a meta-learned square attack relative to a non meta-learned bas... | [
6,
-1,
5,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
4,
-1,
3,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_a9WXj5XV5mK",
"DelWua6hzv0",
"nips_2021_a9WXj5XV5mK",
"4P2KD2NAKyC",
"nips_2021_a9WXj5XV5mK",
"FYhoOvoX9-3",
"8UU4CStZ_Mv",
"GkfaoH9hOZk",
"S_8e6GyEsI_",
"ND8LPTgjIm",
"g3yinm0NCr1",
"nips_2021_a9WXj5XV5mK",
"EBs2Jdygl3U",
"nips_2021_a9WXj5XV5mK",
"nips_2021_a9WXj5XV5mK",
"n... |
nips_2021_h6EWbx5xTj7 | Validating the Lottery Ticket Hypothesis with Inertial Manifold Theory | Despite achieving remarkable efficiency, traditional network pruning techniques often follow manually-crafted heuristics to generate pruned sparse networks. It is hard for such heuristic pruning strategies to guarantee that the pruned networks achieve test accuracy comparable to that of the original dense ones. Recent works have empirically identified and verified the Lottery Ticket Hypothesis (LTH): a randomly-initialized dense neural network contains an extremely sparse subnetwork, which can be trained to achieve similar accuracy to the former. Due to the lack of theoretical evidence, they often need to run multiple rounds of expensive training and pruning over the original large networks to discover the sparse subnetworks with low accuracy loss. By leveraging dynamical systems theory and inertial manifold theory, this work theoretically verifies the validity of the LTH. We explore the possibility of theoretically lossless pruning as well as one-time pruning, compared with existing neural network pruning and LTH techniques. We reformulate the neural network optimization problem as a gradient dynamical system and reduce this high-dimensional system onto inertial manifolds to obtain a low-dimensional system regarding pruned subnetworks. We demonstrate the precondition and existence of pruned subnetworks and prune the original networks in terms of the gap in their spectrum that makes the subnetworks have the smallest dimensions.
| accept | This paper aims to provide a theoretical study and justification of the validity of the lottery ticket hypothesis, which indeed has recently been of great interest to the deep learning community. While one reviewer suggested the results or conclusion here are somewhat overclaimed, the work has generally been favorably received by most of the reviewers. The results provided here are somewhat expected, but nevertheless, it is good to see work done to properly derive and establish them. Further, in their responses the authors have addressed most (if not all) major reviewer concerns, and have subsequently also emphasized their commitment to implementing appropriate revisions in their manuscript. In total, three reviewers have scored above the acceptance threshold (one outright accept and two marginally leaning to accept). The remaining reviewer has raised their score from reject to marginally below the threshold, but it is my impression that they would not object to the paper being accepted. Therefore, I agree with the majority of reviewers and also lean towards recommending acceptance of this paper. | train | [
"42aQfiso_Wv",
"tybuQFx3jsp",
"cHrdro1GgT9",
"YhoFCY08MSj",
"W7SO0Vswo5",
"SLGY_qd69Pz",
"8f1q2jGmLl",
"yWNDWLQAdEU",
"vC5rxSba_N",
"AHgzGtEOj-x",
"JcfTT3eW126"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
" Thank you very much for your insightful comments. We have tried our best to clarify and address the concerns and comments (i.e., sparse network and semi-linear ODE) by the reviewer in the initial response. We are glad to answer and clarify any further questions and advices from the reviewer for better readabilit... | [
-1,
6,
-1,
5,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
-1,
3,
-1,
4,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"yWNDWLQAdEU",
"nips_2021_h6EWbx5xTj7",
"W7SO0Vswo5",
"nips_2021_h6EWbx5xTj7",
"vC5rxSba_N",
"JcfTT3eW126",
"tybuQFx3jsp",
"AHgzGtEOj-x",
"YhoFCY08MSj",
"nips_2021_h6EWbx5xTj7",
"nips_2021_h6EWbx5xTj7"
] |
nips_2021_kLWGdQYsmC5 | Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training | Deep learning (DL) systems have been gaining popularity in critical tasks such as credit evaluation and crime prediction. Such systems demand fairness. Recent work shows that DL software implementations introduce variance: identical DL training runs (i.e., identical network, data, configuration, software, and hardware) with a fixed seed produce different models. Such variance could make DL models and networks violate fairness compliance laws, resulting in negative social impact. In this paper, we conduct the first empirical study to quantify the impact of software implementation on the fairness and its variance of DL systems. Our study of 22 mitigation techniques and five baselines reveals up to 12.6% fairness variance across identical training runs with identical seeds. In addition, most debiasing algorithms have a negative impact on the model such as reducing model accuracy, increasing fairness variance, or increasing accuracy variance. Our literature survey shows that while fairness is gaining popularity in artificial intelligence (AI) related conferences, only 34.4% of the papers use multiple identical training runs to evaluate their approach, raising concerns about their results’ validity. We call for better fairness evaluation and testing protocols to improve fairness and fairness variance of DL systems as well as DL research validity and reproducibility at large.
| accept | This is a very empirical paper that documents an interesting and surprising fact: fixed-seed identical training runs (FIT runs) can have surprisingly different fairness characteristics.
I think this is an important fact that deserves to be known widely in the community and will likely be used to justify a lot of follow-up work. The paper also includes a systematic literature survey to argue that this is both a little-known fact and one that can affect the validity of some existing results.
That said, while the specific focus on fairness is novel, and I don't disagree with the authors that more people need to be aware of the variance that can be introduced by various sources of implementation-level non-determinism, I think the implications the paper claims for these results are mostly things that people should already know to do. For example, the suggestion that people should use statistical tests to check the validity of any proposed improvements is a good one, but it is not one that follows from this research. Given that training algorithms are fundamentally stochastic, applying statistical tests to confirm claimed improvements should be a requirement even if implementation-level non-determinism were not an issue. | val | [
"ABIL1JfZ0IU",
"dfKsSB3WIO",
"DGYJ2g_-7l",
"udeqYDlozcU",
"l6RLhdeR4up",
"WtVJMUJlHP",
"2yUiElaTM9l"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper investigates the impact of so-called implementation-level nondeterminism of machine learning frameworks on state-of-the-art fairness methods. Unlike algorithmic nondeterminism, which is caused by, e.g., dropout or random shuffling and can be rendered deterministic by fixing the random seed, implementati... | [
7,
-1,
-1,
-1,
-1,
7,
5
] | [
4,
-1,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_kLWGdQYsmC5",
"2yUiElaTM9l",
"2yUiElaTM9l",
"ABIL1JfZ0IU",
"WtVJMUJlHP",
"nips_2021_kLWGdQYsmC5",
"nips_2021_kLWGdQYsmC5"
] |
nips_2021_lmOF2OxxSz | Rectangular Flows for Manifold Learning | Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allow optimization of their parameters to be efficiently performed via maximum likelihood. However, data of interest are typically assumed to live in some (often unknown) low-dimensional manifold embedded in a high-dimensional ambient space. The result is a modelling mismatch since -- by construction -- the invertibility requirement implies high-dimensional support of the learned distribution. Injective flows, mappings from low- to high-dimensional spaces, aim to fix this discrepancy by learning distributions on manifolds, but the resulting volume-change term becomes more challenging to evaluate. Current approaches either avoid computing this term entirely using various heuristics, or assume the manifold is known beforehand and therefore are not widely applicable. Instead, we propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model, relying on careful use of automatic differentiation and techniques from numerical linear algebra. Both approaches perform end-to-end nonlinear manifold learning and density estimation for data projected onto this manifold. We study the trade-offs between our proposed methods, empirically verify that we outperform approaches ignoring the volume-change term by more accurately learning manifolds and the corresponding distributions on them, and show promising results on out-of-distribution detection. Our code is available at https://github.com/layer6ai-labs/rectangular-flows.
| accept | Congratulations on the acceptance of your paper! Please incorporate changes, edits and additional promised experiments from "Author Discussion" in the final paper/appendix. | train | [
"Pb7zJR6QA5U",
"1RyR5hI_cbK",
"6XQmzXRPk9H",
"5mTM4DTJjj",
"f-AYvpwhr02",
"PvbwCwsHz08",
"Um7BE1Ljgn6",
"9tZygTrVGvu",
"8Kp8A-mvkMK",
"hJTeDIhiJej",
"Q64xxvujVjV",
"YhdwG8XKNoS",
"2yrvRU_rGC",
"vyRYCAllrx"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper addresses the problem of constructing injective normalizing flows, which connect some low-dimensional space with the data manifold of interest in the high-dimensional space. Typically injective (or rectangular) flows are composed of two square flows with the “upsampling” padding layer between them. Beca... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"nips_2021_lmOF2OxxSz",
"9tZygTrVGvu",
"PvbwCwsHz08",
"nips_2021_lmOF2OxxSz",
"vyRYCAllrx",
"2yrvRU_rGC",
"YhdwG8XKNoS",
"Pb7zJR6QA5U",
"Q64xxvujVjV",
"nips_2021_lmOF2OxxSz",
"nips_2021_lmOF2OxxSz",
"nips_2021_lmOF2OxxSz",
"nips_2021_lmOF2OxxSz",
"nips_2021_lmOF2OxxSz"
] |
nips_2021_dPdrrr-YrgX | On the Generative Utility of Cyclic Conditionals | Chang Liu, Haoyue Tang, Tao Qin, Jintao Wang, Tie-Yan Liu | accept | The paper starts with the likelihood and inference model used in variational auto-encoders and studies if and when a joint distribution can be modeled using these two conditionals. The paper is a nice to read and brings to light many of the results on conditionally specified models to the VAE community. I also appreciated a practical implementation with cross-derivatives to encourage the two conditional distributions to be compatible with a joint distribution. A couple of comments
- I don't think the full Arnold book was cited. It has more results and should be integrated into the narrative
@book{arnold2012conditionally,
title={Conditionally specified distributions},
author={Arnold, Barry C and Castillo, Enrique and Alegria, Jose-Maria Sarabia},
volume={73},
year={2012},
publisher={Springer Science \& Business Media}
}
- Edit the paper to frame and highlight the practical implications of the results | train | [
"KZ2G0fjH35k",
"-IyzkyiKSjy",
"Gzi-yRWvIfH",
"JoGaXEMnLWh",
"KveDhYBsG1",
"KmbTNzmiWWD"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Notes: \n -Model the joint p(x,z) by modeling p(x|z) and q(z|x) to form a cycle. \n -Many existing techniques use an uninformative prior p(z). \n -Compatibility and determinacy criterions. \n -Use parameterized density models p(x|z) and q(z|x). \n -Define r(x,z) = log(p(x|z) / q(z|x)) and make the loss th... | [
6,
-1,
-1,
-1,
8,
6
] | [
2,
-1,
-1,
-1,
3,
3
] | [
"nips_2021_dPdrrr-YrgX",
"KmbTNzmiWWD",
"KZ2G0fjH35k",
"KveDhYBsG1",
"nips_2021_dPdrrr-YrgX",
"nips_2021_dPdrrr-YrgX"
] |
nips_2021_nz2iUi-iZLQ | Structural Credit Assignment in Neural Networks using Reinforcement Learning | Structural credit assignment in neural networks is a long-standing problem, with a variety of alternatives to backpropagation proposed to allow for local training of nodes. One of the early strategies was to treat each node as an agent and use a reinforcement learning method called REINFORCE to update each node locally with only a global reward signal. In this work, we revisit this approach and investigate if we can leverage other reinforcement learning approaches to improve learning. We first formalize training a neural network as a finite-horizon reinforcement learning problem and discuss how this facilitates using ideas from reinforcement learning like off-policy learning. We show that the standard on-policy REINFORCE approach, even with a variety of variance reduction approaches, learns suboptimal solutions. We introduce an off-policy approach, to facilitate reasoning about the greedy action for other agents and help overcome stochasticity in other agents. We conclude by showing that these networks of agents can be more robust to correlated samples when learning online.
| accept | This paper started out with a strong accept and 3 weak rejects, but then converged to 7-6-6-5 (where the reviewer with 5 has a short review and is less engaged than the other 3 reviewers). I recommend weak acceptance of the paper, primarily because the novelty/insights of the paper outweigh the immaturity of the presentation and the lack of a pragmatic result (outperforming backprop). The formulation could be impactful for further research, and this paper lays out sufficient foundations for it. However, all reviewers agree that the presentation clarity must be substantially improved, and therefore the authors must follow through and incorporate the reviewer suggestions for the final version. | train | [
"G_ZgXRphe0j",
"0W5mu8-psjh",
"yllDRheKoOL",
"jvkquioTeKF",
"DfEeq9GB2cn",
"2e-VzCdFxKn",
"OzHbUpu_eun",
"fqAdurhdE4",
"gsgsznl4p5n",
"ID6nSOzVyLU",
"sEx6FzR9n1"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"- This paper revisits an interesting (and truly under-explored) idea from the reinforcement-learning literature: coagent networks [43,44]. The original work by Philip Thomas considered endowing the standard agent, that maps from states to actions, with a richer structure whereby the state can be mapped by multiple... | [
7,
-1,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
4,
-1,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"nips_2021_nz2iUi-iZLQ",
"gsgsznl4p5n",
"nips_2021_nz2iUi-iZLQ",
"nips_2021_nz2iUi-iZLQ",
"OzHbUpu_eun",
"fqAdurhdE4",
"jvkquioTeKF",
"sEx6FzR9n1",
"yllDRheKoOL",
"G_ZgXRphe0j",
"nips_2021_nz2iUi-iZLQ"
] |
nips_2021_HjFtRc83eBB | A Near-Optimal Algorithm for Stochastic Bilevel Optimization via Double-Momentum | Prashant Khanduri, Siliang Zeng, Mingyi Hong, Hoi-To Wai, Zhaoran Wang, Zhuoran Yang | accept | Large-scale bi-level and multi-level optimization has several emerging applications (e.g., robotics) where optimizers are themselves embedded in end-to-end deep learning pipelines as layers. Bilevel problems where the lower-level subproblem is strongly convex and the upper-level objective function is smooth are an interesting special case amenable to complexity-bound analyses for reaching a stationary point. As such, the contributions of this paper should interest both numerical optimization and application communities. The reviews ask for a few clarifications in the final revision. | test | [
"f5kYEk64mc8",
"7BnYQUhQUxe",
"Zk7Q8y7BjTp",
"g3iQC0OpDcU",
"hM3fZtH2JGg",
"tab32hizGc",
"slLQiWL9CE",
"fUUEzYMTshI",
"kpTTRirNXja",
"dsxxq4b65p1",
"lBpOkmcqji6",
"gFcvpdk1dp6",
"KehEOhh9alN"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer is correct that the **bounded assumption of the stochastic second-order gradient** of $g$ is not explicitly stated as a separate assumption in [14] and [18], however, it is mentioned in the second paragraph after Assumption 3 (on Page 3) of [14]. Precisely, the authors state that they make the same a... | [
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"7BnYQUhQUxe",
"Zk7Q8y7BjTp",
"hM3fZtH2JGg",
"dsxxq4b65p1",
"fUUEzYMTshI",
"nips_2021_HjFtRc83eBB",
"KehEOhh9alN",
"gFcvpdk1dp6",
"tab32hizGc",
"lBpOkmcqji6",
"nips_2021_HjFtRc83eBB",
"nips_2021_HjFtRc83eBB",
"nips_2021_HjFtRc83eBB"
] |
nips_2021_TiwPYwg3IRf | Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels | Prior works have found it beneficial to combine provably noise-robust loss functions, e.g., mean absolute error (MAE), with standard categorical loss functions, e.g., cross entropy (CE), to improve their learnability. Here, we propose to use Jensen-Shannon divergence as a noise-robust loss function and show that it interestingly interpolates between CE and MAE with a controllable mixing parameter. Furthermore, we make a crucial observation that CE exhibits lower consistency around noisy data points. Based on this observation, we adopt a generalized version of the Jensen-Shannon divergence for multiple distributions to encourage consistency around data points. Using this loss function, we show state-of-the-art results on both synthetic (CIFAR) and real-world (e.g., WebVision) noise with varying noise rates.
| accept | The reviewers deemed the paper of interest. I would like to thank the authors for providing focused discussion content for both the theory and the experiments, which helped further explain the paper's content and its improvements with respect to its submission history.
In the revised version, the authors should include the generalisation of Theorem 1 proposed at the discussion phase to cover asymmetric noise; in the experiments, find a place to report consistency and instance-based label noise; add a short discussion vs Wei + Liu's approach as developed in the discussion (point (12) 6kDX); and complete the references to be more extensive. This is extremely important given the context of the paper and the richness of the relevant literature in the past decade or so.
AC. | test | [
"GifS1GtwU2",
"iN8fuPmZbj1",
"nG7jRJBEQ01",
"C6XsV5NJvPk",
"pN1IO1Zdsf3",
"JSbmH7MNt0N",
"SP8LN1U97by",
"-6h-3uys-NE",
"asAwnx_8c6Y",
"Vb-HEqxAcnd",
"vOYk-ZW-Swi",
"5VJ-sEkI9Gi",
"0e_ZwDx3rcp",
"r9PipW6Zx6P"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper considers learning with noisy labels. The author showed that the\nJensen-Shannon (JS) divergence sits in between the mean absolute error (MSE) and\nthe cross entropy (CS). The key proposition is to use the generalized JS (GJS)\nas the loss, which presents a trade-off between robustness with respect to... | [
6,
-1,
-1,
-1,
7,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
3,
-1,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_TiwPYwg3IRf",
"pN1IO1Zdsf3",
"GifS1GtwU2",
"-6h-3uys-NE",
"nips_2021_TiwPYwg3IRf",
"Vb-HEqxAcnd",
"nips_2021_TiwPYwg3IRf",
"nips_2021_TiwPYwg3IRf",
"SP8LN1U97by",
"asAwnx_8c6Y",
"r9PipW6Zx6P",
"pN1IO1Zdsf3",
"GifS1GtwU2",
"nips_2021_TiwPYwg3IRf"
] |
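For the GJS record above, a minimal numpy sketch of the generalized Jensen-Shannon divergence among multiple distributions that the abstract adopts: the one-hot label serves as one distribution and predictions on augmented views of the input serve as the others. The weights are illustrative, and the paper's exact scaling and training details are not reproduced.

```python
import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=-1)

def gjs(distributions, weights):
    """Generalized JS divergence: H(sum_i w_i p_i) - sum_i w_i H(p_i).
    With two distributions and weights (pi, 1 - pi) this recovers the
    weighted JS divergence that interpolates between CE- and MAE-like losses."""
    P = np.stack(distributions)                 # (M, num_classes)
    w = np.asarray(weights)
    mixture = np.sum(w[:, None] * P, axis=0)
    return entropy(mixture) - np.sum(w * entropy(P))

# One-hot label plus two predictions on different augmentations of the input.
label = np.array([1.0, 0.0, 0.0])
pred_a = np.array([0.6, 0.3, 0.1])
pred_b = np.array([0.7, 0.2, 0.1])
print(gjs([label, pred_a, pred_b], weights=[0.5, 0.25, 0.25]))
```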
nips_2021_LJjC6DmSkgT | Continual Learning via Local Module Composition | Modularity is a compelling solution to continual learning (CL), the problem of modeling sequences of related tasks. Learning and then composing modules to solve different tasks provides an abstraction to address the principal challenges of CL including catastrophic forgetting, backward and forward transfer across tasks, and sub-linear model growth. We introduce local module composition (LMC), an approach to modular CL where each module is provided a local structural component that estimates a module's relevance to the input. Dynamic module composition is performed layer-wise based on local relevance scores. We demonstrate that agnosticity to task identities (IDs) arises from (local) structural learning that is module-specific, as opposed to the task- and/or model-specific structural learning of previous works, making LMC applicable to more CL settings than previous works. In addition, LMC also tracks statistics about the input distribution and adds new modules when outlier samples are detected. In the first set of experiments, LMC performs favorably compared to existing methods on the recent Continual Transfer-learning Benchmark without requiring task identities. In another study, we show that the locality of structural learning allows LMC to interpolate to related but unseen tasks (OOD), as well as to compose modular networks trained independently on different task sequences into a third modular network without any fine-tuning. Finally, in search of limitations of LMC, we study it on more challenging sequences of 30 and 100 tasks, demonstrating that local module selection becomes much more challenging in the presence of a large number of candidate modules. In this setting, the best-performing LMC spawns far fewer modules than an oracle-based baseline; however, it reaches a lower overall accuracy. The codebase is available under https://github.com/oleksost/LMC.
| accept | This paper originally received three borderline positive reviews but without high confidence, and so a fourth review was solicited after the regular review period from a reviewer who is extremely knowledgeable on the topic. From this fourth review, it is clear that this paper should be accepted.
This paper focuses on continual learning of modular representations in task-agnostic settings, develops an approach that addresses this challenging, important, and understudied problem, and performs an extensive empirical analysis on a variety of well-chosen baselines.
The authors should be aware that there are some unclear parts, identified in the reviews, that even the most expert of the reviewers found difficult to follow at times. These need to be clarified for the final version. | train | [
"ABU_B8rCqha",
"Kv1xWUpkmjp",
"WjXTE7PhbGo",
"vJFnnWTgGuA",
"k91Q_y_AnqH",
"ZRc3RBYUt3Z",
"ocRWpfb4HI_",
"n_maaT0ioc4",
"D1q57hNsnXG",
"OP-fga4oP_H",
"M7HkYZwJRSb"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"The paper proposes a new algorithm for addressing continual learning (CL) by leveraging modular architectures. In contrast to (the few) existing modular CL works, this one focuses on the task-agnostic setting, where the agent is not informed of the task ID. The proposed architecture assigns two components to each ... | [
7,
-1,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1,
6
] | [
5,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
3
] | [
"nips_2021_LJjC6DmSkgT",
"D1q57hNsnXG",
"nips_2021_LJjC6DmSkgT",
"nips_2021_LJjC6DmSkgT",
"n_maaT0ioc4",
"nips_2021_LJjC6DmSkgT",
"vJFnnWTgGuA",
"ZRc3RBYUt3Z",
"M7HkYZwJRSb",
"ocRWpfb4HI_",
"nips_2021_LJjC6DmSkgT"
] |
nips_2021_QniKgFi9GyB | Model-Based Episodic Memory Induces Dynamic Hybrid Controls | Episodic control enables sample efficiency in reinforcement learning by recalling past experiences from an episodic memory. We propose a new model-based episodic memory of trajectories addressing current limitations of episodic control. Our memory estimates trajectory values, guiding the agent towards good policies. Built upon the memory, we construct a complementary learning model via a dynamic hybrid control unifying model-based, episodic and habitual learning into a single architecture. Experiments demonstrate that our model allows significantly faster and better learning than other strong reinforcement learning agents across a variety of environments including stochastic and non-Markovian settings.
| accept | This paper improves on recent works that leverage both episodic and habitual learning by combining state-action episodic memories with parametric value functions like Deep Q-Network.
The strengths of the paper include: the idea is compelling from a theoretical/neuroscience standpoint (i.e., complementary learning systems); the proposed trajectory embedding and memory mechanisms are novel; the proposed dynamic weighting between memory-based and parametric value estimates is also novel; and the performance is strong. On the other hand, the reviewers have concerns regarding the complexity of the proposed approach.
| train | [
"v4uo1Ez5VpS",
"cGMrs9UKTIo",
"XXB83sSDiT0",
"qKAcDO62e_p",
"OjX4V0itNkb",
"3CGRjDLnBO",
"XZs0dXx6FYe",
"N-a3o199SRm",
"cKlS0JyGBj",
"0uT_J75s7oK",
"TxlK-LOR2SU",
"MxWHPHy4hOv",
"HoRnVjLfNCR",
"k-OhdosLtc",
"88SJM_SfsMs",
"P44rEsYoRc_",
"6gU_DxdOtju",
"NhdoWeUYhRY",
"rL8GHSjC3Rf"... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"a... | [
" Thank you for your detailed responses and engagement throughout the discussion period. I think including the additional results you shared here and the justification given for the chosen baselines in response to my review and further questioning will improve the paper. Ideally of course the model based baselines ... | [
-1,
-1,
-1,
7,
-1,
7,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
-1,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"k-OhdosLtc",
"OjX4V0itNkb",
"XZs0dXx6FYe",
"nips_2021_QniKgFi9GyB",
"cigzsmEm5HH",
"nips_2021_QniKgFi9GyB",
"HoRnVjLfNCR",
"TxlK-LOR2SU",
"MxWHPHy4hOv",
"nips_2021_QniKgFi9GyB",
"rL8GHSjC3Rf",
"88SJM_SfsMs",
"P44rEsYoRc_",
"6gU_DxdOtju",
"nips_2021_QniKgFi9GyB",
"kIU04Jzy6Ni",
"Nhdo... |
nips_2021_SkDYNXUM4xZ | FedDR – Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization | We develop two new algorithms, called FedDR and asyncFedDR, for solving a fundamental nonconvex composite optimization problem in federated learning. Our algorithms rely on a novel combination between a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. They can also handle convex regularizers. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, and possibly in an asynchronous manner, making them more practical. These new algorithms can handle statistical and system heterogeneity, which are the two main challenges in federated learning, while achieving the best known communication complexity. In fact, our new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of our methods over existing algorithms on synthetic and real datasets.
| accept | Federated learning algorithms based on operator splitting or ADMM are more sophisticated than simple algorithms such as FedAvg or FedProx, and they bring more flexibility and rigor to algorithm design. The proposed FedDR method and its asynchronous versions extend the classical Douglas-Rachford splitting method, combining it with randomized block-coordinate updates, to the federated learning setting with nonconvex loss functions.
The idea of applying operator splitting methods to federated learning has been proposed in a few recent works, but the reviewers are supportive of the generality and technical results obtained in this paper. They make a solid contribution to the Federated Learning literature. | test | [
"cXHuFxoYPs6",
"S9WHTCopXQI",
"YdxH_Ra-AAF",
"v-unzTO99P1",
"9LzD65A28Pc",
"A3zDW1J3xZ",
"a6oeMGk9IbR",
"FouvPfvjO3r",
"puNeytBiTMI",
"2j3SRpI7EIw",
"l5NNGejhif",
"MiIIEx3tGD3",
"k4LZDQgDCfL",
"_Xf8orQTFOO",
"rqacBhEqni8"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the authors' response. While I agree that there are certain technical innovations in this work, the idea of the Douglas Rachford Envelope is not a new one -- and is in particular something that has been employed in prior works analyzing DR type algorithms under non-convex (e.g. hypo convex) problems. ... | [
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
6
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3
] | [
"S9WHTCopXQI",
"MiIIEx3tGD3",
"9LzD65A28Pc",
"nips_2021_SkDYNXUM4xZ",
"2j3SRpI7EIw",
"puNeytBiTMI",
"FouvPfvjO3r",
"l5NNGejhif",
"rqacBhEqni8",
"v-unzTO99P1",
"_Xf8orQTFOO",
"k4LZDQgDCfL",
"nips_2021_SkDYNXUM4xZ",
"nips_2021_SkDYNXUM4xZ",
"nips_2021_SkDYNXUM4xZ"
] |
nips_2021_DE8MOQIgFTK | Adversarial Examples Make Strong Poisons | The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data. In this work, we show that adversarial examples, originally intended for attacking pre-trained models, are even more effective for data poisoning than recent methods designed specifically for poisoning. In fact, adversarial examples with labels re-assigned by the crafting network remain effective for training, suggesting that adversarial examples contain useful semantic content, just with the "wrong" labels (according to a network, but not a human). Our method, adversarial poisoning, is substantially more effective than existing poisoning methods for secure dataset release, and we release a poisoned version of ImageNet, ImageNet-P, to encourage research into the strength of this form of data obfuscation.
| accept | This paper presents a technique to poison datasets with adversarial examples. This is an interesting and new direction, and the results are (often an order of magnitude) more effective than prior work that relied on much more sophisticated techniques. While the reviewers are concerned about the tone of the paper over-claiming its originality (and I would encourage the authors to compare to the prior work the reviewers have pointed out), the results are sufficiently strong to merit publication.
| train | [
"TDhe2-93y3M",
"oFRcpNkMYO",
"cBBijufG_rU",
"iPVIIqncZfW",
"wcfOSgpcSqa",
"KZmerGIBFFr",
"47jRDCJvOWP",
"Vgs_DLtiVnV",
"_DsUJhksWF_",
"ovhgShBIdvm",
"HGBtvnU7ojF",
"hxBpBqVGDlX",
"zPHXMIZk5lN",
"0RXnqrdxPZ4",
"j97t98JIYB",
"vh5G9W-xp1s",
"endx0dqj4Yh"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. \n\n>It turns out that at least three strong factors emerge during the rebuttal:\n\nYes, it does indeed appear that those are the factors that are key for a strong untargeted attack. We agree that this is important to make clear in the main body, especially because it shines new light... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
6
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
2
] | [
"cBBijufG_rU",
"nips_2021_DE8MOQIgFTK",
"iPVIIqncZfW",
"wcfOSgpcSqa",
"KZmerGIBFFr",
"47jRDCJvOWP",
"Vgs_DLtiVnV",
"_DsUJhksWF_",
"HGBtvnU7ojF",
"endx0dqj4Yh",
"oFRcpNkMYO",
"j97t98JIYB",
"vh5G9W-xp1s",
"j97t98JIYB",
"nips_2021_DE8MOQIgFTK",
"nips_2021_DE8MOQIgFTK",
"nips_2021_DE8MOQ... |
nips_2021_GvU4RvMwlGo | Coresets for Decision Trees of Signals | Ibrahim Jubran, Ernesto Evgeniy Sanches Shayda, Ilan Newman, Dan Feldman | accept | The reviewers quite liked the first coreset for this fundamental machine learning model - decision trees - and the technical ingredients and ideas involved in the proof. The reviewers also felt the experimental results were solid. The only suggestions for improvement were to clarify why this is the minimal assumption needed to skirt the lower bounds, and to add more intuition to the paper. | train | [
"Kn2kmrnaA_",
"6vXKVLfJx2V",
"cF57qbCb4B1",
"53bAm3lrv72",
"55toGol2eGw",
"O9IJVN2EK1A"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work creates coresets for axis aligned decision trees. A decision tree is defined as a partition of the data (set of vectors + label) into axis-parallel rectangles such that all vectors in the same leaf get the same predicted label, and the cost of the tree is defined in terms of some loss (MSE in current pap... | [
7,
-1,
-1,
-1,
7,
6
] | [
4,
-1,
-1,
-1,
3,
4
] | [
"nips_2021_GvU4RvMwlGo",
"O9IJVN2EK1A",
"55toGol2eGw",
"Kn2kmrnaA_",
"nips_2021_GvU4RvMwlGo",
"nips_2021_GvU4RvMwlGo"
] |
nips_2021_Yu8Q6341U7W | Local plasticity rules can learn deep representations using self-supervised contrastive predictions | Learning in the brain is poorly understood and learning rules that respect biological constraints, yet yield deep hierarchical representations, are still unknown. Here, we propose a learning rule that takes inspiration from neuroscience and recent advances in self-supervised deep learning. Learning minimizes a simple layer-specific loss function and does not need to back-propagate error signals within or between layers. Instead, weight updates follow a local, Hebbian, learning rule that only depends on pre- and post-synaptic neuronal activity, predictive dendritic input and widely broadcasted modulation factors which are identical for large groups of neurons. The learning rule applies contrastive predictive learning to a causal, biological setting using saccades (i.e. rapid shifts in gaze direction). We find that networks trained with this self-supervised and local rule build deep hierarchical representations of images, speech and video.
| accept | This paper proposes a biologically plausible learning algorithm that implements unsupervised (contrastive) learning. The algorithm can learn hierarchical representations. A particular innovation is the use of saccades to generate contrastive samples. Reviewers all agree that the contribution is novel and significant, the manuscript is well-written, and the computational experiments are convincing. | val | [
"bJo-_4h3Nd",
"LOTjP_FyRz",
"woJAXINvE0F",
"tU2Kbu2i2d",
"xZuMFS0UETd",
"0C8GL5IOj1J",
"w7vgoxpNOE3",
"rpZ0PSNoZgH",
"Sz006aB6d_0",
"q5ohaIZL749",
"s71YVLfDfce",
"mWCpNGcGm5N",
"naBjlq22a5J",
"YdeVJCkvL-U"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the detailed response, particularly for suggesting a mechanism for computing the surprise signal. I believe acknowledging this challenge and discussing potential solutions will greatly improve your manuscript. I don't think anyone sane would expect you to get all the details right at the first try, ... | [
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
8
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"Sz006aB6d_0",
"tU2Kbu2i2d",
"nips_2021_Yu8Q6341U7W",
"xZuMFS0UETd",
"w7vgoxpNOE3",
"q5ohaIZL749",
"woJAXINvE0F",
"woJAXINvE0F",
"naBjlq22a5J",
"mWCpNGcGm5N",
"YdeVJCkvL-U",
"nips_2021_Yu8Q6341U7W",
"nips_2021_Yu8Q6341U7W",
"nips_2021_Yu8Q6341U7W"
] |
nips_2021_tUDO2N40Kd | MobTCast: Leveraging Auxiliary Trajectory Forecasting for Human Mobility Prediction | Human mobility prediction is a core functionality in many location-based services and applications. However, due to the sparsity of mobility data, it is not an easy task to predict future POIs (place-of-interests) that are going to be visited. In this paper, we propose MobTCast, a Transformer-based context-aware network for mobility prediction. Specifically, we explore the influence of four types of context in mobility prediction: temporal, semantic, social, and geographical contexts. We first design a base mobility feature extractor using the Transformer architecture, which takes both the history POI sequence and the semantic information as input. It handles both the temporal and semantic contexts. Based on the base extractor and the social connections of a user, we employ a self-attention module to model the influence of the social context. Furthermore, unlike existing methods, we introduce a location prediction branch in MobTCast as an auxiliary task to model the geographical context and predict the next location. Intuitively, the geographical distance between the location of the predicted POI and the predicted location from the auxiliary branch should be as close as possible. To reflect this relation, we design a consistency loss to further improve the POI prediction performance. In our experimental results, MobTCast outperforms other state-of-the-art next POI prediction methods. Our approach illustrates the value of including different types of context in next POI prediction.
| accept | This paper proposes a transformer-based predictor for future places of interest. Its initial scores were 7, 6, 6, and 5, and they did not change during the discussion. The paper presents good empirical results. However, the reviewers were concerned that it is not very technically deep. I read the paper and see both positives and negatives. Let me start with the positives:
1. Experiments in Table 1 are impressive. A lot of variability in baselines. Three datasets and top-$K$ for various $K$.
2. The ablation study in Table 2 shows that MobTCast performs well even without the social context and the losses $L_2$ and $L_3$. In short, the loss $L_1$ is enough to be comparable to or better than the state of the art. This indicates that the improvement is due to better sequence modeling using the transformer.
Now the negatives:
1. I am not sure what the lesson learned from this paper is. How would somebody who uses transformers benefit from this result? The paper is indeed not very deep.
2. I am not 100% sure that MobTCast improves over the state of the art because of the transformer. In particular, do any of the baselines use the semantic context in Section 4.1? If not, the observed improvement may be simply because of better features.
In my opinion, this paper should be judged as an application paper, and it clears the bar because of good empirical results. I strongly suggest that the authors take my comments into account, in addition to the detailed comments of the reviewers. Congratulations! | train | [
"7YNlJCyRZSs",
"hMtB1FcJfRl",
"DfJb_keVo8_",
"XLbZVeI0B2i",
"gcTYIk0LPQ1",
"w_ji4Mt_Lyn",
"kQDVL6N0mDT",
"K2qipFv9NNh",
"bMr6BkGXTV"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for taking the time to write a rebuttal to address my comments.",
" We thank the reviewer for the very valuable, detailed and constructive feedback. Below, we include an answer to the provided comments, questions and suggestions. We hope that the reviewer will find the answers below according to expectat... | [
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
5,
4
] | [
"gcTYIk0LPQ1",
"w_ji4Mt_Lyn",
"kQDVL6N0mDT",
"K2qipFv9NNh",
"bMr6BkGXTV",
"nips_2021_tUDO2N40Kd",
"nips_2021_tUDO2N40Kd",
"nips_2021_tUDO2N40Kd",
"nips_2021_tUDO2N40Kd"
] |
nips_2021_Lpfh1Bpqfk | Early Convolutions Help Transformers See Better | Vision transformer (ViT) models exhibit substandard optimizability. In particular, they are sensitive to the choice of optimizer (AdamW vs. SGD), optimizer hyperparameters, and training schedule length. In comparison, modern convolutional neural networks are easier to optimize. Why is this the case? In this work, we conjecture that the issue lies with the patchify stem of ViT models, which is implemented by a stride-p p×p convolution (p = 16 by default) applied to the input image. This large-kernel plus large-stride convolution runs counter to typical design choices of convolutional layers in neural networks. To test whether this atypical design choice causes an issue, we analyze the optimization behavior of ViT models with their original patchify stem versus a simple counterpart where we replace the ViT stem by a small number of stacked stride-two 3×3 convolutions. While the vast majority of computation in the two ViT designs is identical, we find that this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. Using a convolutional stem in ViT dramatically increases optimization stability and also improves peak performance (by ∼1-2% top-1 accuracy on ImageNet-1k), while maintaining flops and runtime. The improvement can be observed across the wide spectrum of model complexities (from 1G to 36G flops) and dataset scales (from ImageNet-1k to ImageNet-21k). These findings lead us to recommend using a standard, lightweight convolutional stem for ViT models in this regime as a more robust architectural choice compared to the original ViT model design.
| accept | Reviewers agree after the authors' response that this paper is a (strong) accept (7,9,7). The paper doesn't have any major weak points. The message is simple and clear, and the contribution is thorough. I don't consider the implications of the result worthy of a spotlight though. | train | [
"RQW_XO8ctFP",
"BzJeaJE-oYO",
"Yij8LMmhEaQ",
"lD3hbdRU4kt",
"Ry2sk_qnwYe",
"OPdsz5ByxGa",
"5EX43nP3-JJ",
"nYT5UZb5BZa"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies the impact of stem in ViT. Specifically, the authors propose to replace the patch-based stem in ViT with a standard, lightweight convolutional stem, and show that this simple revision leads to substantially improved stability wrt the choices of optimizers and several other hyper-parameters. The r... | [
7,
-1,
-1,
-1,
-1,
-1,
7,
9
] | [
4,
-1,
-1,
-1,
-1,
-1,
4,
5
] | [
"nips_2021_Lpfh1Bpqfk",
"lD3hbdRU4kt",
"Ry2sk_qnwYe",
"RQW_XO8ctFP",
"nYT5UZb5BZa",
"5EX43nP3-JJ",
"nips_2021_Lpfh1Bpqfk",
"nips_2021_Lpfh1Bpqfk"
] |
nips_2021_dSqtddFibt2 | Error Compensated Distributed SGD Can Be Accelerated | Gradient compression is a recent and increasingly popular technique for reducing the communication cost in distributed training of large-scale machine learning models. In this work we focus on developing efficient distributed methods that can work for any compressor satisfying a certain contraction property, which includes both unbiased (after appropriate scaling) and biased compressors such as RandK and TopK. Applied naively, gradient compression introduces errors that either slow down convergence or lead to divergence. A popular technique designed to tackle this issue is error compensation/error feedback. Due to the difficulties associated with analyzing biased compressors, it is not known whether gradient compression with error compensation can be combined with acceleration. In this work, we show for the first time that error compensated gradient compression methods can be accelerated. In particular, we propose and study the error compensated loopless Katyusha method, and establish an accelerated linear convergence rate under standard assumptions. We show through numerical experiments that the proposed method converges with substantially fewer communication rounds than previous error compensated algorithms.
| accept | Overall, this paper addresses an important open problem. It is well-written, and solid theoretical results are complemented by distinct advantages in simulation experiments.
Based on my own reading, and the comments of the reviewers, I recommend acceptance.
There are some weak points that should be improved in the camera-ready version.
The abstract and part of the introduction are a little bit misleading, referring to Nesterov’s acceleration (easily mistaken for Nesterov’s accelerated gradient method), while the algorithm later uses Katyusha (another acceleration technique).
The writing is a bit too dense, with little exposition on how the algorithm works and how it is related to previous work. In the response to the reviewers, you claim that this is not an issue and it does not matter if you put material in the main text or the appendix. I disagree. The main text should be complete and accessible to a broader audience than researchers in error-compensated training algorithms. I believe that parts of the related work from the appendix need to be moved into the main text and that you need to expand on how the algorithm works, and how its parameters are selected. In the response to the reviewers, you share such intuition and describe parameter tuning techniques. This knowledge should also make it into the main document.
Finally, I also believe that you can clarify in what environment you perform your experiments. I see the main contribution of this paper as mainly theoretical, but you do talk about “nodes” in your experiments, and “send” and “receive” in your algorithm. To me, it is perfectly fine if you have run the experiments in a Python/Matlab/C++… simulation where send and receive operations are instantaneous and error-free. But you should be clear about this in the paper. | train | [
"rbAS-wPXxZ",
"tWyOTdZlWM2",
"VBsa2aOsT4a",
"H6zqtgg-1Dy",
"fqNkT7Z0IpU",
"LRfJwz1cVFT",
"no3KeWB1pHl",
"fzHzHhJkmv",
"iWq9MWmYWe",
"V7vEGaW6ARb",
"FtW6MEboqsQ",
"Av7vSR9pQyu",
"iCSyec6my4N",
"ypJTwHVNGf",
"20EzY-8Zeq4",
"xolhu1OvqR9",
"TB0nuUKx6dY",
"Rktszh-_jTj",
"uO7sTBq3sZ",
... | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"autho... | [
" Dear Reviewer iBtS,\n\nAre you satisfied with our response to the 2 issues you raised?\n\n- If yes, please would you consider raising your score appropriately?\n- If not, please let us known which issues were not clarified sufficiently and why, so that we have a chance to respond.\n\nBest regards,\n\nAuthors",
... | [
-1,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
5
] | [
-1,
4,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"xolhu1OvqR9",
"nips_2021_dSqtddFibt2",
"nips_2021_dSqtddFibt2",
"xolhu1OvqR9",
"xolhu1OvqR9",
"Rktszh-_jTj",
"uO7sTBq3sZ",
"Qf21RT4ozPi",
"SbQyWDp5zAu",
"xolhu1OvqR9",
"Rktszh-_jTj",
"uO7sTBq3sZ",
"Qf21RT4ozPi",
"SbQyWDp5zAu",
"2iYfh49rDYm",
"2iYfh49rDYm",
"2iYfh49rDYm",
"VBsa2aOs... |
nips_2021_519VBzfEaKW | InfoGCL: Information-Aware Graph Contrastive Learning | Various graph contrastive learning models have been proposed to improve the performance of tasks on graph datasets in recent years. While effective and prevalent, these models are usually carefully customized. In particular, although all recent works create two contrastive views, they differ in a variety of view augmentations, architectures, and objectives. It remains an open question how to build a graph contrastive learning model from scratch for particular graph tasks and datasets. In this work, we aim to fill this gap by studying how graph information is transformed and transferred during the contrastive learning process, and proposing an information-aware graph contrastive learning framework called InfoGCL. The key to the success of the proposed framework is to follow the Information Bottleneck principle to reduce the mutual information between contrastive parts while keeping task-relevant information intact at both the level of the individual module and that of the entire framework, so that the information loss during graph representation learning can be minimized. We show for the first time that all recent graph contrastive learning methods can be unified by our framework. Based on theoretical and empirical analysis on benchmark graph datasets, we show that InfoGCL achieves state-of-the-art performance in the settings of both graph classification and node classification tasks.
| accept | The paper proposes a systematic approach to contrastive learning on graphs which builds on ideas from information bottleneck theory and unifies multiple graph representation learning methods in a single framework. The paper is well written and relevant to the NeurIPS community. All reviewers support its acceptance, especially due to the novel and promising approach which unifies existing methods and provides new perspectives for graph learning. Reviewers also highlighted that the approach is technically sound, that the claims in the paper are theoretically supported, and that the experimental results are convincing. Please take the feedback on concerns and limitations from reviewers into account when preparing the camera-ready version of the manuscript. | train | [
"jeTg_umYE-3",
"YHloR5qr5e",
"vrWIrhPqRB",
"YIBfjHphVfF",
"cnMkL9CQWaL",
"szxKsTA5rZF",
"R9jw1bdVc5"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I have read the author's replies for my review concerns as well as other reviews and discussions. The authors generally addressed my questions and provided a satisfactory response to my comment. I would stick to my current score evaluation considering the innovation, significance, and clarity.",
" Thank you for... | [
-1,
-1,
-1,
-1,
7,
7,
7
] | [
-1,
-1,
-1,
-1,
2,
4,
4
] | [
"vrWIrhPqRB",
"R9jw1bdVc5",
"szxKsTA5rZF",
"cnMkL9CQWaL",
"nips_2021_519VBzfEaKW",
"nips_2021_519VBzfEaKW",
"nips_2021_519VBzfEaKW"
] |
nips_2021_NBpwZs6sm2 | Meta-Learning for Relative Density-Ratio Estimation | The ratio of two probability densities, called a density-ratio, is a vital quantity in machine learning. In particular, a relative density-ratio, which is a bounded extension of the density-ratio, has received much attention due to its stability and has been used in various applications such as outlier detection and dataset comparison. Existing methods for (relative) density-ratio estimation (DRE) require many instances from both densities. However, sufficient instances are often unavailable in practice. In this paper, we propose a meta-learning method for relative DRE, which estimates the relative density-ratio from a few instances by using knowledge in related datasets. Specifically, given two datasets that consist of a few instances, our model extracts the datasets' information by using neural networks and uses it to obtain instance embeddings appropriate for the relative DRE. We model the relative density-ratio by a linear model on the embedded space, whose global optimum solution can be obtained as a closed-form solution. The closed-form solution enables fast and effective adaptation to a few instances, and its differentiability enables us to train our model such that the expected test error for relative DRE can be explicitly minimized after adapting to a few instances. We empirically demonstrate the effectiveness of the proposed method by using three problems: relative DRE, dataset comparison, and outlier detection.
| accept | The reviews are mostly positive and the general sentiment is that the meta-learning approach (for estimating density ratios when there are few instances) is well-motivated and the results encouraging. There were some concerns about a lack of theoretical underpinnings (Vnfc), the method being limited in terms of applications (Q8XR), and the experiments being artificial (r99y). The authors need to address the last two concerns in a revision, and include the additional experimental support in, e.g., the Supplementary Materials. | train | [
"Wh_yaGCVHCY",
"1ac6i_WGlPV",
"oW77O6_zxm",
"XbKlUUs8QwH",
"rJ8xzD8VWt",
"Yg_N9WEAFL5",
"baUUYp-Xijn",
"XItPgIsun0g",
"5_JMNS5bkx",
"k0A24el0ZV",
"2EdVlO8-cPp"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper shows how one can estimate density ratios when very few data points of the two sets of interested are available. In order to do that, the idea is to leverage other datasets that contain related tasks. The ratio is estimated via a linear model to a latent space that represents each dataset. In general, t... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"nips_2021_NBpwZs6sm2",
"oW77O6_zxm",
"rJ8xzD8VWt",
"baUUYp-Xijn",
"Wh_yaGCVHCY",
"2EdVlO8-cPp",
"k0A24el0ZV",
"5_JMNS5bkx",
"nips_2021_NBpwZs6sm2",
"nips_2021_NBpwZs6sm2",
"nips_2021_NBpwZs6sm2"
] |
nips_2021_9vur8-1GU7V | Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning | As annotations of data can be scarce in large-scale practical problems, leveraging unlabelled examples is one of the most important aspects of machine learning. This is the aim of semi-supervised learning. To benefit from the access to unlabelled data, it is natural to smoothly diffuse knowledge from labelled data to unlabelled data. This motivates the use of Laplacian regularization. Yet, current implementations of Laplacian regularization suffer from several drawbacks, notably the well-known curse of dimensionality. In this paper, we design a new class of algorithms overcoming this issue, unveiling a large body of spectral filtering methods. Additionally, we provide a statistical analysis showing that our estimators exhibit desirable behaviors. They are implemented through (reproducing) kernel methods, for which we provide realistic computational guidelines in order to make our method usable with large amounts of data.
| accept | The paper received positive reviews. However, only 3 out of 4 reviewers submitted a review, and therefore the AC very carefully read the paper on their own. They do share the positive impressions of the reviewers regarding novelty and they also think that for somebody familiar with the topic this paper is very accessible, while for others it is certainly a hard read. Their major concern regarding the paper is Assumption 1, which unfortunately has not been seen as an issue during the review/rebuttal phase. The main problem they see with Assumption 1 is that it is very restrictive, in fact much more restrictive than a usual source condition, as it explicitly assumes that the target function can be expressed by a finite number of eigenfunctions of the Laplacian, whereas "normal" source conditions are usually phrased in terms of an essentially polynomial decay of the Fourier coefficients. Even worse, the main result, Theorem 1, does not specify the dependence on the number of required eigenfunctions, and a quick look at the appendix did not resolve this issue. As a consequence, the AC views the main result, Theorem 1, as rather premature.
These concerns were discussed with the reviewers and SAC, and it was concluded that this paper should nonetheless be accepted. 1) the authors are attacking a problem of interest to the community; 2) they have some new non-trivial results; 3) the writing is good. The only concern is that the assumption made is not as nice as one might like. | train | [
"BcwVZqnU3ii",
"XULWArQ12cF",
"zfeccyOJiMm",
"0hq6gysKoAx",
"8oMjavaNp2K",
"XMqZVSeSX-P"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your appreciation of this work. We are happy to read that you enjoyed it, and we are really thankful for your comments that have helped us to deepen our understanding of our own work.\n\n**Experiments.** We totally agree that the experimental section can be made thicker, we will do our best to incor... | [
-1,
-1,
-1,
6,
8,
7
] | [
-1,
-1,
-1,
3,
4,
2
] | [
"8oMjavaNp2K",
"0hq6gysKoAx",
"XMqZVSeSX-P",
"nips_2021_9vur8-1GU7V",
"nips_2021_9vur8-1GU7V",
"nips_2021_9vur8-1GU7V"
] |
nips_2021_xV6ZDMwRspN | Unlabeled Principal Component Analysis | We introduce robust principal component analysis from a data matrix in which the entries of each column have been corrupted by permutations, termed Unlabeled Principal Component Analysis (UPCA). Using algebraic geometry, we establish that UPCA is a well-defined algebraic problem in the sense that the only matrices of minimal rank that agree with the given data are row-permutations of the ground-truth matrix, arising as the unique solutions of a polynomial system of equations. Further, we propose an efficient two-stage algorithmic pipeline for UPCA suitable for the practically relevant case where only a fraction of the data have been permuted. Stage-I employs outlier-robust PCA methods to estimate the ground-truth column-space. Equipped with the column-space, Stage-II applies recent methods for unlabeled sensing to restore the permuted data. Experiments on synthetic data, face images, educational and medical records reveal the potential of UPCA for applications such as data privatization and record linkage.
| accept | The authors identified an interesting problem about principal component analysis with unknown permutations of the features in each data sample. They then proposed an algorithm and provided a performance guarantee. All 4 reviewers agreed to accept this paper.
| train | [
"xF7QCg99124",
"rM1QJbSNsL",
"bv0Y86bK4I",
"yzAL4KNVAFJ",
"goI_JXT_VF0",
"U9jMBUDnLOV",
"v9HKO2zsBUQ",
"h0Dn2V1_-lv",
"bSZHKB4GChM",
"ftOZlnj5PB3"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for the authors' responses. I still think the theoretical guarantees in this paper are interesting. But currently, although the proofs in the supplementary material are not long, it is still quite tiring for me to remember the notation and follow the logic/connections between steps and auxiliary results. I... | [
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4,
3
] | [
"goI_JXT_VF0",
"bv0Y86bK4I",
"h0Dn2V1_-lv",
"bSZHKB4GChM",
"ftOZlnj5PB3",
"v9HKO2zsBUQ",
"nips_2021_xV6ZDMwRspN",
"nips_2021_xV6ZDMwRspN",
"nips_2021_xV6ZDMwRspN",
"nips_2021_xV6ZDMwRspN"
] |
nips_2021_h2E5OYma5U | Causal-BALD: Deep Bayesian Active Learning of Outcomes to Infer Treatment-Effects from Observational Data | Estimating personalized treatment effects from high-dimensional observational data is essential in situations where experimental designs are infeasible, unethical, or expensive. Existing approaches rely on fitting deep models on outcomes observed for treated and control populations. However, when measuring individual outcomes is costly, as in the case of a tumor biopsy, a sample-efficient strategy for acquiring each result is required. Deep Bayesian active learning provides a framework for efficient data acquisition by selecting points with high uncertainty. However, existing methods bias training data acquisition towards regions of non-overlapping support between the treated and control populations. These are not sample-efficient because the treatment effect is not identifiable in such regions. We introduce causal, Bayesian acquisition functions grounded in information theory that bias data acquisition towards regions with overlapping support to maximize sample efficiency for learning personalized treatment effects. We demonstrate the performance of the proposed acquisition strategies on synthetic and semi-synthetic datasets IHDP and CMNIST and their extensions, which aim to simulate common dataset biases and pathologies.
| accept | The paper aims to provide a sample-efficient strategy to train deep learning models that can effectively predict individual-level treatment effects from high-dimensional observational data. A unique challenge in estimating personalized treatment effects is the necessity to choose data samples from regions with support from both treated and control populations so that the treatment effect can be properly identified. As a result, most existing data acquisition functions that primarily focus on uncertainty become less effective. The proposed model, referred to as Causal-BALD, addresses this challenge by integrating Bayesian active learning and causal inference in novel ways that cover the uncertainty and overlapping criteria simultaneously. The effectiveness of the proposed acquisition functions is clearly demonstrated using both synthetic and semi-synthetic datasets.
As acknowledged by all the reviewers, the proposed acquisition functions are clearly motivated, well explained, sufficiently novel, and further supported by convincing evaluation results. The work has the potential to impact both the scientific community and multiple application domains.
The authors adequately addressed most of the questions raised by reviewers in their original reviews during the discussion period. In particular, since multiple acquisition functions are proposed and some show better performance than others on certain datasets, it is important for the authors to offer some deeper insight to better understand the behaviors of different acquisition functions. It is also interesting to show some sampled data instances and connect them with the corresponding acquisition function. The authors promised to move some results to the appendix and add necessary discussions to better highlight the key properties of the acquisition functions and their potential limitations.
| train | [
"IhzVjouNK-4",
"Kfd9tomIFzK",
"ZJAFgXSAa9S",
"EsDrgW4lpiI",
"1CNI2BaaPp",
"EI_5UydT2z",
"tBFVdsaz6a",
"O4lMxosPkl4",
"cHQYeVB4uqr",
"AHQruJwZzY",
"I_yuZod_Srs",
"ZFb1gct28LS",
"pGxKopF3hrJ",
"LqwDLQR1Eak",
"74Uc04MoRUn"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I thank the authors for providing a detailed response to my review. I too have no further questions, and continue to recommend acceptance.",
" I thank the authors for their response and for introducing the above comments into the discussion. I will keep my accept recommendation. ",
" Thank you again for your... | [
-1,
-1,
-1,
-1,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
2,
3,
3
] | [
"AHQruJwZzY",
"I_yuZod_Srs",
"tBFVdsaz6a",
"1CNI2BaaPp",
"cHQYeVB4uqr",
"nips_2021_h2E5OYma5U",
"O4lMxosPkl4",
"EI_5UydT2z",
"74Uc04MoRUn",
"pGxKopF3hrJ",
"LqwDLQR1Eak",
"nips_2021_h2E5OYma5U",
"nips_2021_h2E5OYma5U",
"nips_2021_h2E5OYma5U",
"nips_2021_h2E5OYma5U"
] |
nips_2021_q_fMLfwTAJY | Scalable Rule-Based Representation Learning for Interpretable Classification | Rule-based models, e.g., decision trees, are widely used in scenarios demanding high model interpretability for their transparent inner structures and good model expressivity. However, rule-based models are hard to optimize, especially on large data sets, due to their discrete parameters and structures. Ensemble methods and fuzzy/soft rules are commonly used to improve performance, but they sacrifice the model interpretability. To obtain both good scalability and interpretability, we propose a new classifier, named Rule-based Representation Learner (RRL), that automatically learns interpretable non-fuzzy rules for data representation and classification. To train the non-differentiable RRL effectively, we project it to a continuous space and propose a novel training method, called Gradient Grafting, that can directly optimize the discrete model using gradient descent. An improved design of logical activation functions is also devised to increase the scalability of RRL and enable it to discretize the continuous features end-to-end. Exhaustive experiments on nine small and four large data sets show that RRL outperforms the competitive interpretable approaches and can be easily adjusted to obtain a trade-off between classification accuracy and model complexity for different scenarios. Our code is available at: https://github.com/12wang3/rrl.
| accept | This paper presents a new scalable classifier, named Rule-based Representation Learner (RRL), that can automatically learn interpretable rules for data representation and classification. A new gradient-based discrete model training method, i.e., Gradient Grafting, is proposed as well, which directly optimizes the discrete model. RRL's performance on structured data seems promising with respect to the experiments performed.
Overall, the reviewers agree that there is merit in the proposed approach and they all support the acceptance of the manuscript. I also looked at aspects of the manuscript and found the experimental evaluation very well organized and thorough. Moreover, I particularly like the results properly showing the trade-offs of complexity and performance and the comparison with other methods according to these two measures. This can convincingly, and in a quantitative manner, show that the proposed method can produce short explanations (i.e., a small number of rules/edges) while still performing relatively well. Moreover, when allowing complex explanations, the model provides state-of-the-art results when compared to other approaches that produce less interpretable predictors. I believe that approaches like the proposed one will be important to get AI accepted in application fields such as medicine or finance.
In summary, there are strong arguments supporting the acceptance of the manuscript and I recommend accepting the manuscript. | train | [
"PODfRuO9uYj",
"tYMuyoc0Ne3",
"ZMuu6uzmRHV",
"GUiO6ggcy0P",
"v0jndArOXT7",
"iBABwGxusP7",
"IwN7wEU9sC_",
"XytOkTTwaM8",
"PHbL3-kq93q",
"1x4JQfa2FRk"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The ethical review was flagged to ensure the paper was not plagiarizing a previously submitted (but rejected) ICLR paper. As I cannot see the author names, i cannot verify. \n\nPlease have the appropriate party review. n/a The ethical review was flagged to ensure the paper was not plagiarizing a previously submi... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4,
4
] | [
"nips_2021_q_fMLfwTAJY",
"nips_2021_q_fMLfwTAJY",
"PHbL3-kq93q",
"1x4JQfa2FRk",
"XytOkTTwaM8",
"IwN7wEU9sC_",
"nips_2021_q_fMLfwTAJY",
"nips_2021_q_fMLfwTAJY",
"nips_2021_q_fMLfwTAJY",
"nips_2021_q_fMLfwTAJY"
] |
nips_2021_1dcGJjvwl2h | Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection | Deep networks have shown remarkable results in the task of object detection. However, their performance suffers critical drops when they are subsequently trained on novel classes without any sample from the base classes originally used to train the model. This phenomenon is known as catastrophic forgetting. Recently, several incremental learning methods have been proposed to mitigate catastrophic forgetting for object detection. Despite their effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes. This requirement is impractical in many real-world settings since the base classes do not necessarily co-occur with the novel classes. In view of this limitation, we consider a more practical setting of complete absence of co-occurrence of the base and novel classes for the object detection task. We propose the use of unlabeled in-the-wild data to bridge the non co-occurrence caused by the missing base classes during the training of additional novel classes. To this end, we introduce a blind sampling strategy based on the responses of the base-class model and pre-trained novel-class model to select a smaller relevant dataset from the large in-the-wild dataset for incremental learning. We then design a dual-teacher distillation framework to transfer the knowledge distilled from the base- and novel-class teacher models to the student model using the sampled in-the-wild data. Experimental results on the PASCAL VOC and MS COCO datasets show that our proposed method significantly outperforms other state-of-the-art class-incremental object detection methods when there is no co-occurrence between the base and novel classes during training.
| accept | The paper proposes an approach for incremental object detection, which removes the need for co-occurrence of base and novel object classes in images in the training dataset. The reviewers scored the paper with (5, 6, 6, 6). After discussion and rebuttal, they lean towards accepting the paper as a poster. | train | [
"RBue-gaZNcL",
"sW8P_Pqakh",
"tfdU_vd6jhT",
"UKmdc3Z_XJa",
"jxVsP-a5IK",
"BMenxYxJpBm",
"6tewrZ6To9",
"Hm70-hlEnRt",
"MAvuF4-0qFk",
"jH51drLGcTF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I appreciate your clarifications.",
" Thanks for the response! ",
" **Q1: In the novel class data subsets, how frequently do novel classes occur with no co-occurrence to base ones?**\n\n**A:** The following table shows the co-occurrence frequency count of objects from the base and novel classes in the novel d... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
3
] | [
"tfdU_vd6jhT",
"UKmdc3Z_XJa",
"jH51drLGcTF",
"MAvuF4-0qFk",
"Hm70-hlEnRt",
"6tewrZ6To9",
"nips_2021_1dcGJjvwl2h",
"nips_2021_1dcGJjvwl2h",
"nips_2021_1dcGJjvwl2h",
"nips_2021_1dcGJjvwl2h"
] |
nips_2021_GgS40Y04LxA | A Regression Approach to Learning-Augmented Online Algorithms | The emerging field of learning-augmented online algorithms uses ML techniques to predict future input parameters and thereby improve the performance of online algorithms. Since these parameters are, in general, real-valued functions, a natural approach is to use regression techniques to make these predictions. We introduce this approach in this paper, and explore it in the context of a general online search framework that captures classic problems like (generalized) ski rental, bin packing, minimum makespan scheduling, etc. We show nearly tight bounds on the sample complexity of this regression problem, and extend our results to the agnostic setting. From a technical standpoint, we show that the key is to incorporate online optimization benchmarks in the design of the loss function for the regression problem, thereby diverging from the use of off-the-shelf regression tools with standard bounds on statistical error.
| accept | This work makes progress in the recently popular "learning-augmented algorithms" framework and argues that a regression-based approach forms a correct learning solution for many of the problems. The authors do a complete study of this question; of particular interest is their exploration of the learning problem itself. Here they show that special care must be taken in setting up the loss function for the regression problem to achieve good results. Overall, a solid contribution.
| test | [
"aRR2kc1ep8S",
"smYwXG36iHX",
"KR0fQpnn5ED",
"9GqDx3BbXsy",
"uyBH1UOCQOn",
"juzA2Bxj3Vz",
"0pHBNhYywOM",
"vF9NgYxl_Sf",
"XdWXG2H9oyf"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper considers an \"online search problem\", which includes ski rental and variants as special cases and is also applicable to a classical scheduling problem and online bin packing, and proposes an approach for solving these problems in a learning-augmented setting. The online search problem is very generic: ... | [
6,
-1,
-1,
-1,
-1,
-1,
7,
7,
6
] | [
3,
-1,
-1,
-1,
-1,
-1,
4,
4,
1
] | [
"nips_2021_GgS40Y04LxA",
"XdWXG2H9oyf",
"vF9NgYxl_Sf",
"0pHBNhYywOM",
"aRR2kc1ep8S",
"nips_2021_GgS40Y04LxA",
"nips_2021_GgS40Y04LxA",
"nips_2021_GgS40Y04LxA",
"nips_2021_GgS40Y04LxA"
] |
nips_2022_09QFnDWPF8 | Statistical Learning and Inverse Problems: A Stochastic Gradient Approach | Inverse problems are paramount in Science and Engineering. In this paper, we consider the setup of Statistical Inverse Problem (SIP) and demonstrate how Stochastic Gradient Descent (SGD) algorithms can be used to solve linear SIP. We provide consistency and finite sample bounds for the excess risk. We also propose a modification for the SGD algorithm where we leverage machine learning methods to smooth the stochastic gradients and improve empirical performance. We exemplify the algorithm in a setting of great interest nowadays: the Functional Linear Regression model. In this case we consider a synthetic data example and a classification problem for predicting the main activity of bitcoin addresses based on their balances. | Accept | All reviewers recommend accepting the paper. Congratulations!
But in your camera-ready version, please revise the paper to better emphasize the relevance of this line of work to a general machine learning audience. For example, during the discussion one reviewer wrote the following:
"The authors have not thoroughly convinced me of the relevance to a ML audience, as I will explain. To me, the relevance is very likely, as they apply SGD to a regression problem, and then ML learners on top of that, which is very much in the vein of ML. My confidence is limited, however, since I am not familiar with functional regression.
The paper could better appeal to an ML audience by mentioning ML applications of their method (whether it be functional linear regression or otherwise) early in their paper and in an appealing way. Section 5.2 (Real Data Application) does a good job of this, but giving motivating use cases could help pique reader interest/help them understand application earlier."
The paper will have greater influence if the final version can convince readers of its relevance to ML! | test | [
"A6YBn-h0rY1O",
"3VrJ-8W5LnJ",
"x0BRrfZcn4F",
"V4fXiDWcVU_",
"Y5Mx8KhRya",
"omqYW6jDEcX",
"P7IJ6j6PPdD-"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are extremely grateful for the positive assessment and highly detailed and constructive feedback on our paper. In this rebuttal, we have addressed the comments of the review team. This has led to multiple improvements including: better exposition of the numerical studies, new applications, more general theore... | [
-1,
-1,
-1,
-1,
8,
5,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"nips_2022_09QFnDWPF8",
"P7IJ6j6PPdD-",
"omqYW6jDEcX",
"Y5Mx8KhRya",
"nips_2022_09QFnDWPF8",
"nips_2022_09QFnDWPF8",
"nips_2022_09QFnDWPF8"
] |
nips_2022_pnSyqRXx73 | Efficiency Ordering of Stochastic Gradient Descent | We consider the stochastic gradient descent (SGD) algorithm driven by a general stochastic sequence, including i.i.d. noise and a random walk on an arbitrary graph, among others, and analyze it in the asymptotic sense. Specifically, we employ the notion of `efficiency ordering', a well-analyzed tool for comparing the performance of Markov Chain Monte Carlo (MCMC) samplers, for SGD algorithms in the form of Loewner ordering of covariance matrices associated with the scaled iterate errors in the long term. Using this ordering, we show that input sequences that are more efficient for MCMC sampling also lead to smaller covariance of the errors for SGD algorithms in the limit. This also suggests that an arbitrarily weighted MSE of SGD iterates in the limit becomes smaller when driven by more efficient chains. Our finding is of particular interest in applications such as decentralized optimization and swarm learning, where SGD is implemented in a random walk fashion on the underlying communication graph for cost issues and/or data privacy. We demonstrate how certain non-Markovian processes, for which typical mixing-time based non-asymptotic bounds are intractable, can outperform their Markovian counterparts in the sense of efficiency ordering for SGD. We show the utility of our method by applying it to gradient descent with shuffling and mini-batch gradient descent, reaffirming key results from existing literature under a unified framework. Empirically, we also observe efficiency ordering for variants of SGD such as accelerated SGD and Adam, opening up the possibility of extending our notion of efficiency ordering to a broader family of stochastic optimization algorithms. | Accept | The paper proposes an asymptotic analysis of SGD when the noise is governed by a Markov chain such as a random walk over an arbitrary graph. The paper connects efficiency ordering with an ordering of covariance matrices inherited from the Central Limit Theorems.
One of the results of the work is that SGD algorithms can achieve smaller covariance errors by using more efficient noise sequences from MCMC sampling. As applications, the paper considers decentralized optimization over a graph using high-order random walks for SGD on the one hand, and mini-batch gradient descent with shuffling on the other. The reviewers raised questions about experimental results on larger graphs for these applications, and the authors have performed and committed to including these new experiments. As a result, all the reviewers are in favour of accepting, and are now confident in their scores. | train | [
"a_Vtx8nYsYQ",
"uk2GuVH2iK",
"mpyZxZL81Nj",
"H_HP_BPjkXp",
"sjEApTcGc9g",
"FD6DmoKYRVi",
"ra7hzpPbVy",
"iuge7ohY0SE",
"XSH4VAxfIOt",
"QjAEwGuzGz0Q",
"_pK94g4ReHn",
"VC2RrZb9e4",
"QTzyJWaiSpI",
"Qjb2_-2wdJz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" This paper proposes an asymptotic analysis of SGD when the noise sequence is not necessarily iid but can be a Markov chain such as a random walk over an arbitrary graph. The main contribution of this work is to connect the concept of efficiency ordering (which was previously introduced in the literature to compar... | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"nips_2022_pnSyqRXx73",
"sjEApTcGc9g",
"QjAEwGuzGz0Q",
"Qjb2_-2wdJz",
"Qjb2_-2wdJz",
"Qjb2_-2wdJz",
"QTzyJWaiSpI",
"QTzyJWaiSpI",
"QTzyJWaiSpI",
"VC2RrZb9e4",
"nips_2022_pnSyqRXx73",
"nips_2022_pnSyqRXx73",
"nips_2022_pnSyqRXx73",
"nips_2022_pnSyqRXx73"
] |
nips_2022_EqJ5_hZSqgy | Self-Aware Personalized Federated Learning | In the context of personalized federated learning (FL), the critical challenge is to balance local model improvement and global model tuning when the personal and global objectives may not be exactly aligned. Inspired by Bayesian hierarchical models, we develop a self-aware personalized FL method where each client can automatically balance the training of its local personal model and the global model that implicitly contributes to other clients' training. Such a balance is derived from the inter-client and intra-client uncertainty quantification. A larger inter-client variation implies more personalization is needed. Correspondingly, our method uses uncertainty-driven local training steps and an aggregation rule instead of conventional local fine-tuning and sample size-based aggregation. With experimental studies on synthetic data, Amazon Alexa audio data, and public datasets such as MNIST, FEMNIST, CIFAR10, and Sent140, we show that our proposed method can achieve significantly improved personalization performance compared with the existing counterparts. | Accept | In this submission, the authors study personalized federated learning and propose a self-aware personalized method to address the balancing challenge from the perspective of uncertainty quantification. This problem is interesting and important (as pointed out by b7BG and rRk4), and the proposed method is derived in a principled way from theory (as pointed out by jvSV) and useful for applications. I recommend accepting this submission.
The authors can include some discussion about the communication and computation cost (as suggested by b7BG), and add an ablation study to show the effect of other factors (as suggested by jvSV), to make this submission better.
I hope that the suggestions from all the reviewers and the discussion between reviewers and authors can make this submission a better one.
| train | [
"lM0l6jXJaUo",
"nWBzIAuSytV",
"pVZtaSC8R_0o",
"9ePw5gKLlxH1",
"PQTKmMw9Rux",
"DQ6G0wOFqiO",
"DktHwh83K-"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear Reviewer rRk4,\n\nWe would like to thank you again for the time you dedicated to reviewing our paper. We believe that we have addressed your concerns. Since the end of discussion period is getting close and we have not heard back from you yet, we would appreciate it if you kindly let us know of any other con... | [
-1,
-1,
-1,
-1,
7,
7,
3
] | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"DktHwh83K-",
"DktHwh83K-",
"DQ6G0wOFqiO",
"PQTKmMw9Rux",
"nips_2022_EqJ5_hZSqgy",
"nips_2022_EqJ5_hZSqgy",
"nips_2022_EqJ5_hZSqgy"
] |
nips_2022_xnI37HyfoP | Nonnegative Tensor Completion via Integer Optimization | Unlike matrix completion, tensor completion does not have an algorithm that is known to achieve the information-theoretic sample complexity rate. This paper develops a new algorithm for the special case of completion for nonnegative tensors. We prove that our algorithm converges in a linear (in numerical tolerance) number of oracle steps, while achieving the information-theoretic rate. Our approach is to define a new norm for nonnegative tensors using the gauge of a particular 0-1 polytope; integer linear programming can, in turn, be used to solve linear separation problems over this polytope. We combine this insight with a variant of the Frank-Wolfe algorithm to construct our numerical algorithm, and we demonstrate its effectiveness and scalability through computational experiments using a laptop on tensors with up to one-hundred million entries. | Accept | All reviewers found the paper clearly interesting and agreed that it makes a valuable and novel contribution to the field of tensor learning; the consensus to accept the paper is thus unambiguous.
However, all reviewers, who were from the start fairly positive about the paper and who made a number of constructive comments and suggestions that could improve the paper, were quite disappointed by the responses of the authors, which seem to suggest that the latter were only prepared to make minimal changes to address the concerns of the reviewers. This explains why the ratings of the paper are not higher...
We obviously understand that it would not have been possible to address some of the concerns of the reviewers during the rebuttal period, but the authors are now strongly encouraged to take into account the comments of the reviewers when preparing the final version of this manuscript. The authors are in particular encouraged to take into account the questions and comments about related work and, to the extent possible, to include more detailed discussions of the related work and the connections with this work. Also, they are encouraged to clarify the technical parts of the manuscript about which the reviewers asked clarification questions.
"yQwBAn_gYlv",
"U-75G3GAUHoA",
"YArPNsxsano",
"5W2taO_Mt2N",
"anx7xlRmBIB",
"HQpLR0zD0Q",
"kxAjMwQIHOb",
"506j9dMN3e"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I am keeping my positive score unchanged for now.\nI also agree with reviewer 7JBT, that some suggestions that were meant to help improve the presentation of the paper were not sufficiently addressed.",
" I am somewhat disappointed by the authors' reply.\nMy comments and my requests ... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"YArPNsxsano",
"5W2taO_Mt2N",
"506j9dMN3e",
"kxAjMwQIHOb",
"HQpLR0zD0Q",
"nips_2022_xnI37HyfoP",
"nips_2022_xnI37HyfoP",
"nips_2022_xnI37HyfoP"
] |
nips_2022_OoNmOfYVhEU | TPU-KNN: K Nearest Neighbor Search at Peak FLOP/s | This paper presents a novel nearest neighbor search algorithm achieving TPU (Google Tensor Processing Unit) peak performance, outperforming state-of-the-art GPU algorithms with a similar level of recall. The design of the proposed algorithm is motivated by an accurate accelerator performance model that takes into account both the memory and instruction bottlenecks. Our algorithm comes with an analytical guarantee of recall in expectation and does not require maintaining a sophisticated index data structure or tuning, making it suitable for applications with frequent updates. Our work is available in the open-source package of Jax and Tensorflow on TPU. | Accept | The authors design an efficient implementation of nearest neighbor search on a TPU accelerator unit. The implementation is motivated by a refined roofline performance model that takes into account the memory and instruction bottlenecks that are found to be significant and that are not typically optimized for. Empirical results demonstrate that the proposed TPU solver outperforms state-of-the-art GPU solvers. The package is available on Tensorflow.
The reviewers agree that the paper is well written and well structured, and that the proposed method can have significant practical impact.
Some concerns regarding the evaluation came up in the reviews but the additional experiments provided could address most of these concerns. What remains is a question on whether the performance gain over GPU comes from the algorithmic optimization itself or the higher efficiency of the TPU, and whether the algorithmic optimization would be similarly effective on other accelerators. The reviewers agree that performing such an analysis is outside the scope of this work. However, I want to encourage the authors to incorporate additional discussion to help the reader understand what parts of the work are specific to TPUs.
Overall, this paper represents a well-executed piece of work at the intersection between algorithm design and systems, with an open-source package that is available to the community. I recommend acceptance.
| train | [
"WF0UYAFWmF",
"TOkPamV2QY8",
"IUFeOrUrnJh",
"TFFYGkIihgq",
"Op2sTqs20kb",
"rAp7pPTidEA",
"4-x3-l1Pc6C",
"ydaFcKSnNzw",
"oJH0rqMMocg",
"P952r1Cx9iJ",
"wAGfSpbpEN8",
"D2lV8NiA7LF"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Dear authors,\n\nThanks for your response and answers. Good that you have updated the evaluation with more results. I understand the challenges with implementing optimized versions of your algorithm on different platforms.",
" That makes sense. I've revised my rating.\n\nThank you!",
" Thank you for your repl... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
4,
4
] | [
"ydaFcKSnNzw",
"IUFeOrUrnJh",
"TFFYGkIihgq",
"rAp7pPTidEA",
"D2lV8NiA7LF",
"wAGfSpbpEN8",
"P952r1Cx9iJ",
"oJH0rqMMocg",
"nips_2022_OoNmOfYVhEU",
"nips_2022_OoNmOfYVhEU",
"nips_2022_OoNmOfYVhEU",
"nips_2022_OoNmOfYVhEU"
] |
nips_2022_0Dh8dz4snu | Equivariant Networks for Crystal Structures | Supervised learning with deep models has tremendous potential for applications in materials science. Recently, graph neural networks have been used in this context, drawing direct inspiration from models for molecules. However, materials are typically much more structured than molecules, which is a feature that these models do not leverage. In this work, we introduce a class of models that are equivariant with respect to crystalline symmetry groups. We do this by defining a generalization of the message passing operations that can be used with more general permutation groups, or that can alternatively be seen as defining an expressive convolution operation on the crystal graph. Empirically, these models achieve competitive results with state-of-the-art on the Materials Project dataset. | Accept | This paper proposes an extension of recent work on equivariant graph neural networks to account for equivariance to crystalline symmetries. This work has the potential to be quite impactful since modeling crystalline structures is important and has received little attention compared to the modeling of molecular systems.
Two reviewers argued in favor of acceptance, citing the nontrivial nature of the problem and the solution. They also commented on the high quality of the writing. Lastly, the positive reviewers found the results promising after discussion. The negative reviewer focused on missing background material and a lack of novelty of the proposed method. I am confident that the authors have addressed, and will continue to address, the concerns of the negative referee by adding extra background materials and definitions for common terms. I believe that this paper makes enough novel progress on a difficult problem to be worth accepting to NeurIPS despite the fact that it builds on prior work.
"1SJJ1BlkdTB",
"p39qhedr-sj",
"Kvem6qET3TH",
"QynOY7oAZN30",
"iFN00YWHbE",
"XC7jdSfmu3",
"2MLlNnD4tV",
"CxFrCgD1Pdk",
"YQr3kj8J_or"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The authors have adequately addressed all concerns and shown great empirical results. I believe this is a strong paper with clean ideas and thorough execution. It's a method I'd be excited to use myself. I believe it fully deserves publication in this venue. ",
" Thank you for your comments, and your efforts to... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
3
] | [
"nips_2022_0Dh8dz4snu",
"iFN00YWHbE",
"nips_2022_0Dh8dz4snu",
"CxFrCgD1Pdk",
"YQr3kj8J_or",
"2MLlNnD4tV",
"nips_2022_0Dh8dz4snu",
"nips_2022_0Dh8dz4snu",
"nips_2022_0Dh8dz4snu"
] |
nips_2022_s1yaWFDLxVG | Gradient Descent Is Optimal Under Lower Restricted Secant Inequality And Upper Error Bound | The study of first-order optimization is sensitive to the assumptions made on the objective functions. These assumptions induce complexity classes which play a key role in worst-case analysis, including the fundamental concept of algorithm optimality. Recent work argues that strong convexity and smoothness (popular assumptions in the literature) lead to a pathological definition of the condition number. Motivated by this result, we focus on the class of functions satisfying a lower restricted secant inequality and an upper error bound. On top of being robust to the aforementioned pathological behavior and including some non-convex functions, this pair of conditions displays interesting geometrical properties. In particular, the necessary and sufficient conditions to interpolate a set of points and their gradients within the class can be separated into simple conditions on each sampled gradient. This allows the performance estimation problem (PEP) to be solved analytically, leading to a lower bound on the convergence rate that proves gradient descent to be exactly optimal on this class of functions among all first-order algorithms. | Accept | A solid theoretical paper with fine execution that establishes the optimality of the vanilla gradient descent method in a class of functions extending the well-studied class of smooth and strongly convex functions. Please make sure to take into account the insightful feedback given by the reviewers in the revised version. | train | [
"U3XDkS0eGXf",
"g4sfdnJ_cho",
"FFblGOrHEB",
"WnfCXz3kiCp",
"fZzE4r63ri",
"1qLQfgN-KQT",
"NKO0UyKE7g7",
"DOlXSdEZnJh",
"FcSESmX81Wq_",
"pLK0SJoZXmk",
"OsbAuxxUli",
"gaoiJDhiueY",
"K0fzof4xB8O"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response. I will keep my score.",
" Thank you for your very pertinent reply. \n\nWe fully agree with your suggestion regarding the characterization of our contributions. We will improve our introduction and conclusion to explicitly reflect the implications of our results on impossibility of ac... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
4
] | [
"DOlXSdEZnJh",
"FFblGOrHEB",
"FcSESmX81Wq_",
"1qLQfgN-KQT",
"NKO0UyKE7g7",
"K0fzof4xB8O",
"gaoiJDhiueY",
"OsbAuxxUli",
"pLK0SJoZXmk",
"nips_2022_s1yaWFDLxVG",
"nips_2022_s1yaWFDLxVG",
"nips_2022_s1yaWFDLxVG",
"nips_2022_s1yaWFDLxVG"
] |
nips_2022_02dbnEbEFn | Decoupled Context Processing for Context Augmented Language Modeling | Language models can be augmented with a context retriever to incorporate knowledge from large external databases. By leveraging retrieved context, the neural network does not have to memorize the massive amount of world knowledge within its internal parameters, leading to better parameter efficiency, interpretability and modularity. In this paper we examined a simple yet effective architecture for incorporating external context into language models based on a decoupled $\texttt{Encoder-Decoder}$ architecture. We showed that such a simple architecture achieves competitive results on auto-regressive language modeling and open domain question answering tasks. We also analyzed the behavior of the proposed model, which performs grounded context transfer. Finally, we discussed the computational implications of such retrieval augmented models. | Accept | The paper contributes a retrieval-augmented LM that offers some nice features compared to earlier work. First, it is based on a seq2seq architecture instead of a custom one (e.g., RETRO), which is a plus as it is simple and well studied, and the overall approach is quite elegant. Second, the model of the paper also offers cheaper inference compared to many prior models, and context encoding in the paper can be precomputed offline.
While the reviewers agreed on these pros, the main concern during the reviewers' discussion was in relation to the downstream evaluation. The paper presents evaluation on only one downstream task, namely the Natural Questions (NQ) QA task, and results are much worse than FiD's. However, as pointed out by the authors, the better performance of FiD isn't so surprising (the paper's model encodes contexts independently of the questions, since this is done offline). By contrast, FiD comes with much greater computational costs. Ultimately, we agree with the authors' argument that the most comparable model is RETRO, and the paper does quite well in comparison. It is nice to see the paper's model outperforming other strong baselines such as REALM, DPR, and RAG. To mitigate concerns about their model's poorer performance relative to FiD, the authors might want to bring up computational efficiency (e.g., concrete running time) earlier in the paper. | train | [
"_vWEwR0Y8Nb",
"zezczNHYuW",
"ziA4GEL7IQh",
"SHuiAlEO1K",
"iqTPOOg_Izl",
"g_gfrtVPYCb"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks for a very detailed review and suggestions! \n\n> the figures are quite complicated and bit hard to follow (esp fig 1).\n\nWe will simplify the figures in the final version.\n\n> Not sure I fully buy the argument about the encoder not needing to contribute to the parameter count in - the retriever paramete... | [
-1,
-1,
-1,
4,
5,
8
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"g_gfrtVPYCb",
"iqTPOOg_Izl",
"SHuiAlEO1K",
"nips_2022_02dbnEbEFn",
"nips_2022_02dbnEbEFn",
"nips_2022_02dbnEbEFn"
] |
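The decoupling discussed in the record above matters computationally because context encodings never see the query, so the encoder can be run once, offline, over the whole database. A stylized Python sketch of that split (the random "encoder" and the toy passages are placeholders for a trained encoder and a real retrieval corpus):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

def encode(texts):
    # Placeholder for a transformer encoder; in the decoupled design this
    # runs once, offline, over every passage in the external database.
    return rng.standard_normal((len(texts), dim))

contexts = ["passage about Turing", "passage about Curie", "passage about Darwin"]
context_bank = encode(contexts)          # precomputed and stored

def retrieve(query_vec, k=2):
    scores = context_bank @ query_vec    # inner-product retrieval
    top = np.argsort(-scores)[:k]
    return [contexts[i] for i in top], context_bank[top]

# At inference time only the query is encoded; the retrieved context
# encodings are read from the bank and fed to the decoder via cross-attention.
docs, encs = retrieve(rng.standard_normal(dim))
print(docs, encs.shape)
```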
nips_2022_7eUOC9fEIRO | Planning to the Information Horizon of BAMDPs via Epistemic State Abstraction | The Bayes-Adaptive Markov Decision Process (BAMDP) formalism pursues the Bayes-optimal solution to the exploration-exploitation trade-off in reinforcement learning. As the computation of exact solutions to Bayesian reinforcement-learning problems is intractable, much of the literature has focused on developing suitable approximation algorithms. In this work, before diving into algorithm design, we first define, under mild structural assumptions, a complexity measure for BAMDP planning. As efficient exploration in BAMDPs hinges upon the judicious acquisition of information, our complexity measure highlights the worst-case difficulty of gathering information and exhausting epistemic uncertainty. To illustrate its significance, we establish a computationally-intractable, exact planning algorithm that takes advantage of this measure to show more efficient planning. We then conclude by introducing a specific form of state abstraction with the potential to reduce BAMDP complexity and give rise to a computationally-tractable, approximate planning algorithm. | Accept | In post-rebuttal discussion, reviewers debated the merits of the paper and especially reviewer Gwj8's concerns. In the end, reviewer Gwj8 agreed with other reviewers that the paper should be accepted, although all reviewers would like to see the final revision reflect the points and concerns raised in the post-rebuttal discussion.
| test | [
"kaDb1-R1HrP",
"GvYjSDtHXTy",
"8bead8wSXsm",
"w35wQ2URFw-e",
"gi0LvbOimoH",
"cRkrAkwBTqQx",
"r2bdxY3QON",
"jT_OSlm493c",
"BZkUODBOgRr",
"glclKpWZAHZ",
"3pWBVyYG2FI",
"eg53jJr3AElr",
"xewfVOHEQkF",
"SdeD-Ky7lni",
"Xavv0Ex2Sl",
"UiNOEvviX35",
"Y4OHf6UbGXE5",
"PPI2Qaj9Vkj",
"ieckprU... | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_r... | [
" Thank you for your comments responding to my original questions and clarifying what you see as the value of this paper. Leaving a comment about shrinking the policy class over which the BAMDP supremum is defined and mentioning the receding information horizon idea both seem like nice things to have (but not manda... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4,
4
] | [
"Xavv0Ex2Sl",
"8bead8wSXsm",
"w35wQ2URFw-e",
"gi0LvbOimoH",
"jT_OSlm493c",
"r2bdxY3QON",
"BZkUODBOgRr",
"SdeD-Ky7lni",
"glclKpWZAHZ",
"3pWBVyYG2FI",
"FS2zCqYuwhs",
"xewfVOHEQkF",
"SdeD-Ky7lni",
"UG5Og5Xj4T5",
"UiNOEvviX35",
"PPI2Qaj9Vkj",
"ieckprUbd0e",
"nips_2022_7eUOC9fEIRO",
"... |
nips_2022_BUMiizPcby6 | Trust Region Policy Optimization with Optimal Transport Discrepancies: Duality and Algorithm for Continuous Actions | Policy Optimization (PO) algorithms have been proven particularly suited to handle the high-dimensionality of real-world continuous control tasks. In this context, Trust Region Policy Optimization methods represent a popular approach to stabilize the policy updates. These usually rely on the Kullback-Leibler (KL) divergence to limit the change in the policy. The Wasserstein distance represents a natural alternative, in place of the KL divergence, to define trust regions or to regularize the objective function. However, state-of-the-art works either resort to its approximations or do not provide an algorithm for continuous state-action spaces, reducing the applicability of the method.
In this paper, we explore optimal transport discrepancies (which include the Wasserstein distance) to define trust regions, and we propose a novel algorithm - Optimal Transport Trust Region Policy Optimization (OT-TRPO) - for continuous state-action spaces. We circumvent the infinite-dimensional optimization problem for PO by providing a one-dimensional dual reformulation for which strong duality holds.
We then analytically derive the optimal policy update given the solution of the dual problem. This way, we bypass the computation of optimal transport costs and of optimal transport maps, which we implicitly characterize by solving the dual formulation.
Finally, we provide an experimental evaluation of our approach across various control tasks. Our results show that optimal transport discrepancies can offer an advantage over state-of-the-art approaches. | Accept | The paper studies trust region optimization but replaces the typical KL divergence with optimal transport distance - which is a natural and meaningful generalization. The authors provide a tractable algorithm by using optimization duality, and provide experimental results on control tasks. All reviewers appreciate that the main idea is novel and interesting, and further that the paper is very well written and easy to understand. Some reviewers have questions about the theory and experiments. The authors largely addressed the reviewers' comments during rebuttal, and all reviewers are in favor of acceptance. | train | [
"eoBDI7Z3Q3j",
"dDNsS_LH3SH",
"5hSW88LH8Gr",
"3Po8_NVW7u",
"2jkGC1_BlY7",
"2H1Fv62tQ1E",
"oTtyCU6rbb",
"zSyH6EZwr87",
"WdLp8pbDv0y"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank the reviewer for the feedback. We will include pointers to the sections of the appendix in the final version of the paper.",
" Thank you for addressing my questions, and others too.\nIt would be nice if the authors could add some pointers in the main text to the appropriate sections in the appendix.",
... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"dDNsS_LH3SH",
"3Po8_NVW7u",
"2jkGC1_BlY7",
"zSyH6EZwr87",
"WdLp8pbDv0y",
"oTtyCU6rbb",
"nips_2022_BUMiizPcby6",
"nips_2022_BUMiizPcby6",
"nips_2022_BUMiizPcby6"
] |
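The one-dimensional dual mentioned in the record above is easiest to see with finitely many actions, where the trust-region problem max_pi <A, pi> subject to OT(pi_old, pi) <= eps dualizes to a scalar minimization, and the maximizing column for each row gives the policy update. A simplified Python sketch (the advantages, transport cost, and radius below are toy assumptions; the paper treats continuous state-action spaces):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
pi_old = rng.dirichlet(np.ones(n))                 # current policy
adv = rng.standard_normal(n)                       # advantage estimates A(a')
cost = np.abs(np.subtract.outer(np.arange(n), np.arange(n))).astype(float)
eps = 0.5                                          # trust-region radius

def dual(lam):
    # g(lam) = lam*eps + sum_a pi_old(a) * max_a' [A(a') - lam * c(a, a')]
    return lam * eps + pi_old @ np.max(adv[None, :] - lam * cost, axis=1)

lams = np.linspace(0.0, 10.0, 2001)                # convex 1-D problem: grid search
lam_star = lams[np.argmin([dual(l) for l in lams])]

dest = np.argmax(adv[None, :] - lam_star * cost, axis=1)   # implicit transport map
pi_new = np.bincount(dest, weights=pi_old, minlength=n)    # optimal policy update
print("lambda* =", lam_star, "pi_new =", pi_new)
```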
nips_2022_kpSAfnHSgXR | Modeling Transitivity and Cyclicity in Directed Graphs via Binary Code Box Embeddings | Modeling directed graphs with differentiable representations is a fundamental requirement for performing machine learning on graph-structured data. Geometric embedding models (e.g. hyperbolic, cone, and box embeddings) excel at this task, exhibiting useful inductive biases for directed graphs. However, modeling directed graphs that both contain cycles and some element of transitivity, two properties common in real-world settings, is challenging. Box embeddings, which can be thought of as representing the graph as an intersection over some learned super-graphs, have a natural inductive bias toward modeling transitivity, but (as we prove) cannot model cycles. To this end, we propose binary code box embeddings, where a learned binary code selects a subset of graphs for intersection. We explore several variants, including global binary codes (amounting to a union over intersections) and per-vertex binary codes (allowing greater flexibility) as well as methods of regularization. Theoretical and empirical results show that the proposed models not only preserve a useful inductive bias of transitivity but also have sufficient representational capacity to model arbitrary graphs, including graphs with cycles. | Accept | This paper extends box embeddings to allow them to represent directed graphs. In particular, the proposed binary box embeddings can represent cycles in graphs, which was not previously possible. The reviewers appreciated the introduction of binary box embeddings and felt the contribution was novel and elegant. During the discussion, the reviewers felt that the rebuttal generally answered their questions and felt that the contribution would be of interest to the NeurIPS community.
A number of clarity issues brought up in the initial reviews should be addressed for the final version of the work, for instance, the motivation of the binary code embeddings and the discussions with 3KMQ on transitivity holding. Please revise the paper to address the remaining comments from the reviewers and carefully incorporate the additional results presented during the author response discussion phase. | test | [
"zZIju8_KA2U",
"vEE4BbFeNs",
"SdUHC3iDkoY",
"xusPk583bl",
"TGTegReMiZi",
"nAxd3oya_8X",
"jv3ztnsK7Jg",
"XOaUE1H_zIL",
"8pTK7n-R3o",
"CuvSFweJ72k"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" You're quite welcome. We hope we addressed your questions sufficiently! If so, we would greatly appreciate reconsideration of the rating. We would, of course, be happy to answer any further questions and are open to additional suggestions we can make to improve the submission further. Thank you!",
" Thanks for ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"vEE4BbFeNs",
"nAxd3oya_8X",
"xusPk583bl",
"TGTegReMiZi",
"CuvSFweJ72k",
"8pTK7n-R3o",
"XOaUE1H_zIL",
"nips_2022_kpSAfnHSgXR",
"nips_2022_kpSAfnHSgXR",
"nips_2022_kpSAfnHSgXR"
] |
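A hard-box toy version of the record's construction shows how binary codes restore cycles: each vertex keeps one box per learned super-graph, and a code word selects which super-graphs must all witness containment, with edges given by a union over the code words. The boxes and codes below are hand-picked for illustration:

```python
import numpy as np

def contains(outer, inner):
    (lo_o, hi_o), (lo_i, hi_i) = outer, inner
    return bool(np.all(lo_o <= lo_i) and np.all(hi_i <= hi_o))

def edge(u_boxes, v_boxes, codes):
    # Edge u -> v iff, for some code word, v's box contains u's box in every
    # super-graph the code selects (a union over intersections).
    return any(all(contains(v_boxes[g], u_boxes[g]) for g in np.flatnonzero(c))
               for c in codes)

# Two super-graphs in R^2; with these code words the model represents the
# 2-cycle a -> b -> a, which a single set of boxes provably cannot.
a = [(np.array([0., 0.]), np.array([1., 1.])), (np.array([2., 2.]), np.array([5., 5.]))]
b = [(np.array([-1., -1.]), np.array([2., 2.])), (np.array([3., 3.]), np.array([4., 4.]))]
codes = [np.array([1, 0]), np.array([0, 1])]   # select super-graph 0 or 1
print(edge(a, b, codes), edge(b, a, codes))    # True True -> a directed cycle
```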
nips_2022_qx51yfvLnE | Simple and Optimal Greedy Online Contention Resolution Schemes | Matching based markets, like ad auctions, ride-sharing, and eBay, are inherently online and combinatorial, and therefore have been extensively studied under the lens of online stochastic combinatorial optimization models. The general framework that has emerged uses Contention Resolution Schemes (CRSs) introduced by Chekuri, Vondrák, and Zenklusen for combinatorial problems, where one first obtains a fractional solution to a (continuous) relaxation of the objective, and then proceeds to round it. When the order of rounding is controlled by an adversary, it is called an Online Contention Resolution Scheme (OCRSs), which has been successfully applied in online settings such as posted-price mechanisms, prophet inequalities and stochastic probing.
The study of greedy OCRSs against an almighty adversary has emerged as one of the most interesting problems since it gives a simple-to-implement scheme against the worst possible scenario. Intuitively, a greedy OCRS has to make all its decisions before the online process starts. We present simple $1/e$-selectable greedy OCRSs for the single-item setting, partition matroids, and transversal matroids. This improves upon the previous state-of-the-art greedy OCRSs of [FSZ16] that achieve $1/4$ for these constraints. Finally, we show that no better competitive ratio than $1/e$ is possible, making our greedy OCRSs the best possible.
| Accept | Executive summary:
The paper considers the design of greedy online contention resolution schemes (OCRS) for the single-item setting and certain matroids (partition matroids, transversal matroids). The main result is that there is a 1/e-selectable greedy OCRS (which improves over the best known bound of 1/4 for greedy OCRS), and that this is best possible.
Discussion and recommendation:
This is a nice little result. Not tremendously difficult, but fundamental. A plus is that the question is resolved tightly. All but one reviewer felt positively about the paper. A major concern raised in the reviews was that it's unclear why we care about greedy OCRS. In the rebuttal, the authors emphasized that greedy OCRS yield guarantees against an almighty adversary and that they recently found application in a delegation variant of the Pandora's Box problem (Bechtel, Dughmi, and Patel [EC'22]).
(Weak) accept.
---
Additional comments:
I always thought of OCRS as one of the two main techniques that have emerged for proving prophet inequalities and guarantees for posted-price mechanisms; the other being the "balanced prices" framework. I would encourage the authors to extend the discussion in the related work accordingly, and cite the most relevant works on the "balanced prices" framework. Or, at least, cite the most relevant papers in that direction (see list below).
Citations to add:
Kleinberg and Weinberg. Matroid Prophet Inequalities. STOC'12.
Feldman, Gravin, Lucier. Combinatorial Auctions via Posted Prices. SODA'15.
Dütting, Feldman, Kesselheim, Lucier. Prophet Inequalities made Easy: Stochastic Optimization by Pricing Non-Stochastic Inputs. FOCS'17.
Dütting, Kesselheim, Lucier. An O(log log m) Prophet Inequality for Subadditive Combinatorial Auctions. FOCS'20. | train | [
"vc742dYTP93",
"n4M6Y9CJAO",
"LTdken0GMOR",
"Spe12HZjo3O",
"ty9rihaoNdU",
"K4GTHPnuEWM",
"eXXJEhGAd3r",
"Qsrm0SzetvR",
"nKY9iOkjsL2"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" The reviewer has raised some very important questions regarding the benefits of using a greedy vs a non-greedy OCRS and we would like to thank them for their input. In general, greedy OCRSs are inherently much simpler than non-greedy OCRSs. Usually, to obtain a non-greedy OCRS for a non-trivial constraint, one ha... | [
-1,
-1,
-1,
-1,
-1,
4,
6,
5,
7
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"nKY9iOkjsL2",
"Qsrm0SzetvR",
"eXXJEhGAd3r",
"K4GTHPnuEWM",
"nips_2022_qx51yfvLnE",
"nips_2022_qx51yfvLnE",
"nips_2022_qx51yfvLnE",
"nips_2022_qx51yfvLnE",
"nips_2022_qx51yfvLnE"
] |
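Selectability, the quantity behind the $1/e$ figure above, is easy to estimate by simulation. The harness below uses a deliberately naive greedy rule (accept the first element the adversary reveals as active) to show how an almighty adversary can push one element's conditional acceptance far below $1/e$; the paper's randomized schemes are designed to prevent exactly this:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.9, 0.1])     # fractional solution, sum(x) <= 1 (single item)
order = [0, 1]               # almighty adversary reveals element 0 first
trials = 100_000

selected, active_ct = np.zeros(2), np.zeros(2)
for _ in range(trials):
    active = rng.random(2) < x
    active_ct += active
    for i in order:          # naive greedy rule: take the first active element
        if active[i]:
            selected[i] += 1
            break

print("per-element selectability:", selected / np.maximum(active_ct, 1))
# Element 1 is accepted only ~10% of the times it is active, far below
# 1/e ~ 0.368; a 1/e-selectable greedy OCRS must guarantee every element 1/e.
```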
nips_2022_KRk0lBRPpOC | Evaluating Latent Space Robustness and Uncertainty of EEG-ML Models under Realistic Distribution Shifts | The recent availability of large datasets in bio-medicine has inspired the development of representation learning methods for multiple healthcare applications. Despite advances in predictive performance, the clinical utility of such methods is limited when exposed to real-world data. This study develops model diagnostic measures to detect potential pitfalls before deployment without assuming access to external data. Specifically, we focus on modeling realistic data shifts in electrophysiological signals (EEGs) via data transforms and extend the conventional task-based evaluations with analyses of a) the model's latent space and b) predictive uncertainty under these transforms. We conduct experiments on multiple EEG feature encoders and two clinically relevant downstream tasks using publicly available large-scale clinical EEGs. Within this experimental setting, our results suggest that measures of latent space integrity and model uncertainty under the proposed data shifts may help anticipate performance degradation during deployment. | Accept | The paper introduces model-agnostic ways of quantifying predictive uncertainty and latent space differences in the situation when the distribution of data encountered during deployment differs from what the system was trained on and there is no access to the data itself. The model is evaluated on large scale EEG data.
The paper solves an important problem, in particular to the field of healthcare which has previously seen instances of models underperforming significantly at the time of deployment. As mentioned by the reviewers, this direction has not been sufficiently explored, so there is novelty in the problem itself, as well as in the solution. The experiments on EEG data were seen as convincing by the reviewers, though some questions were raised about the scope of the paper.
Overall, there is considerable merit in the work and I recommend acceptance of this paper. In the camera ready, the authors should make sure not to overstate the applicability of their method. While this work could, in theory, be applied (or adapted) to other data, the merits of it outside of models trained on EEG data have not been demonstrated and should therefore not be stated as a given.
Reviewers x119 and PL6S have not engaged in the discussion although the authors responded to the issues they raised, which I kept in mind when issuing my recommendation. | train | [
"7AcK2YEF92Y",
"a6vACrBtRDN",
"p-VqGNGSTx6",
"2sQnDS7mgxk",
"RIZjg7YwyKK",
"PwvuqOGIszn",
"LkLEkmowWmls",
"5LQD4ZYSR3F",
"aJrByuJSur-",
"MqFJNswcTCZ",
"5eyHcHO8fm"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank all reviewers for their feedback on the initial submission.\n\nWe have incorporated their suggestions into the revised manuscript, and the changes are highlighted in blue.",
" Thank you for your response. \n\n> The authors state that lower is better in prediction uncertainty -- that seems misleading\n\... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3,
5
] | [
"nips_2022_KRk0lBRPpOC",
"p-VqGNGSTx6",
"PwvuqOGIszn",
"5eyHcHO8fm",
"MqFJNswcTCZ",
"aJrByuJSur-",
"5LQD4ZYSR3F",
"nips_2022_KRk0lBRPpOC",
"nips_2022_KRk0lBRPpOC",
"nips_2022_KRk0lBRPpOC",
"nips_2022_KRk0lBRPpOC"
] |
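The diagnostics described above boil down to two per-transform measurements: predictive uncertainty and latent-space drift. A stand-in Python sketch (random linear "encoder" and classifier, additive-noise shift; a real diagnostic would plug in the trained EEG feature encoder and physiologically motivated transforms):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

W_enc = rng.standard_normal((32, 256)) / 16       # stand-in feature encoder
W_clf = rng.standard_normal((2, 32)) / 6          # stand-in classifier head
encode = lambda x: np.tanh(x @ W_enc.T)

x = rng.standard_normal((128, 256))               # "clean" EEG windows
x_shift = x + 0.8 * rng.standard_normal(x.shape)  # toy distribution shift

for name, batch in [("clean", x), ("shifted", x_shift)]:
    z = encode(batch)
    p = softmax(z @ W_clf.T)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1).mean()  # predictive uncertainty
    drift = np.linalg.norm(z - encode(x), axis=1).mean()   # latent-space drift
    print(f"{name}: mean entropy {entropy:.3f}, latent drift {drift:.3f}")
```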
nips_2022_TiZYrQ-mPup | COLD Decoding: Energy-based Constrained Text Generation with Langevin Dynamics | Many applications of text generation require incorporating different constraints to control the semantics or style of generated text. These constraints can be hard (e.g., ensuring certain keywords are included in the output) and soft (e.g., contextualizing the output with the left- or right-hand context). In this paper, we present Energy-based Constrained Decoding with Langevin Dynamics (COLD), a decoding framework which unifies constrained generation as specifying constraints through an energy function, then performing efficient differentiable reasoning over the constraints through gradient-based sampling. COLD decoding is a flexible framework that can be applied directly to off-the-shelf left-to-right language models without the need for any task-specific fine-tuning, as demonstrated through three challenging text generation applications: lexically-constrained generation, abductive reasoning, and counterfactual reasoning. Our experiments on these constrained generation tasks point to the effectiveness of our approach, both in terms of automatic and human evaluation. | Accept | This paper proposes a framework for controlled or constrained text generation where the constraints are encoded with energy-based models (EBMs). Generation proceeds in two steps: first, the discrete words are relaxed into continuous vectors of logits (scores), which enables the use of gradient methods and Langevin dynamics to obtain a sample. Then, to obtain actual words (a discrete output), an LM is used for top-k filtering and the word with the largest score is chosen. The paper demonstrates the applicability of the proposed framework in three different generation tasks: controlled, abductive, and counterfactual generation.
All reviewers agree (and I agree with them) that this is a solid paper which brings Langevin dynamics (a well-known technique mainly used with continuous outputs, e.g. in vision tasks) to text generation. Although the use of this technique with EBMs is certainly not novel, making it work for text is non-trivial. The reviewers suggest several improvements: reporting the runtimes (the proposed method requires many gradient iterations, which brings a considerable slowdown), making the comparison against more baselines explicit, and adding references and discussion related to recent work such as FUDGE and Mix-and-Match. The author response was satisfactory and promised to add these details to the final version.
I strongly encourage the authors to incorporate the reviewers' suggestions in their final version.
| train | [
"rhcPnxOSaJs",
"iMrVDR0bok8",
"iLxiDCQupUv",
"zwSRt-i7JA",
"7BUE2jla3XH",
"wOIsVw_TaEr",
"-ChDtwF_EUg",
"I7YzsFnVjtg",
"7FsfhfLud5U",
"SM7NhG6xKJq",
"LB_C9-PutbR",
"wrXs0TloGXg"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for raising the score! We'll incorporate the new results & comparison in the revised version!",
" Dear authors,\n\nThank you for clarification and providing additional results. I remain positive on my previous assessment.\n\nBest,\nReviewer 5VZW",
" Hi authors, thanks for your detailed response with... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
8,
7,
8
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4,
3
] | [
"iLxiDCQupUv",
"wOIsVw_TaEr",
"zwSRt-i7JA",
"wrXs0TloGXg",
"LB_C9-PutbR",
"SM7NhG6xKJq",
"7FsfhfLud5U",
"nips_2022_TiZYrQ-mPup",
"nips_2022_TiZYrQ-mPup",
"nips_2022_TiZYrQ-mPup",
"nips_2022_TiZYrQ-mPup",
"nips_2022_TiZYrQ-mPup"
] |
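At the core of the framework above is a plain Langevin update on relaxed ("soft") token logits. A toy Python sketch of that update (the quadratic energy below is a stand-in for the paper's fluency and constraint energies, which are differentiated through the language model; the final argmax stands in for the LM-guided top-k discretization):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([2.0, -1.0, 0.5])    # "fluency" target logits (toy)
c = np.array([0.0, 1.0, 1.0])      # "constraint" target logits (toy)
lam = 0.5
grad_E = lambda y: (y - mu) + lam * (y - c)   # gradient of the toy energy E(y)

y, eta = np.zeros(3), 0.05
for _ in range(500):
    # Langevin dynamics: y <- y - eta * grad E(y) + sqrt(2 * eta) * noise
    y = y - eta * grad_E(y) + np.sqrt(2 * eta) * rng.standard_normal(3)

print("sampled soft logits:", y)    # concentrates near (mu + lam*c) / (1 + lam)
print("discretized token id:", int(np.argmax(y)))
```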
nips_2022_xuw7R0hP7G | From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent | Stochastic Gradient Descent (SGD) has been the method of choice for learning large-scale non-convex models. While a general analysis of when SGD works has been elusive, there has been a lot of recent progress in understanding the convergence of Gradient Flow (GF) on the population loss, partly due to the simplicity that a continuous-time analysis buys us. An overarching theme of our paper is providing general conditions under which SGD converges, assuming that GF on the population loss converges. Our main tool to establish this connection is a general \textit{converse Lyapunov} like theorem, which implies the existence of a Lyapunov potential under mild assumptions on the rates of convergence of GF. In fact, using these potentials, we show a one-to-one correspondence between rates of convergence of GF and geometrical properties of the underlying objective. When these potentials further satisfy certain self-bounding properties, we show that they can be used to provide a convergence guarantee for Gradient Descent (GD) and SGD (even when the GF path and GD/SGD paths are quite far apart). It turns out that these self-bounding assumptions are in a sense also necessary for GD/SGD to work. Using our framework, we provide a unified analysis for GD/SGD not only for classical settings like convex losses, or objectives that satisfy PL/KL properties, but also for more complex problems including Phase Retrieval and Matrix sq-root, and extending the results in the recent work of Chatterjee 2022. | Accept | This paper provides a new way of developing the convergence analysis of gradient descent (GD) and stochastic gradient descent (SGD) by leveraging the convergence of the gradient flow (GF). This framework is very general and provides new insight that future research on SGD will benefit from. All reviewers have positive feedback and I would like to recommend acceptance. | train | [
"8EKQs8ire2T",
"jxMIBlUZ8mu",
"vJ2oujS3gcz",
"vYjjIoE4a7V",
"8D_hlMvlpe",
"ByUxwLcxUAv",
"dwxGbgKkOfw",
"eiiGp6SV0JQ",
"760VeyzP7nij",
"qWm2ZjDzhnp",
"rDUumrNBA5T",
"fscxnQSZdzl",
"xzAdn926QS"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for your response. I do recognize that there is a definite novelty, as I had initially pointed out, and that it is interesting at some level. The non-convex analysis for matrix square root and phase retrieval alleviates my concerns. I have increased my score. ",
" I thank the reviewers for the detaile... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"eiiGp6SV0JQ",
"dwxGbgKkOfw",
"vYjjIoE4a7V",
"8D_hlMvlpe",
"ByUxwLcxUAv",
"xzAdn926QS",
"fscxnQSZdzl",
"rDUumrNBA5T",
"qWm2ZjDzhnp",
"nips_2022_xuw7R0hP7G",
"nips_2022_xuw7R0hP7G",
"nips_2022_xuw7R0hP7G",
"nips_2022_xuw7R0hP7G"
] |
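The GF-to-GD/SGD connection above can be probed on a one-dimensional example: integrate gradient flow with tiny Euler steps and compare against discrete gradient descent with a large step. The nonconvex test function below is a standard example commonly cited as satisfying the PL condition; the horizon and step sizes are arbitrary choices:

```python
import numpy as np

f = lambda x: x**2 + 3 * np.sin(x)**2     # nonconvex, commonly cited as PL
df = lambda x: 2 * x + 3 * np.sin(2 * x)

x_gf, h = 2.5, 1e-4                        # gradient flow via fine Euler steps
for _ in range(int(1.0 / h)):              # integrate up to time t = 1
    x_gf -= h * df(x_gf)

x_gd = 2.5                                 # discrete gradient descent
for _ in range(20):
    x_gd -= 0.1 * df(x_gd)                 # step size below 2/L, with L <= 8

print(f"GF loss at t=1: {f(x_gf):.5f}   GD loss after 20 steps: {f(x_gd):.5f}")
```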
nips_2022_yLilJ1vZgMe | Fast Neural Kernel Embeddings for General Activations | The infinite width limit has shed light on generalization and optimization aspects of deep learning by establishing connections between neural networks and kernel methods. Despite their importance, the utility of these kernel methods was limited in large-scale learning settings due to their (super-)quadratic runtime and memory complexities. Moreover, most prior works on neural kernels have focused on the ReLU activation, mainly due to its popularity but also due to the difficulty of computing such kernels for general activations. In this work, we overcome such difficulties by providing methods to work with general activations. First, we compile and expand the list of activation functions admitting exact dual activation expressions to compute neural kernels. When the exact computation is unknown, we present methods to effectively approximate them. We propose a fast sketching method that approximates any multi-layered Neural Network Gaussian Process (NNGP) kernel and Neural Tangent Kernel (NTK) matrices for a wide range of activation functions, going beyond the commonly analyzed ReLU activation. This is done by showing how to approximate the neural kernels using the truncated Hermite expansion of any desired activation functions. While most prior works require data points on the unit sphere, our methods do not suffer from such limitations and are applicable to any dataset of points in $\mathbb{R}^d$. Furthermore, we provide a subspace embedding for NNGP and NTK matrices with near input-sparsity runtime and near-optimal target dimension which applies to any \emph{homogeneous} dual activation functions with rapidly convergent Taylor expansion. Empirically, with respect to exact convolutional NTK (CNTK) computation, our method achieves $106\times$ speedup for approximate CNTK of a 5-layer Myrtle network on the CIFAR-10 dataset. | Accept | Most prior works on neural kernels have focused on using the ReLU activation. In this work, the authors provide new methods that can approximate multi-layered Neural Network Gaussian Process (NNGP) kernels and Neural Tangent Kernel (NTK) matrices for a wide range of activation functions. All four reviewers recommended acceptance of the paper. | train | [
"b6f1617ttp",
"18Vf5JHJMF",
"f6-aw8-A-A4Y",
"14uh3WkypYr",
"pjrJLjhiah",
"rAfWzMTkQR",
"8ZLgc7BlyQ",
"cghFxFA5RJ",
"h0Qzos4kz9l",
"MGDMHHQWBzm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I just wanted to caution the authors when preparing their revision regarding the condition of [44] of the polynomial. \n\n> As the reviewer mentioned, as long as the derivative of the dual activation (i.e., RHS in Eq (14)) is finite, exchanging integral with the partial derivative holds. In fact the condition of ... | [
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
7,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
1,
4,
3,
4
] | [
"14uh3WkypYr",
"14uh3WkypYr",
"8ZLgc7BlyQ",
"MGDMHHQWBzm",
"h0Qzos4kz9l",
"cghFxFA5RJ",
"nips_2022_yLilJ1vZgMe",
"nips_2022_yLilJ1vZgMe",
"nips_2022_yLilJ1vZgMe",
"nips_2022_yLilJ1vZgMe"
] |
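The Hermite route to dual activations rests on a classical identity: for unit-variance jointly Gaussian (u, v) with correlation rho, E[sigma(u) sigma(v)] = sum_r c_r^2 rho^r, where c_r are sigma's orthonormal (probabilists') Hermite coefficients. A Python sketch that builds a truncated dual activation for tanh and checks it against Monte Carlo; the truncation order and quadrature degree are arbitrary choices:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt

sigma, rho, R = np.tanh, 0.6, 20

nodes, weights = hermegauss(120)
weights = weights / weights.sum()          # quadrature for E[.] under N(0, 1)
c = np.array([(weights * sigma(nodes) * hermeval(nodes, np.eye(R + 1)[r])).sum()
              / sqrt(factorial(r)) for r in range(R + 1)])

# Truncated dual activation via the squared Hermite coefficients.
k_hermite = np.sum(c**2 * rho ** np.arange(R + 1))

rng = np.random.default_rng(0)
g1, g2 = rng.standard_normal((2, 1_000_000))
u, v = g1, rho * g1 + np.sqrt(1 - rho**2) * g2
print(k_hermite, np.mean(sigma(u) * sigma(v)))  # should agree to a few decimals
```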
nips_2022_XvI6h-s4un | On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting | The availability of large pre-trained models is changing the landscape of Machine Learning research and practice, moving from a "training from scratch" to a "fine-tuning" paradigm. While in some applications the goal is to "nudge" the pre-trained distribution towards preferred outputs, in others it is to steer it towards a different distribution over the sample space. Two main paradigms have emerged to tackle this challenge: Reward Maximization (RM) and, more recently, Distribution Matching (DM). RM applies standard Reinforcement Learning (RL) techniques, such as Policy Gradients, to gradually increase the reward signal. DM prescribes to first make explicit the target distribution that the model is fine-tuned to approximate. Here we explore the theoretical connections between the two paradigms and show that methods such as KL-control developed in the RM paradigm can also be construed as belonging to DM. We further observe that while DM differs from RM, it can suffer from similar training difficulties, such as high gradient variance. We leverage connections between the two paradigms to import the concept of baseline into DM methods. We empirically validate the benefits of adding a baseline on an array of controllable language generation tasks such as constraining topic, sentiment, and gender distributions in texts sampled from a language model. We observe superior performance in terms of constraint satisfaction, stability, and sample efficiency. | Accept | All reviewers consistently agree that this paper provides a valuable contribution in identifying the connection between different RL training paradigms for language models with clear derivations. In addition, it also develops an improved method by incorporating a baseline into DPG. | train | [
"xxZ_jrAWi8-",
"uhhWGpcDLkU",
"ckgYgfGE5p",
"rN37QexW739",
"3FhPEuTBCLI",
"BVV3rqtTgrF",
"dyIZM2ctjc0",
"po9qJpqhYNu",
"FGxGPVB2xsp",
"0fbJV-FCYNA"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thanks a lot! We agree that lines 788-90 were unsatisfactory, we just uploaded a revised version of the appendix with lines 788-90 removed.",
" Thank you for the clarification, I will increase my score.\n\nJust one additional point. Could you please remove Lines 788-790 or add a proof? It is not clear to me if ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"uhhWGpcDLkU",
"rN37QexW739",
"0fbJV-FCYNA",
"0fbJV-FCYNA",
"FGxGPVB2xsp",
"po9qJpqhYNu",
"nips_2022_XvI6h-s4un",
"nips_2022_XvI6h-s4un",
"nips_2022_XvI6h-s4un",
"nips_2022_XvI6h-s4un"
] |
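The baseline imported into DM methods above plays the same variance-reduction role as in policy gradients: subtracting a constant B from the importance weight leaves the estimator unbiased because E[grad log pi] = 0. A toy Python sketch on a softmax categorical policy (the target distribution and baseline value are illustrative; B = 1 equals E_pi[p/pi] here):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
theta = rng.standard_normal(n)
pi = np.exp(theta) / np.exp(theta).sum()     # policy pi_theta
p = rng.dirichlet(np.ones(n))                # target distribution to match

def grad_samples(baseline, m=20_000):
    xs = rng.choice(n, size=m, p=pi)
    w = p[xs] / pi[xs] - baseline            # importance weight minus baseline
    # For a softmax policy, grad_theta log pi(x) = onehot(x) - pi.
    return w[:, None] * (np.eye(n)[xs] - pi[None, :])

for b in [0.0, 1.0]:
    g = grad_samples(b)
    print(f"baseline {b}: |mean grad| {np.linalg.norm(g.mean(0)):.4f}, "
          f"mean per-coordinate variance {g.var(0).mean():.4f}")
```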
nips_2022_ZMFQtvVJr40 | Provably tuning the ElasticNet across instances | An important unresolved challenge in the theory of regularization is to set the regularization coefficients of popular techniques like the ElasticNet with general provable guarantees. We consider the problem of tuning the regularization parameters of Ridge regression, LASSO, and the ElasticNet across multiple problem instances, a setting that encompasses both cross-validation and multi-task hyperparameter optimization. We obtain a novel structural result for the ElasticNet which characterizes the loss as a function of the tuning parameters as a piecewise-rational function with algebraic boundaries. We use this to bound the structural complexity of the regularized loss functions and show generalization guarantees for tuning the ElasticNet regression coefficients in the statistical setting. We also consider the more challenging online learning setting, where we show vanishing average expected regret relative to the optimal parameter pair. We further extend our results to tuning classification algorithms obtained by thresholding regression fits regularized by Ridge, LASSO, or ElasticNet. Our results are the first general learning-theoretic guarantees for this important class of problems that avoid strong assumptions on the data distribution. Furthermore, our guarantees hold for both validation and popular information criterion objectives. | Accept | The reviewers agreed that this paper should be accepted -- it studies an interesting and in some ways "overlooked" problem and seems like it could be a starting point for others to build on. The paper did have some weaknesses. For example, we feel that the lack of experimental results is a missed opportunity -- the reviewers feel the paper would be made stronger by including at least a simple example where the standard approach of grid search + CV fails. Without such an example, it is a bit hard to be convinced of the importance of the problem being studied. | train | [
"2nScabmnZIm",
"tkUEnl5N5U6",
"ZG-kDdSH7Mi",
"TNis6WZqYKC",
"BPex0l4k1-",
"X8uITiYDYw",
"AXdrA_cdyn",
"DZ3cR_AGsLq",
"iWIuUZ0bj56",
"i9JyBS-qIJ"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the additional comments and clarification. I think I see your point and have updated my review again. I still contend that classification via the ordinary elastic net is not relevant (simply being implemented in a software suite and having been used in a well-known reference is not a strong argument... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
2
] | [
"tkUEnl5N5U6",
"TNis6WZqYKC",
"BPex0l4k1-",
"AXdrA_cdyn",
"DZ3cR_AGsLq",
"iWIuUZ0bj56",
"i9JyBS-qIJ",
"nips_2022_ZMFQtvVJr40",
"nips_2022_ZMFQtvVJr40",
"nips_2022_ZMFQtvVJr40"
] |
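The object the guarantees above concern is the everyday two-knob tuning loop for the ElasticNet. A small sklearn sketch of that loop on synthetic data (the data, grid, and split are arbitrary; note that sklearn parameterizes the penalty as alpha * (l1_ratio * ||b||_1 + 0.5 * (1 - l1_ratio) * ||b||_2^2)):

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 30))
beta = np.r_[rng.standard_normal(5), np.zeros(25)]   # sparse ground truth
y = X @ beta + 0.5 * rng.standard_normal(120)
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

best = (np.inf, None)
for alpha in np.logspace(-3, 0, 10):
    for l1_ratio in [0.1, 0.5, 0.9]:
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, max_iter=10_000)
        model.fit(Xtr, ytr)
        val = np.mean((model.predict(Xva) - yva) ** 2)   # validation objective
        if val < best[0]:
            best = (val, (alpha, l1_ratio))
print("best validation MSE:", best[0], "at (alpha, l1_ratio) =", best[1])
```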
nips_2022_6iqd9JAVR1z | LAMP: Extracting Text from Gradients with Language Model Priors | Recent work shows that sensitive user data can be reconstructed from gradient updates, breaking the key privacy promise of federated learning. While success was demonstrated primarily on image data, these methods do not directly transfer to other domains such as text. In this work, we propose LAMP, a novel attack tailored to textual data, that successfully reconstructs original text from gradients. Our attack is based on two key insights: (i) modelling prior text probability via an auxiliary language model, guiding the search towards more natural text, and (ii) alternating continuous and discrete optimization which minimizes reconstruction loss on embeddings while avoiding local minima via discrete text transformations. Our experiments demonstrate that LAMP is significantly more effective than prior work: it reconstructs 5x more bigrams and $23\%$ longer subsequences on average. Moreover, we are first to recover inputs from batch sizes larger than 1 for textual models. These findings indicate that gradient updates of models operating on textual data leak more information than previously thought.
 | Accept | This paper describes a novel method to recover the input text based on the computed gradient. This is important in the context of federated learning, which promises to enable learning through gradient sharing while keeping the input text secret. The findings of the paper demonstrate that gradients are sufficient to recover significant parts of the input text, questioning the federated learning premise, at least in the context of large language models.
The approach is novel and technically sound. Empirical results are convincing. The paper is well-written and clear.
Given current trends toward growing model sizes, it would be great if the paper could further scale the experimental results to larger models. | train | [
"51t6WbfaAYT",
"v0Flb7f_CQ-",
"46_yu1JUcje",
"xtVTVZ5dsNr",
"oXLEDEq3Hd",
"IclBFCSSR5L",
"ZmzChtq0VTq",
"xlafS_zu21J",
"oZReCFaBy-w"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for the response! I am satisfied with the response, and I would like to increase my score to 7 given that the authors will update the paper accordingly as promised. ",
" **Q:** Can you include a discussion of the negative societal impact of your work?\n\n**A:** The current version of the paper notes (... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"IclBFCSSR5L",
"oZReCFaBy-w",
"oZReCFaBy-w",
"xlafS_zu21J",
"ZmzChtq0VTq",
"ZmzChtq0VTq",
"nips_2022_6iqd9JAVR1z",
"nips_2022_6iqd9JAVR1z",
"nips_2022_6iqd9JAVR1z"
] |
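The simplest instance of the leakage above is a single linear regression layer, whose weight gradient is exactly collinear with the private input, so the input's direction can be read off with no search at all. A Python sketch of this observation (deeper models instead require iterative gradient matching, which LAMP augments with a language-model prior and discrete token swaps):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
w = rng.standard_normal(d)
x_true = rng.standard_normal(d)            # the private input
y = 1.0

# Gradient of the squared loss (w @ x - y)**2 with respect to the weights w:
g_obs = 2 * (w @ x_true - y) * x_true      # what a federated client would share

# g_obs is a scalar multiple of x_true, so its direction leaks immediately.
x_dir = g_obs / np.linalg.norm(g_obs)
cos = abs(x_dir @ x_true) / np.linalg.norm(x_true)
print("cosine between recovered direction and private input:", cos)  # ~1.0
```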
nips_2022_uPyNR2yPoe | ELIGN: Expectation Alignment as a Multi-Agent Intrinsic Reward | Modern multi-agent reinforcement learning frameworks rely on centralized training and reward shaping to perform well. However, centralized training and dense rewards are not readily available in the real world. Current multi-agent algorithms struggle to learn in the alternative setup of decentralized training or sparse rewards. To address these issues, we propose a self-supervised intrinsic reward \textit{ELIGN - expectation alignment - } inspired by the self-organization principle in Zoology. Similar to how animals collaborate in a decentralized manner with those in their vicinity, agents trained with expectation alignment learn behaviors that match their neighbors' expectations. This allows the agents to learn collaborative behaviors without any external reward or centralized training. We demonstrate the efficacy of our approach across 6 tasks in the multi-agent particle and the complex Google Research football environments, comparing ELIGN to sparse and curiosity-based intrinsic rewards. When the number of agents increases, ELIGN scales well in all multi-agent tasks except for one where agents have different capabilities. We show that agent coordination improves through expectation alignment because agents learn to divide tasks amongst themselves, break coordination symmetries, and confuse adversaries. These results identify tasks where expectation alignment is a more useful strategy than curiosity-driven exploration for multi-agent coordination, enabling agents to do zero-shot coordination. | Accept | This paper introduces a novel method for decentralised training in both cooperative and competitive environments.
The main insight is that agents should be predictable to their teammates but unpredictable to adversaries.
Crucially, each agent relies on a local model to optimise this objective, making it compatible with decentralised training.
There were a few concerns from reviewers around both how the method applies to the decentralised training regime and regarding the naming. The authors managed to address the concerns appropriately, leading the actively engaged reviewers to increase their scores substantially. Reviewer TEM3 kept their rating at a borderline reject even though the concerns were addressed by the rebuttal. As such, I recommend discarding this review. Reviewer whMR stated that they would increase their score but, to the best of my knowledge, didn't do so. Again, their review should be seen in this context.
| train | [
"ESWkpD_WJnO",
"lBs2evBNlm0",
"e5aL1dHJcFf",
"z1Djip8D3-",
"xsVSbulhZ9",
"gt5w_iimi_Q",
"gyM5HauTpo",
"8x693EsWUXY",
"j3unhLKZKvW",
"Uao9yoO0x5rF",
"j1BeNYwEdP_",
"xSs0Zyr9PWa",
"XQiZsIns8qJ",
"EvjfPkH0lw",
"T2CTb47Yrk3",
"Qkwi_AwwObV",
"4GtxXGplEJ",
"M95nZAxAM4M",
"dZAl1AHA5z7"
... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"... | [
" Thank you for the authors' response. Although I still think the authors should tune the temperature parameter in all considered environments, I will increase my score.\n\n\n",
" We ended up considering a number of other options: expectation matching, synchronization, invariability, stability, standardization, ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
7,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
4,
3
] | [
"e5aL1dHJcFf",
"xsVSbulhZ9",
"gt5w_iimi_Q",
"EvjfPkH0lw",
"XQiZsIns8qJ",
"j1BeNYwEdP_",
"8x693EsWUXY",
"j3unhLKZKvW",
"4GtxXGplEJ",
"dZAl1AHA5z7",
"xSs0Zyr9PWa",
"Qkwi_AwwObV",
"M95nZAxAM4M",
"T2CTb47Yrk3",
"nips_2022_uPyNR2yPoe",
"nips_2022_uPyNR2yPoe",
"nips_2022_uPyNR2yPoe",
"ni... |
nips_2022_82N_rasrUT_ | Explicable Policy Search | Human teammates often form conscious and subconscious expectations of each other during interaction. Teaming success is contingent on whether such expectations can be met. Similarly, for an intelligent agent to operate beside a human, it must consider the human’s expectation of its behavior. Disregarding such expectations can lead to the loss of trust and degraded team performance. A key challenge here is that the human’s expectation may not align with the agent’s optimal behavior, e.g., due to the human’s partial or inaccurate understanding of the task domain. Prior work on explicable planning described the ability of agents to respect their human teammate’s expectations by trading off task performance for more expected or “explicable” behaviors. In this paper, we introduce Explicable Policy Search (EPS) to significantly extend such an ability to stochastic domains in a reinforcement learning (RL) setting with continuous state and action spaces. Furthermore, in contrast to the traditional RL methods, EPS must at the same time infer the human’s hidden expectations. Such inferences require information about the human’s belief about the domain dynamics and her reward model but directly querying them is impractical. We demonstrate that such information can be necessarily and sufficiently encoded by a surrogate reward function for EPS, which can be learned based on the human’s feedback on the agent’s behavior. The surrogate reward function is then used to reshape the agent’s reward function, which is shown to be equivalent to searching for an explicable policy. We evaluate EPS in a set of navigation domains with synthetic human models and in an autonomous driving domain with a user study. The results suggest that our method can generate explicable behaviors that reconcile task performance with human expectations intelligently and has real-world relevance in human-agent teaming domains. | Accept | I thank the authors for their submission and active participation in the discussions. This paper studies the problem of devising an RL planner that produces behaviour consistent with human observer preferences. Reviewers remarked that the paper studies a timely problem [VqJ9,gaQq,5GPj], contains clear writing [VqJ9,PBHQ] and useful visualizations [VqJ9], and provides insightful human evaluations [VqJ9,gaQq,5GPj], and that the method is sound and elegant [PBHQ,TX5X]. During AC/reviewer discussion, reviewers TX5X, gaQq and VqJ9 agree that the author response has addressed the main concerns. I am slightly discounting the negative score by reviewer PBHQ as I don't find their suggestion to include more experiments and real-world data very concrete or actionable. Thus, I overall see support for accepting the paper and am therefore recommending acceptance while encouraging the authors to further improve their paper based on the reviewer feedback.
| val | [
"XX6v91K5U54",
"49Z2P4mjcWJ",
"RircSeee5Nw",
"86z9U1VXO6u",
"pW1QBozU4m",
"ydxjvkilLs1",
"YRGfH5dlxUU",
"6TrCViVmkD",
"VC4OWOSUw6s",
"oBLh_4VtINoy",
"ICtcQpp3CSx",
"IetrbjbmUad",
"_X35Cvbinlr",
"l8cAHf1JsG",
"_78FBLeWN5I",
"ds_DLOQM3Re",
"YrlpZEXW6hi",
"3TaHhTw-T_l",
"hgbfbH7mpw8... | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_re... | [
" I want to thank the authors for their detailed answers and clarifications, as well as the changes in the manuscript. I agree with other reviewers that the changes improve the quality of the submission. Lastly, sorry for the belated reply to the response.\n\nMy main uncertainty about the paper dealt with seeing th... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
3,
5,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
3,
5,
3
] | [
"_X35Cvbinlr",
"86z9U1VXO6u",
"pW1QBozU4m",
"ICtcQpp3CSx",
"IetrbjbmUad",
"YRGfH5dlxUU",
"VC4OWOSUw6s",
"nips_2022_82N_rasrUT_",
"oBLh_4VtINoy",
"_78FBLeWN5I",
"sGx---8VEvV",
"hgbfbH7mpw8",
"3TaHhTw-T_l",
"YrlpZEXW6hi",
"ds_DLOQM3Re",
"nips_2022_82N_rasrUT_",
"nips_2022_82N_rasrUT_",... |
nips_2022_WBv9Z6qpA8x | A Practical, Progressively-Expressive GNN | Message passing neural networks (MPNNs) have become a dominant flavor of graph neural networks (GNNs) in recent years. Yet, MPNNs come with notable limitations; namely, they are at most as powerful as the 1-dimensional Weisfeiler-Leman (1-WL) test in distinguishing graphs in a graph isomorphism testing frame-work. To this end, researchers have drawn inspiration from the k-WL hierarchy to develop more expressive GNNs. However, current k-WL-equivalent GNNs are not practical for even small values of k, as k-WL becomes combinatorially more complex as k grows. At the same time, several works have found great empirical success in graph learning tasks without highly expressive models, implying that chasing expressiveness with a “coarse-grained ruler” of expressivity like k-WL is often unneeded in practical tasks. To truly understand the expressiveness-complexity tradeoff, one desires a more “fine-grained ruler,” which can more gradually increase expressiveness. Our work puts forth such a proposal: Namely, we first propose the (k, c)(≤)-SETWL hierarchy with greatly reduced complexity from k-WL, achieved by moving from k-tuples of nodes to sets with ≤k nodes defined over ≤c connected components in the induced original graph. We show favorable theoretical results for this model in relation to k-WL, and concretize it via (k, c)(≤)-SETGNN, which is as expressive as (k, c)(≤)-SETWL. Our model is practical and progressively-expressive, increasing in power with k and c. We demonstrate effectiveness on several benchmark datasets, achieving several state-of-the-art results with runtime and memory usage applicable to practical graphs. We open source our implementation at https://github.com/LingxiaoShawn/KCSetGNN.
| Accept | The papers recognizes an important open question in GNNs: expressiveness-complexity tradeoff. Current more expressive GNNs (k-WL equivalent GNNs) are not practical for even small values of k, and several works have found great empirical success in graph learning tasks without highly expressive models. This paper puts forth a more “fine-grained ruler,” which can more gradually increase expressiveness to investigate the expressiveness-complexity tradeoff.
The paper addresses an important problem of going beyond 1-WL, however, in such a manner such that expressivity is improved in a progressive way while runtime is still manageable. Experimental results on graph classification and substructure identification as well as regression Zinc12k are promising.
However, the committee has concerns on the presentation of the paper. The authors are suggested to make the paper more accessible by providing more high-level descriptions and interpretations of the results. | val | [
"FOUkKnEdDf",
"uUINGziq07r",
"lSvjti_MTbx",
"HOZ9Z0hKPsW",
"gB5WebYIJzR",
"9zp7Rk_y57",
"HRPCqwdJvs",
"r-E43GtO7xi",
"z0pAdb7bsU0",
"BuOI7kgE3_L",
"PzQTnTal-H4",
"3CfWXeRDkTm",
"UHq_mSFzu3Y",
"w0wIIulaBkI",
"0OJXkw93fxg",
"a66uBxBPMhq",
"YuqR5h-eBWS",
"pkXoFgeluTq"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" I encourage the authors to use the empirical results to prove strictness for some k and c.\nThat is, use some pairs in EXP and SR to show they become distinguishable only with greater k and c.\nAlso, please note that SpeqNet proves strictness.\n\nI have increased my score to a Borderline reject.",
" Thank you f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
7,
5,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4,
4
] | [
"lSvjti_MTbx",
"UHq_mSFzu3Y",
"gB5WebYIJzR",
"9zp7Rk_y57",
"HRPCqwdJvs",
"z0pAdb7bsU0",
"r-E43GtO7xi",
"BuOI7kgE3_L",
"3CfWXeRDkTm",
"w0wIIulaBkI",
"0OJXkw93fxg",
"a66uBxBPMhq",
"YuqR5h-eBWS",
"pkXoFgeluTq",
"nips_2022_WBv9Z6qpA8x",
"nips_2022_WBv9Z6qpA8x",
"nips_2022_WBv9Z6qpA8x",
... |
nips_2022_F_9w7Wl78IH | The Impact of Task Underspecification in Evaluating Deep Reinforcement Learning | Evaluations of Deep Reinforcement Learning (DRL) methods are an integral part of scientific progress of the field. Beyond designing DRL methods for general intelligence, designing task-specific methods is becoming increasingly prominent for real-world applications. In these settings, the standard evaluation practice involves using a few instances of Markov Decision Processes (MDPs) to represent the task. However, many tasks induce a large family of MDPs owing to variations in the underlying environment, particularly in real-world contexts. For example, in traffic signal control, variations may stem from intersection geometries and traffic flow levels. The select MDP instances may thus inadvertently cause overfitting, lacking the statistical power to draw conclusions about the method's true performance across the family. In this article, we augment DRL evaluations to consider parameterized families of MDPs. We show that in comparison to evaluating DRL methods on select MDP instances, evaluating the MDP family often yields a substantially different relative ranking of methods, casting doubt on what methods should be considered state-of-the-art. We validate this phenomenon in standard control benchmarks and the real-world application of traffic signal control. At the same time, we show that accurately evaluating on an MDP family is nontrivial. Overall, this work identifies new challenges for empirical rigor in reinforcement learning, especially as the outcomes of DRL trickle into downstream decision-making. | Accept | This paper proposes to evaluate deep RL algorithms on a family of MDPs, rather than single 'point' MDPs. This seems a reasonable way to reduce performance variance, though I found the paper to be quite lacking in terms of depth, realism, and real impact.
In particular, the construction of the family of MDPs seems like a hard problem in general, which does not get a deep treatment in this paper. For instance, for many 'point' MDPs that might actually be of real-world interest, it might be quite easy to construct a family of MDPs in a seemingly reasonable way, but one which changes the conclusions in such a way that they do not generalise to problems of actual interest anymore. Furthermore, as also raised by reviewers, if the goal is to evaluate the generality of an algorithm, it would seem much better to evaluate on a benchmark of carefully-chosen (e.g., actually-interesting) diverse point MDPs than on a family of similar MDPs constructed around a single point. Finally, many 'point' MDPs might contain substantial randomness themselves, and the distinction and similarities between sampling multiple random (task) seeds on a single point MDP versus sampling from a family of MDPs is not sufficiently discussed.
Despite these limitations, I think it is good for the community to engage in discussions on how best to evaluate its methods. I will therefore take the recommendation of the reviewers and accept this paper. I hope the comments on limitations above will resonate with the authors, or might inspire reasonable pushback about parts on which I'm perhaps being naive, given the authors will likely have thought about these issues a great deal. | val | [
"MvJuVo3yh2",
"GpMKhkZNme9",
"_H_na3wCCK",
"an17oeuNhIU",
"fnNbG0vTNbl",
"430dEEAIyP-",
"xSxwo3RBVhi",
"UQ4yZK5SVdi",
"Cb_K3LBPG5M",
"jLiderbCOCZU",
"5Ct-snQvj4W0",
"wkWXK4XeeYW",
"dRIJWv-sClv",
"XxWJc32JYRl",
"FU5m8hHAqT",
"tHciR3FHdB",
"MEHQFgBbnHB",
"cE1akf5bjoE",
"3nXF4eqX96C... | [
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official... | [
" We are thankful to the reviewer for taking a careful look at our rebuttal and taking the time to revise their assessment! We are glad that the reviewer found our new experiments interesting. We also would like to thank the reviewer for pointing out the debate between the family of MDPs from the same task vs. a su... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"GpMKhkZNme9",
"wkWXK4XeeYW",
"fnNbG0vTNbl",
"430dEEAIyP-",
"UQ4yZK5SVdi",
"XxWJc32JYRl",
"A42zzO5-trW",
"A42zzO5-trW",
"EH7kMADdQoN",
"EH7kMADdQoN",
"EH7kMADdQoN",
"EH7kMADdQoN",
"_s35tVibrQV",
"_s35tVibrQV",
"nips_2022_F_9w7Wl78IH",
"nips_2022_F_9w7Wl78IH",
"nips_2022_F_9w7Wl78IH",... |
nips_2022_ffy-h0GKZbK | Chaotic Dynamics are Intrinsic to Neural Network Training with SGD | With the advent of deep learning over the last decade, a considerable amount of effort has gone into better understanding and enhancing Stochastic Gradient Descent so as to improve the performance and stability of artificial neural network training. Active research fields in this area include exploiting second order information of the loss landscape and improving the understanding of chaotic dynamics in optimization. This paper exploits the theoretical connection between the curvature of the loss landscape and chaotic dynamics in neural network training to propose a modified SGD ensuring non-chaotic training dynamics to study the importance thereof in NN training. Building on this, we present empirical evidence suggesting that the negative eigenspectrum - and thus directions of local chaos - cannot be removed from SGD without hurting training performance. Extending our empirical analysis to long-term chaos dynamics, we challenge the widespread understanding of convergence against a confined region in parameter space. Our results show that although chaotic network behavior is mostly confined to the initial training phase, models perturbed upon initialization do diverge at a slow pace even after reaching top training performance, and that their divergence can be modelled through a composition of a random walk and a linear divergence. The tools and insights developed as part of our work contribute to improving the understanding of neural network training dynamics and provide a basis for future improvements of optimization methods. | Accept | This paper studies the role of (local) chaos in determining the training dynamics of neural networks. The authors first introduce a standard global notion of chaos via the Lyapunov matrix and introduce a greedy local version which determines whether the dynamics are “locally chaotic”. The authors relate these dynamics to the eigenvalues of the hessian. The authors close with interesting experiments showing that the chaotic, negative, eigenvalues of the hessian are important for training.
This paper generated some debate and ultimately two reviewers favored acceptance, while one reviewer wanted to reject the paper. The two reviewers who wanted to accept the paper appreciated the new emphasis that the paper brought to the negative eigenvalues of the Hessian. The negative reviewer focused on weak experiments and thought the analysis was too similar to the noisy quadratic model. Ultimately, I do think the analysis done here might provide some insight not present in the NQM, since, as the authors note, they consider the spectrum throughout training whereas the NQM considers a noise model on top of a fixed Hessian. I share the positive reviewers' feeling that the results about the negative eigenvalues seem quite interesting. At the same time, I do share the negative reviewer's concern that the experiments might be too simplistic to draw interesting inferences from.
Ultimately, I do think the pros probably outweigh the cons of accepting this paper. However, I would ask the authors whether the description in terms of chaos is apt since, as they note, they generally only discuss the local behavior, for which the notion of chaos doesn't necessarily make sense.
 | train | ["jBwf9jCg-Zy", "NjkxrtanAp", "Aq5dCTqy0M", "_BhCTIGM3iw", "H2hdmHGSodM", "MIeLcLBsYUC8", "QmQkDmZpb_r", "B_8VFvAVBpx", "knFLnAhFGZY", "1PWKSMKVgfH"] | ["author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer"] | [" Thank you for reading our response. We would be thankful if you could pinpoint what aspects of our current analysis you believe to have been shown through quadratic models (ideally by referring us to a paper) so we can more accurately address this issue. To our knowledge, you are referring to Neural Quadratic Mod... | [-1, -1, -1, -1, -1, -1, -1, 6, 3, 6] | [-1, -1, -1, -1, -1, -1, -1, 4, 3, 4] | ["_BhCTIGM3iw", "MIeLcLBsYUC8", "H2hdmHGSodM", "QmQkDmZpb_r", "1PWKSMKVgfH", "B_8VFvAVBpx", "knFLnAhFGZY", "nips_2022_ffy-h0GKZbK", "nips_2022_ffy-h0GKZbK", "nips_2022_ffy-h0GKZbK"] |
nips_2022_6dfYc2IUj4 | A PAC-Bayesian Generalization Bound for Equivariant Networks | Equivariant networks capture the inductive bias about the symmetry of the learning task by building those symmetries into the model. In this paper, we study how equivariance relates to generalization error utilizing PAC-Bayesian analysis for equivariant networks, where the transformation laws of feature spaces are determined by group representations. By using perturbation analysis of equivariant networks in the Fourier domain for each layer, we derive norm-based PAC-Bayesian generalization bounds. The bound characterizes the impact of group size, and of the multiplicity and degree of irreducible representations, on the generalization error and thereby provides a guideline for selecting them. In general, the bound indicates that using a larger group size in the model improves the generalization error, as substantiated by extensive numerical experiments.
 | Accept | This is a borderline paper studying an interesting question around generalization bounds for equivariant networks. Initially there were significant concerns around the presentation of the key results and related work. During the rebuttal phase, the authors updated the manuscript as per the reviewers' suggestions, resulting in a significantly better manuscript. I applaud the efforts of all the reviewers who engaged with the authors, leading to a better submission. One reviewer still kept their negative score, but the other reviewers and I believe their concerns were addressed in the updated manuscript.
Overall I recommend acceptance, and it is important that the authors revise the manuscript to highlight the key assumptions about the Fourier space upfront, following Reviewer fd7r's suggestions. | test | ["Li9arZ7hbjX", "eIQ8i1X85f", "f3BA-oG4P8", "Q1YexlQM6jX", "US-cSTIrZE", "_ma4E6GeW73", "3cq8svaNVaX", "6cbEYHWrJjf", "z8W1ycoXBor", "H9G4KlWb4l2t", "axvuoX4y8dJ", "G0wWG4Iaqs", "o9hJG9BZ-bi", "Lhk3ELwBWY9", "M714yrsOe4c"] | ["official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [" I thank the authors for updating their paper. This version is indeed much better. Some comments/concerns still remain which can be addressed in the final version:\n- Requiring parameters in Fourier space needs to be clearly listed as an assumption and potentially even a limitation. You state this in your paper, e... | [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 6] | [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 4] | ["Q1YexlQM6jX", "f3BA-oG4P8", "_ma4E6GeW73", "3cq8svaNVaX", "nips_2022_6dfYc2IUj4", "z8W1ycoXBor", "6cbEYHWrJjf", "o9hJG9BZ-bi", "M714yrsOe4c", "Lhk3ELwBWY9", "G0wWG4Iaqs", "nips_2022_6dfYc2IUj4", "nips_2022_6dfYc2IUj4", "nips_2022_6dfYc2IUj4", "nips_2022_6dfYc2IUj4"] |
nips_2022_1vusesyN7E | Autoregressive Perturbations for Data Poisoning | The prevalence of data scraping from social media as a means to obtain datasets has led to growing concerns regarding unauthorized use of data. Data poisoning attacks have been proposed as a bulwark against scraping, as they make data ``unlearnable'' by adding small, imperceptible perturbations. Unfortunately, existing methods require knowledge of both the target architecture and the complete dataset so that a surrogate network can be trained, the parameters of which are used to generate the attack. In this work, we introduce autoregressive (AR) poisoning, a method that can generate poisoned data without access to the broader dataset. The proposed AR perturbations are generic, can be applied across different datasets, and can poison different architectures. Compared to existing unlearnable methods, our AR poisons are more resistant to common defenses such as adversarial training and strong data augmentations. Our analysis further provides insight into what makes an effective data poison. | Accept | The paper proposes a novel autoregressive perturbation method to make data unlearnable. The method is independent of both models and data, which makes it easier to use. Reviewers found the idea novel and intuitively reasonable. The authors responded to the reviewers' detailed questions about the method and experiments. The rebuttal succeeded in removing the confusion and convinced us of the empirical significance. We suggest the authors improve the paper according to the review comments in the next version. | train | ["Id4392fnArt", "7Et5iM7eKiP", "8bm3gkjHUrR", "06lSfSHO3eU", "Pv2nO2V8knN", "yTu-DyBbts", "wV2VDIlYnkc", "5OtTI6NCsLA", "yc0AsyS3sjy", "LH7WFi7pZK1", "4s6zQaGSY6T", "uTTNgZIVubW", "9y3pjgOA3cN", "67TlknM7kNa", "CnrzvU3QDMJ", "Rkvj0Te2L0l"] | ["official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer"] | [" Thank you for the response. I already increased the score to 5 (borderline accept). \n\nWhat the paper proposes is a defense to make the data unlearnable, so developing novel defenses means developing adaptive attacks? In some related literature, designing adaptive attacks is considered necessary to verify the ef... | [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 5] | [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4] | ["7Et5iM7eKiP", "8bm3gkjHUrR", "06lSfSHO3eU", "Rkvj0Te2L0l", "LH7WFi7pZK1", "yc0AsyS3sjy", "4s6zQaGSY6T", "uTTNgZIVubW", "Rkvj0Te2L0l", "CnrzvU3QDMJ", "67TlknM7kNa", "9y3pjgOA3cN", "nips_2022_1vusesyN7E", "nips_2022_1vusesyN7E", "nips_2022_1vusesyN7E", "nips_2022_1vusesyN7E"] |
nips_2022_SiSv_XDMksL | Near-Optimal No-Regret Learning Dynamics for General Convex Games | A recent line of work has established uncoupled learning dynamics such that, when employed by all players in a game, each player's regret after $T$ repetitions grows polylogarithmically in $T$, an exponential improvement over the traditional guarantees within the no-regret framework. However, so far these results have been limited to certain classes of games with structured strategy spaces---such as normal-form and extensive-form games. Whether $O(\mathrm{polylog} T)$ regret bounds can be obtained for general convex and compact strategy sets---as is the case in many fundamental models in economics and multiagent systems---while retaining efficient strategy updates is an important open question. In this paper, we answer this in the positive by establishing the first uncoupled learning algorithm with $O(\log T)$ per-player regret in general convex games, that is, games with concave utility functions supported on arbitrary convex and compact strategy sets. Our learning dynamics are based on an instantiation of optimistic follow-the-regularized-leader over an appropriately lifted space using a self-concordant regularizer that is peculiarly not a barrier for the feasible region. Our learning dynamics are efficiently implementable given access to a proximal oracle for the convex strategy set, leading to $O(\log\log T)$ per-iteration complexity; we also give extensions when access to only a linear optimization oracle is assumed. Finally, we adapt our dynamics to guarantee $O(\sqrt{T})$ regret in the adversarial regime. Even in those special cases where prior results apply, our algorithm improves over the state-of-the-art regret bounds either in terms of the dependence on the number of iterations or on the dimension of the strategy sets. | Accept | Reviewers are all positive and appreciate the theoretical contributions. Good work! Please make sure you address all the reviewers' comments and incorporate them (and any new experimental results, if applicable) in your camera-ready. | train | [
"5qhvscy3LgU",
"GBWcU4kzTG",
"Fv7uLzeMRAG",
"LPwsY7t4QO",
"jo_Av8AhKJb",
"ClvsNG85OKR",
"zkjqRXqEwc",
"av2plozQFVKO",
"Ydi1U3ZLCBZ",
"9ibWMCb66VO",
"6FvQ4YaD_sg",
"YbMtBiulug",
"q9w4JylwWk",
"cuM0PwcGLHv",
"dVzjBJux61j",
"bZXTx29OrtR"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We thank again the reviewer for the helpful feedback. Given that the discussion period is soon coming at an end, please let us know if the experiments we added in the revised version address the reviewer's concerns, and if the reviewer has any further questions or suggestions.",
" Once again, we are grateful to... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
6,
6,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
1,
3,
4,
4
] | [
"av2plozQFVKO",
"Fv7uLzeMRAG",
"LPwsY7t4QO",
"jo_Av8AhKJb",
"zkjqRXqEwc",
"YbMtBiulug",
"9ibWMCb66VO",
"q9w4JylwWk",
"bZXTx29OrtR",
"6FvQ4YaD_sg",
"dVzjBJux61j",
"cuM0PwcGLHv",
"nips_2022_SiSv_XDMksL",
"nips_2022_SiSv_XDMksL",
"nips_2022_SiSv_XDMksL",
"nips_2022_SiSv_XDMksL"
] |
nips_2022_WcxJooGBCc | Learning the Structure of Large Networked Systems Obeying Conservation Laws | Many networked systems such as electric networks, the brain, and social networks of opinion dynamics are known to obey conservation laws. Examples of this phenomenon include the Kirchhoff laws in electric networks and opinion consensus in social networks. Conservation laws in networked systems are modeled as balance equations of the form $X = B^\ast Y$, where the sparsity pattern of $B^\ast \in \mathbb{R}^{p\times p}$ captures the connectivity of the network on $p$ nodes, and $Y, X \in \mathbb{R}^p$ are vectors of ''potentials'' and ''injected flows'' at the nodes, respectively. The node potentials $Y$ cause flows across edges which aim to balance out the potential difference, and the flows $X$ injected at the nodes are extraneous to the network dynamics. In several practical systems, the network structure is often unknown and needs to be estimated from data to facilitate modeling, management, and control. To this end, one has access to samples of the node potentials $Y$, but only the statistics of the node injections $X$. Motivated by this important problem, we study the estimation of the sparsity structure of the matrix $B^\ast$ from $n$ samples of $Y$ under the assumption that the node injections $X$ follow a Gaussian distribution with a known covariance $\Sigma_X$. We propose a new $\ell_{1}$-regularized maximum likelihood estimator for tackling this problem in the high-dimensional regime where the size of the network may be vastly larger than the number of samples $n$. We show that this optimization problem is convex in the objective and admits a unique solution. Under a new mutual incoherence condition, we establish sufficient conditions on the triple $(n,p,d)$ for which exact sparsity recovery of $B^\ast$ is possible with high probability; $d$ is the degree of the underlying graph. We also establish guarantees for the recovery of $B^\ast$ in the element-wise maximum, Frobenius, and operator norms. Finally, we complement these theoretical results with experimental validation of the performance of the proposed estimator on synthetic and real-world data. | Accept | The reviewers and the area chair judged the paper as technically sound and found that the problem treated is an interesting variant in the spirit of (but different from) the well-known group-LASSO. The range of applicability of the approach was considered promising. While the narrative of the proof leading to a convex problem was judged standard and by itself perhaps less interesting, overall the contribution was judged as valuable to the community and we recommend acceptance of the paper. | val | [
"cSXX2UXCimi",
"lOKSj00dgMj",
"kduBITAxSy",
"bh2OcYq51c-",
"EzdmcNOLY8B",
"13aNmO_4g0e",
"Ej8g3uV9g1",
"3TAG1zFTh2I",
"oclcI4Fjkzf",
"pmi3q5D9OW6",
"xje1fZHD43X",
"sO2aMnS6a8N",
"v-SwXYKtpaN",
"Z8_cLRspELc",
"mj42qFFKjlR",
"GLOO_QUmDp3",
"MOBRuyorT_1",
"9sEozKBYyv",
"r6te4jzzvmx"... | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" We are glad that our responses helped clarify things. We are also grateful for your careful reading of the paper and for your valuable feedback. This will certainly allow us to refine the quality of our manuscript. \nWe wonder if you think our responses and planned updates warrant a revision of your initial score... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
7,
7,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
2,
2,
2
] | [
"bh2OcYq51c-",
"kduBITAxSy",
"3TAG1zFTh2I",
"Ej8g3uV9g1",
"mj42qFFKjlR",
"MOBRuyorT_1",
"r6te4jzzvmx",
"Z8_cLRspELc",
"r6te4jzzvmx",
"r6te4jzzvmx",
"r6te4jzzvmx",
"MOBRuyorT_1",
"MOBRuyorT_1",
"GLOO_QUmDp3",
"9sEozKBYyv",
"nips_2022_WcxJooGBCc",
"nips_2022_WcxJooGBCc",
"nips_2022_W... |
nips_2022_CLMuNJSJfhv | Neural Payoff Machines: Predicting Fair and Stable Payoff Allocations Among Team Members | In many multi-agent settings, participants can form teams to achieve collective outcomes that may far surpass their individual capabilities. Measuring the relative contributions of agents and allocating them shares of the reward that promote long-lasting cooperation are difficult tasks. Cooperative game theory offers solution concepts identifying distribution schemes, such as the Shapley value, that fairly reflect the contribution of individuals to the performance of the team, or the Core, which reduces the incentive of agents to abandon their team. Applications of such methods include identifying influential features and sharing the costs of joint ventures or team formation. Unfortunately, using these solutions requires tackling a computational barrier as they are hard to compute, even in restricted settings. In this work, we show how cooperative game-theoretic solutions can be distilled into a learned model by training neural networks to propose fair and stable payoff allocations. We show that our approach creates models that can generalize to games far from the training distribution and can predict solutions for more players than observed during training. An important application of our framework is Explainable AI: our approach can be used to speed up Shapley value computations on many instances. | Accept | This paper proposes a deep learning approach to computing agent/feature importance in complex cooperative game-theoretic settings. The reviewers are overall positive about the paper, though some are a bit lukewarm. Overall, the consensus after the discussion is that the idea of studying how well neural computation can be used to approximate the relevant quantities (hard to solve exactly) in cooperative game theory is itself a contribution, and this paper has done a decent job in analyzing the approach and establishing its validity. We would encourage the authors to incorporate the reviewer comments (e.g., move the many-vs-one discussion to the main paper to strengthen the motivation, include comparisons with more baselines, etc.) into the final version and produce a stronger paper. | train | [
"Ve0e5khG-Lc",
"_u0Id3P2br_",
"02iuL8P-eG",
"PuBaexuWeI",
"ZcYm_8FS1J",
"ezZuWBw7G2r",
"fcUaLcBE-9y",
"wGOMkf3p08E",
"u4VShrwhu9a",
"iIjiO91lWWB",
"LSJQm8Yq_Ki",
"0994enkuQ77"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Thank you for all your comments and suggestions, we have taken and implemented your feedback which has made the paper much better.\n\nWe would be happy to discuss our corrections or any other feedback you have that we did not address as we are nearing the end of the rebuttal period.\n\n",
" We thank you for the... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"fcUaLcBE-9y",
"PuBaexuWeI",
"ZcYm_8FS1J",
"wGOMkf3p08E",
"u4VShrwhu9a",
"nips_2022_CLMuNJSJfhv",
"0994enkuQ77",
"LSJQm8Yq_Ki",
"iIjiO91lWWB",
"nips_2022_CLMuNJSJfhv",
"nips_2022_CLMuNJSJfhv",
"nips_2022_CLMuNJSJfhv"
] |
nips_2022_St5q10aqLTO | Implicit Neural Representations with Levels-of-Experts | Coordinate-based networks, usually in the form of MLPs, have been successfully applied to the task of predicting high-frequency but low-dimensional signals using coordinate inputs. To scale them to model large-scale signals, previous works resort to hybrid representations, combining a coordinate-based network with a grid-based representation, such as sparse voxels. However, such approaches lack a compact global latent representation in their grids, making it difficult to model a distribution of signals, which is important for generalization tasks. To address this limitation, we propose the Levels-of-Experts (LoE) framework, a novel coordinate-based representation consisting of an MLP with periodic, position-dependent weights arranged hierarchically. For each linear layer of the MLP, multiple candidate values of its weight matrix are tiled and replicated across the input space, with different layers replicating at different frequencies. Based on the input, only one of the weight matrices is chosen for each layer. This greatly increases the model capacity without incurring extra computation or compromising generalization capability. We show that the new representation is an efficient and competitive drop-in replacement for a wide range of tasks, including signal fitting, novel view synthesis, and generative modeling. | Accept | This paper presents a framework for position-dependent MLPs where the weights in each layer depend on the input coordinate periodically, with hierarchically tiled periodic weight patterns across successive layers. The paper shows that such models outperform prior work on a variety of tasks involving data representation. The reviewers all agree that the proposed approach is novel and highly effective, and the paper is clear and compelling. I accordingly recommend acceptance. | train | [
"oXu9_xYNa_X",
"h01ANQqK8kQ",
"d3JBDOrmboM",
"RW9DanyE0C_",
"WLwNy73YMC",
"9KKOQr2BA_q",
"UfYGBUNekCK7",
"AjCmUMFFzyI",
"Sm6dvqF3goJ",
"LkhqkQQWL8u",
"Cy9Skvzi7-M",
"AVk4H9GUCSL",
"8RhVUwWvxfj",
"BSDEme1jeOR",
"T7sr0LgM17j",
"AOHXPnV5-Ya"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" KiloNeRF and the nearest interpolated version of LoE model share a similar idea of partitioning the input space and assigning a different set of network parameters to each partition. However, LoE generalizes KiloNeRF in terms of how the space is partitioned and how the network parameters are derived.\n * In KiloN... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
7,
7,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2,
2
] | [
"h01ANQqK8kQ",
"Cy9Skvzi7-M",
"AVk4H9GUCSL",
"9KKOQr2BA_q",
"LkhqkQQWL8u",
"UfYGBUNekCK7",
"Sm6dvqF3goJ",
"nips_2022_St5q10aqLTO",
"AOHXPnV5-Ya",
"T7sr0LgM17j",
"BSDEme1jeOR",
"8RhVUwWvxfj",
"nips_2022_St5q10aqLTO",
"nips_2022_St5q10aqLTO",
"nips_2022_St5q10aqLTO",
"nips_2022_St5q10aqL... |
nips_2022_9sKZ60VtRmi | LieGG: Studying Learned Lie Group Generators | Symmetries built into a neural network have appeared to be very beneficial for a wide range of tasks, as building them in saves the data needed to learn them. We depart from the position that when symmetries are not built into a model a priori, it is advantageous for robust networks to learn symmetries directly from the data to fit a task function. In this paper, we present a method to extract symmetries learned by a neural network and to evaluate the degree to which a network is invariant to them. With our method, we are able to explicitly retrieve learned invariances in the form of the generators of the corresponding Lie groups without prior knowledge of symmetries in the data. We use the proposed method to study how symmetrical properties depend on a neural network's parameterization and configuration. We found that the ability of a network to learn symmetries generalizes over a range of architectures. However, the quality of learned symmetries depends on the depth and the number of parameters. | Accept | The paper proposes a novel technique to extract symmetry inductive biases from data that can be applied to any neural network architecture. Please include more discussion of the related work and possible experimental comparisons in the updated version. | val | [
"nxmg6S1sI9u",
"XtbkfLECNpv",
"PxfIxefWpTy",
"loW6CvMHUtm",
"B1hX-GG54qP",
"fCW68enNoLt",
"X3W1LGcMpL",
"LcX8ao_1x5c"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
" Before the closure of the rebuttal/discussion period, we want to address the sample complexity question. At this time, we have finished the sample complexity study of the proposed method.\n\nThe results suggest that we can effectively extract the learned symmetry using only a fraction of the dataset. For the synt... | [
-1,
-1,
-1,
-1,
-1,
7,
6,
6
] | [
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"PxfIxefWpTy",
"loW6CvMHUtm",
"X3W1LGcMpL",
"fCW68enNoLt",
"LcX8ao_1x5c",
"nips_2022_9sKZ60VtRmi",
"nips_2022_9sKZ60VtRmi",
"nips_2022_9sKZ60VtRmi"
] |
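Each record above follows the same schema: paper metadata (id, title, abstract, decision, meta-review, split label) followed by index-aligned lists describing the forum thread, where the i-th entry of the id, writer, content, rating, confidence, and reply-to lists all refer to the same post, and a rating or confidence of -1 appears to mark posts without a numeric score (author responses and follow-up comments). The sketch below shows one way such records could be consumed. It is a minimal sketch, not a documented interface: it assumes the Hugging Face `datasets` library, a hypothetical repository path (`user/peer-review-dataset`), and the -1 convention inferred from the rows shown here.

```python
# Minimal sketch for loading and inspecting rows like the ones above.
# Assumptions (not confirmed by this page): the dataset is hosted on the
# Hugging Face Hub under the hypothetical path "user/peer-review-dataset",
# and -1 in review_ratings / review_confidences marks posts without a
# numeric score (e.g. author replies and reviewer follow-ups).
from datasets import load_dataset

ds = load_dataset("user/peer-review-dataset", split="train")  # hypothetical path

for row in ds:
    print(row["paper_id"], "-", row["paper_acceptance"])
    # The review_* lists are index-aligned: the i-th writer, rating, and
    # confidence all describe the same forum post.
    for writer, rating, confidence in zip(
        row["review_writers"], row["review_ratings"], row["review_confidences"]
    ):
        if rating != -1:  # keep only posts that carry an actual review score
            print(f"  {writer}: rating={rating}, confidence={confidence}")
```

On the first record above, for example, this filter would keep exactly the three scored reviews (ratings 6, 7, and 6 with confidences 4, 3, and 4) and skip the -1 entries, which correspond to rebuttal and discussion posts.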