paper_id: string (19–21 chars)
paper_title: string (8–170 chars)
paper_abstract: string (8–5.01k chars)
paper_acceptance: string (18 classes)
meta_review: string (29–10k chars)
label: string (3 classes)
review_ids: list
review_writers: list
review_contents: list
review_ratings: list
review_confidences: list
review_reply_tos: list
nips_2022_sQ2LdeHNMej
Federated Hypergradient Descent
In this work, we explore combining automatic hyperparameter tuning and optimization for federated learning (FL) in an online, one-shot procedure. We apply a principled approach on a method for adaptive client learning rate, number of local steps, and batch size. In our federated learning applications, our primary motivations are minimizing communication budget as well as local computational resources in the training pipeline. Conventionally, hyperparameter tuning methods involve at least some degree of trial-and-error, which is known to be sample inefficient. In order to address our motivations, we propose FATHOM (Federated AuTomatic Hyperparameter OptiMization) as a one-shot online procedure. We investigate the challenges and solutions of deriving analytical gradients with respect to the hyperparameters of interest. Our approach is inspired by the fact that all components involved in our training process are open-box, and this fact can be exploited impactfully in our algorithm. We show that FATHOM is more communication efficient than Federated Averaging (FedAvg) with optimized, static valued hyperparameters, and is also more computationally efficient overall. As a communication efficient, one-shot online procedure, FATHOM solves the bottleneck of costly communication and limited local computation, by eliminating a potentially wasteful tuning process, and by optimizing the hyperparameters adaptively throughout the training procedure without trial-and-error. We show our numerical results through extensive empirical experiments with the Federated EMNIST-62 (FEMNIST) and Federated Stack Overflow (FSO) datasets, using FedJAX as our baseline framework.
Reject
We thank the authors and the reviewers for their involvement in this interactive reviewing process. While the paper clearly generated interest and brings new notions to the community, we felt that a major revision is necessary before the paper can be a successful NeurIPS submission. Consequently, we unfortunately recommend rejection. Points that stood out are: 1. at least one essential assumption is simultaneously strong, hard to check in practice, and not appropriately discussed in the original version; 2. unusual but central notions like discrete convexity and hypergradients are not appropriately introduced, which made important technical parts of the paper hard to proofread; 3. lack of some natural baselines. We hope that the detailed discussions will be useful in further improving the manuscript.
train
[ "cv7P0-Y6nr", "DKDBx4fXrl", "2k3Y98wXzW3", "TawjFYe2lph", "fy2r16b2kVs", "VZ6NbWUwhQm", "kTlc1Kkfrolb", "k2tyB0cysfo", "k0XcAshYdLn", "HRUZPBycpr1", "SABLVFhNjE", "lvRYXM_pF8i" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your time.", " Thank you so much for your time and your insightful feedback!", " Thank you for the reply. I've read the authors' response and the other reviewers' review, and I'd like to keep my current rating.", " Okay, thank you for the update; please see the comments in my updated rev...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3 ]
[ "2k3Y98wXzW3", "TawjFYe2lph", "k0XcAshYdLn", "fy2r16b2kVs", "VZ6NbWUwhQm", "kTlc1Kkfrolb", "lvRYXM_pF8i", "SABLVFhNjE", "HRUZPBycpr1", "nips_2022_sQ2LdeHNMej", "nips_2022_sQ2LdeHNMej", "nips_2022_sQ2LdeHNMej" ]
nips_2022_ONFaDyl_uVq
Learning to Mitigate AI Collusion on Economic Platforms
Algorithmic pricing on online e-commerce platforms raises the concern of tacit collusion, where reinforcement learning algorithms learn to set collusive prices in a decentralized manner and through nothing more than profit feedback. This raises the question as to whether collusive pricing can be prevented through the design of suitable "buy boxes," i.e., through the design of the rules that govern the elements of e-commerce sites that promote particular products and prices to consumers. In this paper, we demonstrate that reinforcement learning (RL) can also be used by platforms to learn buy box rules that are effective in preventing collusion by RL sellers. For this, we adopt the methodology of Stackelberg POMDPs, and demonstrate success in learning robust rules that continue to provide high consumer welfare together with sellers employing different behavior models or having out-of-distribution costs for goods.
Accept
This paper proposes an RL based method to prevent collusive pricing by sellers through a framework that solves the two-level, Stackelberg problem. The paper received a mixed evaluation from the reviewers, ranging from accept (7) to weak reject (4). The strengths of the paper mentioned by the reviewers were: - The considered problem was appreciated and acknowledged to be an important research direction - Novelty in casting the problem as a Stackelberg game - The proposed approach is competitive and robust On the other hand, the identified weaknesses were: - Potentially limited support in a real-world setting and a limited scope (only designed and tested in a specific economic model) - Problems with the presentation Despite the weaknesses mentioned above, my recommendation leans toward acceptance.
train
[ "htijRvgKja", "FbUcfCACnoe", "xZH8JsnOlM", "zZL53IQ344", "Z-DBIAZeMjv", "LvdQm-gbH3d", "YyIaGcHMSOd", "JGLNjfVdegW", "V_dLz8yGJ_l" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your thorough feedback!\n\nWe have included new results in Figure 4 of Appendix C. These results are for a setting that has greater horizontal differentiation across products (\\mu=0.4) and show that, even in this scenario, we can learn optimal interventions. \n\nOn studying a wider variety of econo...
[ -1, -1, -1, -1, -1, 4, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, 3, 2, 4, 4 ]
[ "V_dLz8yGJ_l", "JGLNjfVdegW", "YyIaGcHMSOd", "LvdQm-gbH3d", "nips_2022_ONFaDyl_uVq", "nips_2022_ONFaDyl_uVq", "nips_2022_ONFaDyl_uVq", "nips_2022_ONFaDyl_uVq", "nips_2022_ONFaDyl_uVq" ]
nips_2022_qSs7C7c4G8D
A Unified Analysis of Federated Learning with Arbitrary Client Participation
Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency. As a result, only a small subset of clients can participate in FL at a given time. It is important to understand how partial client participation affects convergence, but most existing works have either considered idealized participation patterns or obtained results with non-zero optimality error for generic patterns. In this paper, we provide a unified convergence analysis for FL with arbitrary client participation. We first introduce a generalized version of federated averaging (FedAvg) that amplifies parameter updates at an interval of multiple FL rounds. Then, we present a novel analysis that captures the effect of client participation in a single term. By analyzing this term, we obtain convergence upper bounds for a wide range of participation patterns, including both non-stochastic and stochastic cases, which match either the lower bound of stochastic gradient descent (SGD) or the state-of-the-art results in specific settings. We also discuss various insights, recommendations, and experimental results.
Accept
The paper was appreciated by all three reviewers, and all gave strong endorsement. Some of the positive comments include: - The paper is well-written in general, all assumptions and definitions are clearly stated. - All formulations of theorems are clear. - Theoretical analysis seems to be correct. I checked the appendix and I did not find mistakes, but I might miss something. - Obtained results are interesting and they lead to new perspectives and insights. - The experimental comparison is meaningful and illustrative. - The paper is largely well-written and provides adequate intuition and explanation for the math introduced. - The paper makes a relevant and significant contribution, addressing the problem of arbitrary client participation. - In general, the paper is well written and provides meaningful insight into federated learning with a unified convergence analysis for arbitrary client participation. - The paper theoretically and empirically addresses an important problem of federated learning which is arbitrary client participation. While many works in federated learning consider a fixed client participation pattern, in real implementations of cross-device federated learning, clients can leave or join arbitrarily depending on their circumstances. There is not much work that presents a unified analysis of different patterns (including stochastic) of client participation, and this paper contributes to the federated learning community largely by presenting such analysis. - The paper compares its analysis with relevant other work including [11], [27] referenced in the paper as well as the other work which shows linear speedup in convergence with partial client participation in federated learning [31]. It is interesting to see in what scenarios for arbitrary client participation we can achieve a similar linear speedup, which the paper thoroughly provides insights on.
- Although the contribution of this paper is more on the theoretical side, it proposes an interesting amplification method (lines 11-14 in Algorithm 1) which does not really need additional computation/communication at the client side, and only requires additional memory at the server side (which in general is a plus for federated learning). This method also achieves 0 convergence error in some client participation patterns, which is interesting and validated in the experiments. I have read the reviews, rebuttal, and also skimmed through the paper. Virtually all criticism was successfully addressed, and I would ask the authors to make sure all changes that were promised are implemented. I agree with the reviewers that this paper clearly passes the acceptance bar. Congratulations on such a nice paper! AC
test
[ "TDMbqf7WpD4", "JP6ja-f6gBg", "V4bzLocnZgL", "LCVskzwBX6W", "V3k1Mxp8ZRt", "XqT579ZH48", "dXHW22cjUs2", "EGHxY3SsQs2", "0wugga5zRYK", "n8nTa8hYMoO", "is9lqsc9Hx4" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank all of you for your kind review. It is great to see a unanimously positive opinion about our work. We are also very happy that you have all replied to our responses.\n\nSince the reviewer-author discussion period is ending soon, we would like to provide a short summary and reflection on the mai...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2022_qSs7C7c4G8D", "dXHW22cjUs2", "V3k1Mxp8ZRt", "XqT579ZH48", "EGHxY3SsQs2", "is9lqsc9Hx4", "n8nTa8hYMoO", "0wugga5zRYK", "nips_2022_qSs7C7c4G8D", "nips_2022_qSs7C7c4G8D", "nips_2022_qSs7C7c4G8D" ]
nips_2022_CF1ThuQ8vpG
Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments
Domain generalization aims at performing well on unseen test environments with data from a limited number of training environments. Despite a proliferation of proposed algorithms for this task, assessing their performance both theoretically and empirically is still very challenging. Distributional matching algorithms such as (Conditional) Domain Adversarial Networks [Ganin et al., 2016, Long et al., 2018] are popular and enjoy empirical success, but they lack formal guarantees. Other approaches such as Invariant Risk Minimization (IRM) require a prohibitively large number of training environments---linear in the dimension of the spurious feature space $d_s$---even on simple data models like the one proposed by [Rosenfeld et al., 2021]. Under a variant of this model, we show that ERM and IRM can fail to find the optimal invariant predictor with $o(d_s)$ environments. We then present an iterative feature matching algorithm that is guaranteed with high probability to find the optimal invariant predictor after seeing only $O(\log d_s)$ environments. Our results provide the first theoretical justification for distribution-matching algorithms widely used in practice under a concrete nontrivial data model.
Accept
This paper gives theoretical guarantees for the problem of domain generalization from a few training environments. This paper gives an algorithm that achieves strong theoretical improvements over prior work (logarithmically many training environments vs linear number for existing methods). The reviewers also felt that there are novel insights that may be of independent interest in this space. There was some concern by some reviewers about the extent of the empirical evaluations. However, since this is primarily a theoretical paper with strong improvements, the contributions seemed above the bar for NeurIPS.
train
[ "yqtZmGD_2vL", "iBP8ONz_DEQ", "gizw6sawqlJ", "1hU9X-nsdfv", "Haq8ypeXj5U", "fEowh7Tosmp", "rCvP0VmSbOY", "RbJGI4539u8", "qwjkdTjguNcz", "qyF5bFIAZ6S", "pLzSdFijbW9", "8-ZRDvK79N40", "EVCagPwO9lL", "T3tqXWhWbrJ", "F5GHhG8QdQZ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for joining the discussion and hope the following answers address your lingering concerns:\n\n(1) “From the results of IRM, I find that with only 2 environments, it is enough to achieve a good OOD generalization performance.”:\n\nWe would like to point out that the values reported in the ori...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "iBP8ONz_DEQ", "8-ZRDvK79N40", "nips_2022_CF1ThuQ8vpG", "EVCagPwO9lL", "rCvP0VmSbOY", "EVCagPwO9lL", "RbJGI4539u8", "qwjkdTjguNcz", "F5GHhG8QdQZ", "pLzSdFijbW9", "T3tqXWhWbrJ", "EVCagPwO9lL", "nips_2022_CF1ThuQ8vpG", "nips_2022_CF1ThuQ8vpG", "nips_2022_CF1ThuQ8vpG" ]
nips_2022_yNPsd3oG_s
Training with More Confidence: Mitigating Injected and Natural Backdoors During Training
The backdoor or Trojan attack is a severe threat to deep neural networks (DNNs). Researchers find that DNNs trained on benign data and settings can also learn backdoor behaviors, which is known as the natural backdoor. Existing works on anti-backdoor learning are based on weak observations that the backdoor and benign behaviors can differentiate during training. An adaptive attack with slow poisoning can bypass such defenses. Moreover, these methods cannot defend against natural backdoors. We found fundamental differences between backdoor-related neurons and benign neurons: backdoor-related neurons form a hyperplane as the classification surface across input domains of all affected labels. By further analyzing the training process and model architectures, we found that piece-wise linear functions cause this hyperplane surface. In this paper, we design a novel training method that forces the training to avoid generating such hyperplanes and thus remove the injected backdoors. Our extensive experiments on five datasets against five state-of-the-art attacks and also benign training show that our method can outperform existing state-of-the-art defenses. On average, the ASR (attack success rate) of the models trained with NONE is 54.83 times lower than undefended models under standard poisoning backdoor attack and 1.75 times lower under the natural backdoor attack. Our code is available at https://github.com/RU-System-Software-and-Security/NONE.
Accept
This paper introduces a new algorithm to mitigate backdoors in neural network models. The reviewers agreed this paper proposes an interesting defense that is well motivated, carefully evaluated with many ablation studies, and highly effective. The weaknesses raised have also been mitigated in the rebuttal period and the paper is generally strong.
train
[ "reMsoHkWdcR", "CDq65Yr3gm", "zEI6z2OZ1oU", "RqLpwWY56Np", "36x42Fv29dj", "OnKs0DHdZxh", "LvWiSkSkgte", "QWkqP0F2RL0", "cZEqH6GynUd", "VlBIUuXyw_", "OvbSw5Ani66", "0iqD6U9lD5Y", "Tzb7m2o6aqt", "fIueOoWbngb", "WCbrOirHobZ" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank all reviewers again for their insightful questions and suggestions. Below is our revision summary:\n\n**[Section 2]** We added clarification for \"slow poisoning\", following the suggestion of Reviewer obsx and Reviewer bmQS. \n\n**[Section 3]** We added the assumptions in Theorem 3.3, following the sugg...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "nips_2022_yNPsd3oG_s", "zEI6z2OZ1oU", "QWkqP0F2RL0", "36x42Fv29dj", "OnKs0DHdZxh", "LvWiSkSkgte", "WCbrOirHobZ", "cZEqH6GynUd", "fIueOoWbngb", "OvbSw5Ani66", "Tzb7m2o6aqt", "nips_2022_yNPsd3oG_s", "nips_2022_yNPsd3oG_s", "nips_2022_yNPsd3oG_s", "nips_2022_yNPsd3oG_s" ]
nips_2022_Z26xiZkbjgE
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
It is well-known that modern neural networks are vulnerable to adversarial examples. To mitigate this problem, a series of robust learning algorithms have been proposed. However, although the robust training error can be near zero via some methods, all existing algorithms lead to a high robust generalization error. In this paper, we provide a theoretical understanding of this puzzling phenomenon from the perspective of expressive power for deep neural networks. Specifically, for binary classification problems with well-separated data, we show that, for ReLU networks, while mild over-parameterization is sufficient for high robust training accuracy, there exists a constant robust generalization gap unless the size of the neural network is exponential in the data dimension $d$. This result holds even if the data is linearly separable (which means achieving standard generalization is easy), and more generally for any parameterized function classes as long as their VC dimension is at most polynomial in the number of parameters. Moreover, we establish an improved upper bound of $\exp({\mathcal{O}}(k))$ for the network size to achieve low robust generalization error when the data lies on a manifold with intrinsic dimension $k$ ($k \ll d$). Nonetheless, we also have a lower bound that grows exponentially with respect to $k$ --- the curse of dimensionality is inevitable. By demonstrating an exponential separation between the network size for achieving low robust training and generalization error, our results reveal that the hardness of robust generalization may stem from the expressive power of practical models.
Accept
This paper gives new lower bounds and upper bounds on the size of a feedforward ReLU network needed for robust generalization (and not just low robust training error). In particular, they give an exponential lower bound on the size of the network, even for separable data. The reviewers were in agreement about the strengths of the paper. This points to one of the challenges in obtaining neural networks with low robust test error.
train
[ "Y_RjSwMS0mCE", "XYEMnADJlKR", "6QioWpWCHxt", "YURGqe11i6s", "DP6M9ec0e9D", "HLtwdrKLdeL", "5DtVI1Zy7Jf", "-coAV1gzU4" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification. We will incorporate your suggestion in the revision.", " Dear authors,\n\nThank you for your reply. Just to clarify: I do not think that your paper needs an empirical validation. I suggest to simply point towards the empirical evidence in [1] that you mention. \n\n\nCheers", " We...
[ -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "XYEMnADJlKR", "6QioWpWCHxt", "-coAV1gzU4", "5DtVI1Zy7Jf", "HLtwdrKLdeL", "nips_2022_Z26xiZkbjgE", "nips_2022_Z26xiZkbjgE", "nips_2022_Z26xiZkbjgE" ]
nips_2022_GdMqXQx5fFR
Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models
Traditional knowledge distillation (KD) methods manually design student architectures to compress large models given pre-specified computational cost. This requires several trials to find viable students, and repeating the process with each change in computational budget. We use Neural Architecture Search (NAS) to automatically distill several compressed students with variable cost from a large model. Existing NAS methods train a single SuperLM consisting of millions of subnetworks with weight-sharing, resulting in interference between subnetworks of different sizes. Additionally, many of these works are task-specific, requiring task labels for SuperLM training. Our framework AutoDistil addresses the above challenges with the following steps: (a) Incorporates inductive bias and heuristics to partition the Transformer search space into K compact sub-spaces (e.g., K=3 can generate typical student sizes of base, small and tiny); (b) Trains one SuperLM for each sub-space using a task-agnostic objective (e.g., self-attention distillation) with weight-sharing of students; (c) Lightweight search for the optimal student without re-training. Task-agnostic training and search allow students to be reused for fine-tuning on any downstream task. Experiments on the GLUE benchmark demonstrate AutoDistil to outperform state-of-the-art KD and NAS methods with up to 3x reduction in computational cost and negligible loss in task performance. Code and model checkpoints are available at https://github.com/microsoft/autodistil.
Accept
The submission introduces an approach to searching for student architectures to distill large language models into. The authors divide the search space into different sizes of student model, train a SuperLM for each model, and then select the optimal student from within these using knowledge distillation. While several of the techniques here have been used before, the reviewers found that the overall approach was well motivated and effective empirically, outperforming strong baselines. As pointed out by reviewer ov4n, the writing could be improved to make the paper more accessible to people less familiar with NAS, but overall this is solid work and I recommend acceptance.
train
[ "jeMUKu4-iHb", "uNhCFMPgkY", "DMBEmTmJed1", "q4OHEJPsk7h", "Ny6BbwvZhWX", "m3caErEAhUN", "AjSn2V2JI2", "auzA5Lqp2u_", "VG-ee5oa4P", "QvEpSq2jKR", "MedPeHY2XuZ", "fsji_N7FpBx", "Q2LJT-96f8I" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We submitted a revised manuscript with the following changes (highlighted in blue color). Most of these are added to the Appendix in Supplementary due to limited space in main manuscript.\n\n(1) Additional discussion on our differences with existing works on NAS and KD with regards to our fine-grained search spac...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "nips_2022_GdMqXQx5fFR", "Q2LJT-96f8I", "q4OHEJPsk7h", "Q2LJT-96f8I", "m3caErEAhUN", "fsji_N7FpBx", "auzA5Lqp2u_", "MedPeHY2XuZ", "nips_2022_GdMqXQx5fFR", "nips_2022_GdMqXQx5fFR", "nips_2022_GdMqXQx5fFR", "nips_2022_GdMqXQx5fFR", "nips_2022_GdMqXQx5fFR" ]
nips_2022_X5eFS09r9hm
“Why Not Other Classes?”: Towards Class-Contrastive Back-Propagation Explanations
Numerous methods have been developed to explain the inner mechanism of deep neural network (DNN) based classifiers. Existing explanation methods are often limited to explaining predictions of a pre-specified class, which answers the question “why is the input classified into this class?” However, such explanations with respect to a single class are inherently insufficient because they do not capture features with class-discriminative power. That is, features that are important for predicting one class may also be important for other classes. To capture features with true class-discriminative power, we should instead ask “why is the input classified into this class, but not others?” To answer this question, we propose a weighted contrastive framework for explaining DNNs. Our framework can easily convert any existing back-propagation explanation methods to build class-contrastive explanations. We theoretically validate our weighted contrast explanation in general back-propagation explanations, and show that our framework enables class-contrastive explanations with significant improvements in both qualitative and quantitative experiments. Based on the results, we point out an important blind spot in the current explainable artificial intelligence (XAI) study, where explanations towards the predicted logits and the probabilities are obfuscated. We suggest that these two aspects should be distinguished explicitly any time explanation methods are applied.
Accept
Reviewers expressed overwhelmingly positive opinions about the simple, easily implementable, and at the same time innovative procedure proposed in the paper for obtaining gradient-based class-contrastive explanations. Appreciation also transpired for the significance of this work in clarifying some technical points in the gradient-based XAI literature and the potential for future work that the paper opens up. One of the main criticisms raised in the reviews, the lack of comparisons against other contrastive explanation methods, has been addressed satisfactorily with additional experiments and discussions in the rebuttals. The most important remaining criticism was a doubt on the merits of one of the key technical points in the paper regarding whether gradient-based attributions should be computed according to the softmax outputs or logits. Reviewers pointed out that computing attributions with respect to softmax outputs instead of logits is already common practice in the field. Reviewers expressed strongly the opinion that it would be appropriate to characterize and clarify this situation, as it could be potentially misleading, and indeed even counterproductive, for the paper to denote attribution methods with respect to logits as "standard", while it is instead the case that some implementations of gradient-based attribution methods already attribute with respect to the softmax output (albeit inconsistently). In conclusion, the reviewing panel voted for accepting the paper, under the condition that the camera-ready version of the paper explicitly clarify the distinction between the two approaches and discuss the implications of choosing one or the other, without however referring to attribution with respect to logits as standard, but merely pointing out that until now the distinction has been vague and implementations inconsistent.
From this technical standpoint, Reviewers ask that the contribution of the paper should then be explicitly characterized as clarifying the distinction between logits and softmax attributions, rather than as the proposal of a new procedure in opposition to an already established standard. This is already perceived as a strong contribution to the community, as phrasing it specifically as indicated would help elucidate the state of affairs in the literature and make the community aware of this outstanding blind-spot.
val
[ "VnxannlagRh", "zJXC_pZ9w35", "ekjhKECanM2", "w8RrU2fKp2o", "4PSdctwrr_5", "rUWrwJUJT7", "dhYBcZv2SB", "r76QN9y0lbM", "CCvtDxVuvb7", "4duVp8zddS", "E_utbVinM6I", "_W1xd5pvaPF", "hO8bs2ismCy" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThank you very much for your constructive feedback and for acknowledging our work. We will revise our final paper accordingly.\n\nBest,\nAuthors", " Thank you for the response! Your clarification has made the equations much easier to understand. Please consider the following changes in your re...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "zJXC_pZ9w35", "CCvtDxVuvb7", "nips_2022_X5eFS09r9hm", "4PSdctwrr_5", "4duVp8zddS", "dhYBcZv2SB", "r76QN9y0lbM", "hO8bs2ismCy", "_W1xd5pvaPF", "E_utbVinM6I", "nips_2022_X5eFS09r9hm", "nips_2022_X5eFS09r9hm", "nips_2022_X5eFS09r9hm" ]
nips_2022_R8Cngx78A-V
PAC: Assisted Value Factorization with Counterfactual Predictions in Multi-Agent Reinforcement Learning
Multi-agent reinforcement learning (MARL) has witnessed significant progress with the development of value function factorization methods. It allows optimizing a joint action-value function through the maximization of factorized per-agent utilities. In this paper, we show that in partially observable MARL problems, an agent’s ordering over its own actions could impose concurrent constraints (across different states) on the representable function class, causing significant estimation error during training. We tackle this limitation and propose PAC, a new framework leveraging Assistive information generated from Counterfactual Predictions of optimal joint action selection, which enables explicit assistance to value function factorization through a novel counterfactual loss. A variational inference-based information encoding method is developed to collect and encode the counterfactual predictions from an estimated baseline. To enable decentralized execution, we also derive factorized per-agent policies inspired by a maximum-entropy MARL framework. We evaluate the proposed PAC on multi-agent predator-prey and a set of StarCraft II micromanagement tasks. Empirical results demonstrate improved results of PAC over state-of-the-art value-based and policy-based multi-agent reinforcement learning algorithms on all benchmarks.
Accept
After reading the reviews and feedback, I lean towards acceptance. A majority of reviewers gave a positive score after the rebuttal period, and some concerns were answered in the authors' response. Specifically, the authors have shown that the recent baselines they use outperform other baselines and therefore those do not need to be added in the paper; they have also clarified some proofs and some notations. Overall, the reviewers found the method presented interesting, the paper well written, and appreciated the comparison to other methods in the literature. Finally, experiments show interesting results on large-scale domains, which is a sign that the proposed method could scale up.
train
[ "aVFOQBNRO0N", "VHZvJZZ6fp-", "Sh3nT1OeeXR", "nxxgbfJvjQk", "qw_ZLK3hzdL", "btkO8DvoM9-", "Tlq87aAoL10", "sUGbGV5XYde", "erpniFPXFd6", "Hz9SegMXzXnG", "706NRVtrjtx", "kjUkbuAxZUd", "dx761lxwbtP", "6MnXao52Mxi", "M2S0ggBmXZp", "UmKwG1M-Zwq", "Pd_e8z4U3Ko", "gJrCt70mnzR", "Zdej_fpY...
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your review and the reply. We believe we’ve covered most of your concerns in the revised version and rebuttal now. Please consider raising our score, or let us know if there are other places that need addressing. Thank you!", " Thank you for your review. We hope our response has solved your...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "erpniFPXFd6", "gJrCt70mnzR", "sUGbGV5XYde", "sUGbGV5XYde", "gJrCt70mnzR", "erpniFPXFd6", "UmKwG1M-Zwq", "Hz9SegMXzXnG", "dx761lxwbtP", "706NRVtrjtx", "Zdej_fpYnP", "gJrCt70mnzR", "6MnXao52Mxi", "Pd_e8z4U3Ko", "UmKwG1M-Zwq", "nips_2022_R8Cngx78A-V", "nips_2022_R8Cngx78A-V", "nips_2...
nips_2022_lTZBRxm2q5
Learning Fractional White Noises in Neural Stochastic Differential Equations
Differential equations play important roles in modeling complex physical systems. Recent advances present interesting research directions by combining differential equations with neural networks. By including noise, stochastic differential equations (SDEs) allow us to model data with uncertainty and measure imprecision. Many variants of noise are known to exist in real-world data; for example, white noises have previously been idealized as those induced by Brownian motions. Nevertheless, there is a lack of machine learning models that can handle more general noises. In this paper, we introduce a generalized fractional white noise to existing models and propose an efficient approximation of noise sample paths based on classical integration methods and sparse Gaussian processes. Our experimental results demonstrate that the proposed model can capture noise characteristics such as continuity from various time series data, thereby improving model fits over existing models. We examine how we can apply our approach to score-based generative models, showing that there exists a case of our generalized noise resulting in a better image generation measure.
Accept
Ratings: 4/8/3/7/7. Confidence: 3/4/4/2/4. There was no discussion among reviewers, but there was a good amount of discussion between authors and reviewers. Note that there are 5 reviewers since the original reviews had low confidence. Summary: This paper studies a variant of neural SDEs that replace the standard Brownian motion driving the NSDE with a Riemann–Liouville fractional Brownian motion that has a learnt time-varying Hurst function. The paper and subsequent discussion unfortunately failed to persuade 2/5 of the reviewers, especially in terms of accessibility of the theory and the empirical validation. However, the reviewers agree that the topic and results are interesting. Recommendation: I recommend to accept this submission.
train
[ "_Wfw1ykNB0N", "9aw2fG-BbF8", "XwfJjEGwS2w", "FcLtKDIfCWD", "2qBnoY8qwYt", "SBlLWKrmKj", "7yS_ydYlClx", "FFbhKH20NWE", "mtxhUbNg5uA", "i1b9ancyEH", "UR8BV_uyFYP", "QOq1Juf51cl", "1veLbrIdoDp", "5TjYidPSmk", "kJi3Q4cdfY0", "esRA31jm1cX", "qoNmxuFVyaC", "7rBfxaObQP3", "oIxQADLcSG",...
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " The paper proposes an extension of the Neural SDE model proposed by [Li et al. (AISTATS 2020)](https://proceedings.mlr.press/v108/li20i.html). This more general model is obtained by replacing the Brownian motion driving the NSDE with a Riemann–Liouville fractional Brownian motion (R–L fBM) that has a learnt time-...
[ 7, -1, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 4 ]
[ 4, -1, 2, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2022_lTZBRxm2q5", "XwfJjEGwS2w", "nips_2022_lTZBRxm2q5", "FFbhKH20NWE", "1veLbrIdoDp", "nips_2022_lTZBRxm2q5", "FFbhKH20NWE", "pp3jebMRmz", "pp3jebMRmz", "QOq1Juf51cl", "1veLbrIdoDp", "5TjYidPSmk", "esRA31jm1cX", "oIxQADLcSG", "pp3jebMRmz", "7rBfxaObQP3", "nips_2022_lTZBRxm2q5"...
nips_2022_Snp3iEj7NJ
On the Epistemic Limits of Personalized Prediction
Machine learning models are often personalized by using group attributes that encode personal characteristics (e.g., sex, age group, HIV status). In such settings, individuals expect to receive more accurate predictions in return for disclosing group attributes to the personalized model. We study when we can tell that a personalized model upholds this principle for every group who provides personal data. We introduce a metric called the benefit of personalization (BoP) to measure the smallest gain in accuracy that any group expects to receive from a personalized model. We describe how the BoP can be used to carry out basic routines to audit a personalized model, including: (i) hypothesis tests to check that a personalized model improves performance for every group; (ii) estimation procedures to bound the minimum gain in personalization. We characterize the reliability of these routines in a finite-sample regime and present minimax bounds on both the probability of error for BoP hypothesis tests and the mean-squared error of BoP estimates. Our results show that we can only claim that personalization improves performance for each group who provides data when we explicitly limit the number of group attributes used by a personalized model. In particular, we show that it is impossible to reliably verify that a personalized classifier with $k \geq 19$ binary group attributes will benefit every group who provides personal data using a dataset of $n = 8\times10^9$ samples -- one for each person in the world.
Accept
Thank you for submitting your paper to NeurIPS! This paper proposes a new metric, the benefit of personalization (BoP), to verify fair use of predictive models that have varied subgroup-level performance. Reviewers raised a variety of concerns about how the approach should be used in real-world contexts. I believe that these concerns were satisfactorily addressed by the authors and therefore I recommend acceptance (despite the fact that the most negative reviewer failed to engage in the discussion and update their score). There were additional concerns raised by the ethics reviewers that the authors did not have a chance to address, but that I fully expect the authors to incorporate in the body of the paper (in addition to the concerns flagged by the reviewers). Let me summarize the concerns raised in the ethics reviews. First, the choice of the baseline is critical -- poor selection of the baseline could result in very low performance on specific groups to begin with, allowing the improvement criterion to be met on these groups relatively easily, despite potentially large disparities in performance across groups. Second, fairness can be context-dependent, e.g. in clinical trials, improvements in any subgroup are desirable even if some subgroups do not benefit; on the other hand, the proposed definition aligns incentives to improve performance for all subgroups. These trade-offs are crucial and merit discussion. Finally, there was concern regarding whether subgroup accuracy is a suitable stand-in for subgroup preference and/or harm, i.e. can increasing accuracy (according to the target label Y) lead to worse outcomes for a subgroup? See, for example, work by Ustun et al. on risk assessment trained on proxy targets (re-arrest rate for risk of recidivism, or health care costs for risk of adverse health outcomes), where increasing accuracy may not be desirable (e.g. due to sampling bias).
I strongly urge the authors to discuss these issues (as they have partly done already) and highlight them in prominent portions of the paper.
train
[ "1FSAoAPceEJ", "_FWaH8S13L", "Z6PkeoJZR9", "XV9MopyVC7", "yETCpfV2DF", "IUggk7A7Grv", "r0JNx9aMWq6", "scwgbHAkCBc", "1VGeVFAQb92", "Ax6YT6hlyj", "_h6F8c2fR6z", "e5KbmTYpkkA", "hbqBp0wwZGaq", "S8gpMrVFIdVP", "p2DhCwot1xL", "USOFDXIp6xU", "GdAAgnO4y_Y", "oD3sltitPG8", "XcAH8Rp_Dh" ...
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We need to clarify something while we still can: the primary purpose of BoP is **not** to “select which models to explore.” It's true that BoP could be used in this way, though we agree with you that this would require further work.\n\n**However, this is not the primary purpose, nor the “main insight”, of this wo...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "_FWaH8S13L", "yETCpfV2DF", "r0JNx9aMWq6", "scwgbHAkCBc", "1VGeVFAQb92", "nips_2022_Snp3iEj7NJ", "S8gpMrVFIdVP", "USOFDXIp6xU", "e5KbmTYpkkA", "nips_2022_Snp3iEj7NJ", "nips_2022_Snp3iEj7NJ", "hbqBp0wwZGaq", "GdAAgnO4y_Y", "p2DhCwot1xL", "XcAH8Rp_Dh", "oD3sltitPG8", "nips_2022_Snp3iEj...
nips_2022_B4EsCSj1vQL
Deep Learning: When Conventional Wisdom Fails to be Wise
A major tenet of conventional wisdom dictates that models should not be over-parameterized: the number of free parameters should not exceed the number of training data points. This tenet originates from centuries of shallow learning, primarily in the form of linear or logistic regression. It is routinely applied to all kinds of data analyses and modeling and even to infer properties of the brain. However, we show that this conventional wisdom is completely wrong as soon as one moves from shallow to deep learning. In particular, we construct sequences of both linear and non-linear deep learning models whose number of parameters can grow to arbitrarily large values, and which remain well defined and trainable using a fixed, finite-size training set. In deep models, the parameter space is partitioned into large equivalence classes. Learning can be viewed as a communication process where information is communicated from the data to the synaptic weights. The information in the training data only can, and needs to, specify an equivalence class of the parameters. It cannot, and does not need to, specify individual parameter values. As such, the number of training examples can be smaller than the number of free parameters.
Reject
The paper explores the question of why overparameterized networks can generalize well, and shows a number of theoretical examples where overparameterized networks attain good generalization. The question explored in the paper is an important one and the presented examples have potential pedagogical value. However, there are serious issues with the premise and the presentation of the paper; all four reviewers discussed at length those issues and recommended rejection. Unfortunately the authors did not engage in discussion with the reviewers. Given the above, it is clear that the paper is not suitable for NeurIPS.
train
[ "xC8yt5DgQO", "a_ERlhoSgSN", "qIlDLrjDpej", "jwuB4ms9-dJ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors revisit the problem of overfitting to data set and construct an example where neural networks overfit to their training data without it affecting their performance. The authors hypothesise that this effect arises because the data only specifies parameters up to a certain equivalence class, rather than...
[ 2, 3, 2, 2 ]
[ 5, 4, 5, 5 ]
[ "nips_2022_B4EsCSj1vQL", "nips_2022_B4EsCSj1vQL", "nips_2022_B4EsCSj1vQL", "nips_2022_B4EsCSj1vQL" ]
nips_2022__ekGcr07Dsp
Meta-DMoE: Adapting to Domain Shift by Meta-Distillation from Mixture-of-Experts
In this paper, we tackle the problem of domain shift. Most existing methods perform training on multiple source domains using a single model, and the same trained model is used on all unseen target domains. Such solutions are sub-optimal as each target domain exhibits its own speciality, which is not adapted. Furthermore, expecting the single-model training to learn extensive knowledge from the multiple source domains is counterintuitive. The model is more biased toward learning only domain-invariant features and may result in negative knowledge transfer. In this work, we propose a novel framework for unsupervised test-time adaptation, which is formulated as a knowledge distillation process to address domain shift. Specifically, we incorporate Mixture-of-Experts (MoE) as teachers, where each expert is separately trained on different source domains to maximize their speciality. Given a test-time target domain, a small set of unlabeled data is sampled to query the knowledge from MoE. As the source domains are correlated to the target domains, a transformer-based aggregator then combines the domain knowledge by examining the interconnection among them. The output is treated as a supervision signal to adapt a student prediction network toward the target domain. We further employ meta-learning to enforce the aggregator to distill positive knowledge and the student network to achieve fast adaptation. Extensive experiments demonstrate that the proposed method outperforms the state-of-the-art and validates the effectiveness of each proposed component. Our code is available at https://github.com/n3il666/Meta-DMoE.
Accept
This paper addresses the DG-TTA problem setting, drawing inspiration from the recent Adaptive Risk Minimisation (ARM). It goes beyond ARM to introduce a mixture of experts and distills from those mixture of experts during adaptation. Reviewers agree the MoE idea makes sense, and the distillation-based adaptation is interesting and novel enough, and they appreciated the good results and evaluation. Concerns included various writing clarity issues and evaluation on other datasets, but these questions were generally resolved in the rebuttal. Given that concerns were resolved and all reviewers were positive, I recommend accept.
val
[ "wTrZGfEbqS", "HqPuGRTZaK2", "20KWLyfNRDf", "NmcsBmX37_", "ESUGrc0vWmq", "OV5hrO0qupe", "Gs-5BcQJnJh", "9ccygBrNCw", "-4Vojlq-7kL", "9HDZob8YOYL", "T8pil6fG05", "QuTGzopHig4", "WgywoXryN0H", "Vm0_ziIEbs", "71503_bGjwY" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The revisions look great to me.", " Dear Reviewer cPHa,\n\nWe thank you for the valuable time and comments to make our paper stronger. We have incorporated the contents in the rebuttal to the main paper and modified the Related Work. Please refer to the recent submitted revision and supplementary material. The ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "HqPuGRTZaK2", "20KWLyfNRDf", "-4Vojlq-7kL", "ESUGrc0vWmq", "OV5hrO0qupe", "WgywoXryN0H", "71503_bGjwY", "Vm0_ziIEbs", "Vm0_ziIEbs", "WgywoXryN0H", "WgywoXryN0H", "nips_2022__ekGcr07Dsp", "nips_2022__ekGcr07Dsp", "nips_2022__ekGcr07Dsp", "nips_2022__ekGcr07Dsp" ]
nips_2022_LvW71lgly25
Few-shot Relational Reasoning via Connection Subgraph Pretraining
Few-shot knowledge graph (KG) completion task aims to perform inductive reasoning over the KG: given only a few support triplets of a new relation $\bowtie$ (e.g., (chop,$\bowtie$,kitchen), (read,$\bowtie$,library)), the goal is to predict the query triplets of the same unseen relation $\bowtie$, e.g., (sleep,$\bowtie$,?). Current approaches cast the problem in a meta-learning framework, where the model needs to be first jointly trained over many training few-shot tasks, each being defined by its own relation, so that learning/prediction on the target few-shot task can be effective. However, in real-world KGs, curating many training tasks is a challenging ad hoc process. Here we propose Connection Subgraph Reasoner (CSR), which can make predictions for the target few-shot task directly without the need for pre-training on a human-curated set of training tasks. The key to CSR is that we explicitly model a shared connection subgraph between support and query triplets, as inspired by the principle of eliminative induction. To adapt to a specific KG, we design a corresponding self-supervised pretraining scheme with the objective of reconstructing automatically sampled connection subgraphs. Our pretrained model can then be directly applied to target few-shot tasks without the need for training few-shot tasks. Extensive experiments on real KGs, including NELL, FB15K-237, and ConceptNet, demonstrate the effectiveness of our framework: we show that even a learning-free implementation of CSR can already perform competitively with existing methods on target few-shot tasks; with pretraining, CSR can achieve significant gains of up to 52% on the more challenging inductive few-shot tasks where the entities are also unseen during (pre)training.
Accept
This paper studies the few-shot knowledge graph completion problem. It proposes learning a hypothesis proposal module that, given different support evidence graphs, finds a common hypothesis that is supported by the evidence. The authors present two approaches for the hypothesis proposal and evidence proposal modules: an optimization-based, training-free method, and a fully trainable GCNN approach. The reviewers agree that the proposed method is interesting and solid, the experiments are thorough, and the results provide valuable insights for future work. The concerns and questions raised by the reviewers are properly addressed in the authors' response.
train
[ "q57EuNaHq8f", "ljVAB83C-jd", "6ujVkom7C6l", "ur-XwW9by14", "OPaoAH6hU5W", "STRsO5HTU7G", "4CEHgZvVuU_", "kmVU0E7uW_", "esG688Ga-A", "oqQERaNAfNk", "GH20k547zEr", "5_djsQdOf3a", "qh_wBdd4ATN" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Those are helpful, thanks!", " Thank you for the questions and we apologize for the confusion from our writing. Please see our clarifications below:\n\n> “I don't see \"cook\" in the two-hop path?”\n\nSorry that we mistyped “chop” as “cook” in Line 62 as also pointed out by another reviewer. This is supposed to...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "ljVAB83C-jd", "6ujVkom7C6l", "oqQERaNAfNk", "4CEHgZvVuU_", "qh_wBdd4ATN", "5_djsQdOf3a", "kmVU0E7uW_", "GH20k547zEr", "nips_2022_LvW71lgly25", "qh_wBdd4ATN", "nips_2022_LvW71lgly25", "nips_2022_LvW71lgly25", "nips_2022_LvW71lgly25" ]
nips_2022_hq-p55-qil9
Associating Objects and Their Effects in Video through Coordination Games
We explore a feed-forward approach for decomposing a video into layers, where each layer contains an object of interest along with its associated shadows, reflections, and other visual effects. This problem is challenging since associated effects vary widely with the 3D geometry and lighting conditions in the scene, and ground-truth labels for visual effects are difficult (and in some cases impractical) to collect. We take a self-supervised approach and train a neural network to produce a foreground image and alpha matte from a rough object segmentation mask under a reconstruction and sparsity loss. Under reconstruction loss, the layer decomposition problem is underdetermined: many combinations of layers may reconstruct the input video. Inspired by the game theory concept of focal points---or \emph{Schelling points}---we pose the problem as a coordination game, where each player (network) predicts the effects for a single object without knowledge of the other players' choices. The players learn to converge on the ``natural'' layer decomposition in order to maximize the likelihood of their choices aligning with the other players'. We train the network to play this game with itself, and show how to design the rules of this game so that the focal point lies at the correct layer decomposition. We demonstrate feed-forward results on a challenging synthetic dataset, then show that pretraining on this dataset significantly reduces optimization time for real videos.
Accept
All reviewers found that the paper provides a novel, interesting solution, and is well written. They appreciated that the proposed method outperforms prior work on synthetic experiments and shows reasonable results on real data. The video results were particularly helpful in judging the results. The majority of the reviewers were concerned about the convergence of the proposed coordination game to the correct solution. While the authors provided some empirical evidence, a more formal analysis could alleviate concerns much more easily and would provide a strong justification for the proposed method. The requests by reviewers for more ablations, simple heuristic baselines, and quantitative results on real data were simply ignored by the authors. This does not induce confidence that any of these requests will be addressed in a final version.
train
[ "cE6mh6nDonm", "XHEiqLEiyKs", "i53eEMoSZME", "9UL0FwEj-C", "0tDygnG2df1", "ZYpYIQzydMW", "5b2zGKVxsEe", "7OeqDxMv9G-", "Mmp7WoNa8Ul", "6RsUhbJrFoL", "K9WOv_C2V2Z" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. We will incorporate this information into the revised version.", " Many thanks for your replies here. \n\nI think these answers should definitely be incorporated into the text of your paper. ", " Thank you for the detailed comments.\n\n**Technical contribution**\n\nThe Visual Cent...
[ -1, -1, -1, -1, -1, -1, -1, 4, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "XHEiqLEiyKs", "9UL0FwEj-C", "7OeqDxMv9G-", "K9WOv_C2V2Z", "Mmp7WoNa8Ul", "6RsUhbJrFoL", "nips_2022_hq-p55-qil9", "nips_2022_hq-p55-qil9", "nips_2022_hq-p55-qil9", "nips_2022_hq-p55-qil9", "nips_2022_hq-p55-qil9" ]
nips_2022_Gf5DxrgD2cT
Provably Efficient Model-Free Constrained RL with Linear Function Approximation
We study the constrained reinforcement learning problem, in which an agent aims to maximize the expected cumulative reward subject to a constraint on the expected total value of a utility function. In contrast to existing model-based approaches or model-free methods accompanied with a `simulator’, we aim to develop the first \emph{model-free}, \emph{simulator-free} algorithm that achieves a sublinear regret and a sublinear constraint violation even in \emph{large-scale} systems. To this end, we consider the episodic constrained Markov decision processes with linear function approximation, where the transition dynamics and the reward function can be represented as a linear function of some known feature mapping. We show that $\tilde{\mathcal{O}}(\sqrt{d^3H^3T})$ regret and $\tilde{\mathcal{O}}(\sqrt{d^3H^3T})$ constraint violation bounds can be achieved, where $d$ is the dimension of the feature mapping, $H$ is the length of the episode, and $T$ is the total number of steps. Our bounds are attained without explicitly estimating the unknown transition model or requiring a simulator, and they depend on the state space only through the dimension of the feature mapping. Hence our bounds hold even when the number of states goes to infinity. Our main results are achieved via novel adaptations of the standard LSVI-UCB algorithms. In particular, we first introduce primal-dual optimization into the LSVI-UCB algorithm to balance between regret and constraint violation. More importantly, we replace the standard greedy selection with respect to the state-action function with a soft-max policy. This turns out to be key in establishing uniform concentration (a critical step for provably efficient model-free exploration) for the constrained case via its approximation-smoothness trade-off. Finally, we also show that one can achieve an even zero constraint violation for large enough $T$ by trading the regret a little bit but still maintaining the same order with respect to $T$.
Accept
This paper considers minimizing the regret while learning a near-optimal policy in an episodic constrained MDP with linear function approximation. It proposes and analyzes a UCB-based algorithm proving sublinear regret. The reviewers found the paper well-motivated and technically sound, and unanimously recommend acceptance. Please incorporate the reviewers' feedback in the final version of the paper. In order to strengthen the final paper, it would be helpful to:
- Incorporate toy experiments and empirically validate some of the paper's claims
- Include a discussion about the tightness of the upper bound
val
[ "C58DZ2sP1E9", "grCFrYaSm1", "0WJqhJ4XG9W", "UbZ5CPOdcZY", "nZUUI9E0RVky", "4sQJtfA-ZaoO", "YHLcjOgjMHE", "ljTeh-eSvbs", "SV8MmycGTF_", "OIKMBKWhaJz", "IulKbjGOqWE", "rUz3NQXtIXF" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Since the discussion period is closing soon, we just wanted to check in and ask if the rebuttal clarified and answered the questions raised in your review. We would be very happy to engage further if there are additional questions! ", " I thank the authors for answering all my questions.", " In this part, we ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 5, 3 ]
[ "nZUUI9E0RVky", "ljTeh-eSvbs", "UbZ5CPOdcZY", "rUz3NQXtIXF", "4sQJtfA-ZaoO", "IulKbjGOqWE", "OIKMBKWhaJz", "SV8MmycGTF_", "nips_2022_Gf5DxrgD2cT", "nips_2022_Gf5DxrgD2cT", "nips_2022_Gf5DxrgD2cT", "nips_2022_Gf5DxrgD2cT" ]
nips_2022_lYZQRpqLesi
Get More at Once: Alternating Sparse Training with Gradient Correction
Recently, a new trend of exploring training sparsity has emerged, which removes parameters during training, leading to improvements in both training and inference efficiency. This line of work primarily aims to obtain a single sparse model under a pre-defined large sparsity ratio. It leads to a static/fixed sparse inference model that is not capable of adjusting or re-configuring its computation complexity (i.e., inference structure, latency) after training for real-world varying and dynamic hardware resource availability. To enable such run-time or post-training network morphing, the concept of `dynamic inference' or `training-once-for-all' has been proposed to train once a single network consisting of multiple sub-nets, where each sub-net can perform the same inference function with different computing complexity. However, the traditional dynamic inference training method requires a joint training scheme with multi-objective optimization, which suffers from very large training overhead. In this work, for the first time, we propose a novel alternating sparse training (AST) scheme to train multiple sparse sub-nets for dynamic inference without extra training cost compared to the case of training a single sparse model from scratch. Furthermore, to mitigate the interference of weight updates among sub-nets, we propose gradient correction within the inner-group iterations to reduce their weight update interference. We validate the proposed AST on multiple datasets against state-of-the-art sparse training methods, showing that AST achieves similar or better accuracy but only needs to train once to obtain multiple sparse sub-nets with different sparsity ratios. More importantly, compared with the traditional joint-training-based dynamic inference training methodology, the large training overhead is completely eliminated without affecting the accuracy of each sub-net.
Accept
The authors presented an alternating sparse training method for building dynamic inference networks. The method comes with an extensive empirical study over multiple benchmark datasets showcasing its advantages over prior dynamic inference methods. The paper is generally well organized and clearly presented. The technical novelty, however, seems to be somewhat limited given that the inspiration for the method and its theoretical interpretation are drawn largely from the findings in Reptile for gradient-based meta-learning. The reviewers also raised several other concerns regarding the connection to meta-learning, additional baselines, and experiment details. The authors managed to address some of these in their responses. After discussions, the reviewers reached a majority consensus that this is an interesting and technically sound paper whose strengths outweigh its weaknesses. Therefore, I recommend that the paper be accepted if room is available and the promised revision is made to address the issues raised.
test
[ "5ZcSwpMCQv", "sWc8-nCNZ1", "9BopUcJrSwa", "-2QNeIAKMm0", "b5PowE3K0J", "iBreb1b77BE", "sIY5oc5Z87", "KHNuQcotYPC", "5nFVT5TYgK", "_XzOmrZqjF", "x5Un7q8_hct", "X9HfTEp7j4", "SXAN8KSW2bA", "-yx33E0jMIA", "3G7TRqntLBp", "Lzheq3uCawN", "0RJ97EVVnv1", "_zxFekW-WZ2", "51Pmo2GceJg", ...
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Thank you for your clarification. I will increase my score to 5.", " Dear reviewer MN8L,\n\nThanks again for all your response and fruitful comments. We hope our detailed responses can resolve your questions and concerns, please kindly reconsider the overall score of our work\n\nBest wishes, Author", " Dear r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "KHNuQcotYPC", "KHNuQcotYPC", "51Pmo2GceJg", "5nFVT5TYgK", "iBreb1b77BE", "sIY5oc5Z87", "_XzOmrZqjF", "x5Un7q8_hct", "Lzheq3uCawN", "3G7TRqntLBp", "0RJ97EVVnv1", "rw2U3ZpBz0V", "BWzU4XxcHhH", "51Pmo2GceJg", "rw2U3ZpBz0V", "iiSC_0Zbwc2", "BWzU4XxcHhH", "51Pmo2GceJg", "nips_2022_lY...
nips_2022_zqQKGaNI4lp
Consistent Interpolating Ensembles via the Manifold-Hilbert Kernel
Recent research in the theory of overparametrized learning has sought to establish generalization guarantees in the interpolating regime. Such results have been established for a few common classes of methods, but so far not for ensemble methods. We devise an ensemble classification method that simultaneously interpolates the training data, and is consistent for a broad class of data distributions. To this end, we define the manifold-Hilbert kernel for data distributed on a Riemannian manifold. We prove that kernel smoothing regression using the manifold-Hilbert kernel is weakly consistent in the setting of Devroye et al. 1998. For the sphere, we show that the manifold-Hilbert kernel can be realized as a weighted random partition kernel, which arises as an infinite ensemble of partition-based classifiers.
Accept
This paper presents an extension of the Hilbert kernel in Devroye et al. (1998) to the Riemannian manifold setting and shows that kernel smoothing regression is consistent while interpolating the training data on the manifold. Reviewers generally appreciate the theoretical results presented and agree that this is a well-written paper, despite its technical nature. The authors did acknowledge the lack of a convergence rate, as pointed out by Reviewer w6Vf. Despite this limitation, I believe this would make a solid contribution to the theoretical study of interpolating ensemble methods. Note: Since the original reviewers lack expertise in Riemannian geometry, I asked an expert to provide an additional review (below). It also contains suggestions for improving the paper. The authors might also want to try to improve their presentation to make it more accessible to the general ML community. **Added extra review by an expert** The paper under review extended a result due to Devroye, Gyorfi, and Krzyzak [DGK98] on the Hilbert Kernel Regression Estimate from the case $M = \mathbb{R}^n$ to the case where $M$ is a complete Riemannian manifold. Its main technique is the Riemannian logarithm (Theorem 3.2, Lemma 4.1); see also Comment (2) below. As a result, combined with a remark by Pinelis [Pinelis19], the author(s) of the paper under review showed that a random partition of the sphere $S^d$ with a certain weight generates a manifold Hilbert kernel introduced by the author(s) in Theorem 3.2 and therefore has the interpolating consistency property (Theorem 5.2, Corollary 5.5). The results are correct and beautiful, and I strongly recommend publication. *Comments and suggestions* (1) l.48: $\alpha$ must belong to $L^1(\beta)$? (2) l. 224: *Alternative proof of Lemma 4.1*. It is known that $M = I_x\cup C_x$ where $I_x$ is diffeomorphic to $\tilde{I}_x \subset T_xM$ and $C_x$ is a closed subset of $M$ of Riemannian measure 0. 
Now define $\log_x : M \rightarrow T_xM$ as follows: for $m \in I_x$ set $\log_x(m) = \exp^{-1}_x(m)$; for $m \in C_x$ set $\log_x(m) = 0 \in T_xM$. This map is measurable, since it is continuous on $I_x$ and it maps the measurable subset $C_x$ of 0-measure to a point. (So we don't need the measurable selection and the reference Nr 13, which has not been correctly cited, and we don't need supplement A.1 as well as supplement A.2.) (Subsection 2 presents a sufficient background on Riemannian geometry for this paper in my opinion.) (3) l. 243-247: Propositions 4.2 (i) and (ii): a more general formula in geometric measure theory, called the area formula, is valid [AT04, p. 44-45]. The authors could keep their exposition but should add that it is a simple exposition of a known fact. *References* [AT04] Ambrosio, L., Tilli, P.: Topics in Analysis on Metric Spaces. Oxford University Press, Oxford (2004). [DGK98] Luc Devroye, Laszlo Gyorfi, and Adam Krzyzak. "The Hilbert kernel regression estimate". In: Journal of Multivariate Analysis 65.2 (1998), pp. 209-227. [Pinelis19] Iosif Pinelis, Probability of two points being divided by a high-dimensional hyperplane. MathOverflow. URL: https://mathoverflow.net/q/323697 (version: 2019-02-21). 2019.
train
[ "16-_r9_pJ4G", "nkaeP8ZDdD2", "l0ivZ7N5F0C", "6JWZZgrQOo", "sC2AtTWBP0U", "tx2cwJPCg4", "z5ypf0vM5Nn", "wfNIAdS8WTt" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the positive comments and the constructive criticisms.\n\n\n> the paper is very technical and very hard to read. \n\n\nWe added an additional figure (Figure 1 in the new revised version) to Section 1.1 and 1.2 to improve the exposition.\nHopefully this addresses some of the readabilit...
[ -1, -1, -1, -1, 5, 7, 7, 6 ]
[ -1, -1, -1, -1, 1, 2, 1, 1 ]
[ "wfNIAdS8WTt", "z5ypf0vM5Nn", "tx2cwJPCg4", "sC2AtTWBP0U", "nips_2022_zqQKGaNI4lp", "nips_2022_zqQKGaNI4lp", "nips_2022_zqQKGaNI4lp", "nips_2022_zqQKGaNI4lp" ]
nips_2022_OQtY993Y4TV
Learning Symmetric Rules with SATNet
SATNet is a differentiable constraint solver with a custom backpropagation algorithm, which can be used as a layer in a deep-learning system. It is a promising proposal for bridging deep learning and logical reasoning. In fact, SATNet has been successfully applied to learn, among others, the rules of a complex logical puzzle, such as Sudoku, just from input and output pairs where inputs are given as images. In this paper, we show how to improve the learning of SATNet by exploiting symmetries in the target rules of a given but unknown logical puzzle or more generally a logical formula. We present SymSATNet, a variant of SATNet that translates the given symmetries of the target rules to a condition on the parameters of SATNet and requires that the parameters should have a particular parametric form that guarantees the condition. The requirement dramatically reduces the number of parameters to learn for the rules with enough symmetries, and makes the parameter learning of SymSATNet much easier than that of SATNet. We also describe a technique for automatically discovering symmetries of the target rules from examples. Our experiments with Sudoku and Rubik's cube show the substantial improvement of SymSATNet over the baseline SATNet.
Accept
This paper describes a refinement of SATNet that incorporates or finds symmetries in constraint satisfaction problems. Experiments are given on Rubik's cube and Sudoku, both of which exhibit significant symmetries (rotations and reflections of the cube and permutations of some rows and columns in Sudoku). We have three reviews which are uniformly positive. One weakness expressed is that the experiments seem toyish. I also have that concern. I would have to see experiments on larger SAT problems before being convinced that there is something useful here. But as the reviews are positive, I will recommend acceptance.
train
[ "cjimpE--9Pe", "ojVW0QquAP", "YqXfQqE4m74", "pJpvBYe5i4v", "Fdzq21hhH_u", "CVjoaEwZaT", "gyOWuuJae0", "RHii6YHXDWX", "fMMfsVQYoc", "jnFmdRTN1-", "BntMhwEMWTh", "IK9RPveNn12", "gnw6XyJlLdY", "KZPnLmgfV57" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your reply with further comments and suggestions. We respond to the last point in the reply.\n- [Q5] Can you visualise the emergence of the soft symmetries, including the underfitting and overfitting mentioned in the author's response? Running the SymFind algorithm every epoch (or every few epochs) ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "YqXfQqE4m74", "CVjoaEwZaT", "RHii6YHXDWX", "Fdzq21hhH_u", "KZPnLmgfV57", "gyOWuuJae0", "gnw6XyJlLdY", "IK9RPveNn12", "jnFmdRTN1-", "BntMhwEMWTh", "nips_2022_OQtY993Y4TV", "nips_2022_OQtY993Y4TV", "nips_2022_OQtY993Y4TV", "nips_2022_OQtY993Y4TV" ]
nips_2022_Lz2N6UqRYqB
Off-Policy Evaluation for Episodic Partially Observable Markov Decision Processes under Non-Parametric Models
We study the problem of off-policy evaluation (OPE) for episodic Partially Observable Markov Decision Processes (POMDPs) with continuous states. Motivated by the recently proposed proximal causal inference framework, we develop a non-parametric identification result for estimating the policy value via a sequence of so-called V-bridge functions with the help of time-dependent proxy variables. We then develop a fitted-Q-evaluation-type algorithm to estimate V-bridge functions recursively, where a non-parametric instrumental variable (NPIV) problem is solved at each step. By analyzing this challenging sequential NPIV estimation, we establish the finite-sample error bounds for estimating the V-bridge functions and accordingly that for evaluating the policy value, in terms of the sample size, length of horizon and so-called (local) measure of ill-posedness at each step. To the best of our knowledge, this is the first finite-sample error bound for OPE in POMDPs under non-parametric models.
Accept
The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. This paper studies the off-policy evaluation problem (OPE) on POMDPs. The authors established nonparametric identification results linking the value of a policy to a sequence of V-bridge functions. One reviewer argued that this is an important problem and that the theoretical results in this paper are solid. As limitations, this reviewer argued that the paper makes many assumptions, and that it would be beneficial for readers if the authors could include further discussions on why such assumptions are reasonable. Another reviewer also acknowledged the paper's contributions to studying the OPE problem within the setting of partially observable MDPs. They believe the work is novel and that the authors have provided relevant theoretical results considering error bounds. As limitations, this reviewer points out that the method appears to require a set of hyper-parameters to be carefully tuned, which may not always be possible in the OPE setting, and that it is unclear how the misspecification of such hyper-parameters may impact errors. They also point out that the paper has no empirical analyses, but mentioned that this is acceptable since this is primarily a theory paper. Finally, a third reviewer argued that this paper makes a nice contribution to the field by approaching the OPE problem for POMDPs from an unconventional perspective. As a limitation, however, this reviewer points out that it is hard to read the paper, given the complex notation used, little to no intuitions given, and the lack of experiments. Overall, all reviewers were positively impressed with the quality of this work and look forward to an updated version of the paper that addresses the suggestions mentioned in their reviews.
test
[ "wSh88HQBaNw", "csQs52F4wtK", "n7tisL4rJOt", "16XZBijWNnC", "KeNeme0uTR", "v-M1DjWAbxp", "RhnAMfEg2Z7", "_hZXdS3nAi", "u7LQl_HE724", "6IYZn5AH-o", "woTX_a0Tm0", "915sg2uMuLD", "mXX0K4hy8e", "MUdhDlHe3vW", "Rv5Bj4Arsjm" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for your valuable and constructive comments on clarifying, correcting, and improving the materials in the paper!\nAs you have said,\n\n>I'd like to increase my rating to borderline accept.\n\nWe really appreciate your recognition. As the rebuttal DDL is approaching, could you please increase the ratin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3 ]
[ "KeNeme0uTR", "n7tisL4rJOt", "6IYZn5AH-o", "KeNeme0uTR", "_hZXdS3nAi", "RhnAMfEg2Z7", "mXX0K4hy8e", "u7LQl_HE724", "Rv5Bj4Arsjm", "woTX_a0Tm0", "MUdhDlHe3vW", "nips_2022_Lz2N6UqRYqB", "nips_2022_Lz2N6UqRYqB", "nips_2022_Lz2N6UqRYqB", "nips_2022_Lz2N6UqRYqB" ]
nips_2022_bhvUOhnsgZ
A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal
Online continual learning (OCL) aims to train neural networks incrementally from a non-stationary data stream with a single pass through data. Rehearsal-based methods attempt to approximate the observed input distributions over time with a small memory and revisit them later to avoid forgetting. Despite their strong empirical performance, rehearsal methods still suffer from a poor approximation of past data’s loss landscape with memory samples. This paper revisits the rehearsal dynamics in online settings. We provide theoretical insights on the inherent memory overfitting risk from the viewpoint of biased and dynamic empirical risk minimization, and examine the merits and limits of repeated rehearsal. Inspired by our analysis, a simple and intuitive baseline, repeated augmented rehearsal (RAR), is designed to address the underfitting-overfitting dilemma of online rehearsal. Surprisingly, across four rather different OCL benchmarks, this simple baseline outperforms vanilla rehearsal by 9\%-17\% and also significantly improves the state-of-the-art rehearsal-based methods MIR, ASER, and SCR. We also demonstrate that RAR successfully achieves an accurate approximation of the loss landscape of past data and high-loss ridge aversion in its learning trajectory. Extensive ablation studies are conducted to study the interplay between repeated and augmented rehearsal, and reinforcement learning (RL) is applied to dynamically adjust the hyperparameters of RAR to balance the stability-plasticity trade-off online.
Accept
This work considers rehearsal-based methods in continual learning and revisits the rehearsal dynamics. Most reviewers praised the analysis of overfitting/underfitting in replay-based methods and the presentation. The proposed repeated rehearsal with data augmentation was shown to be effective in empirical evaluations and the ablation studies. Finally, the authors addressed one of the main concerns raised by the reviewers, namely the relation to the reweighted ER baseline, during the rebuttal.
train
[ "srBqe-s_hXn", "23ZP3Qc40Ys", "eCYFham8yj_", "Pywagfnutw0", "2-eWeLP4tGd", "3hJs9jWp6xf", "kbDQRdd_TZz", "Gt6_vbruiyF", "ctTMrWq8P8o", "DFbOiH72IxB", "p83kuEwOAWp", "jSZN2HSKJk", "4ise5vydxUz", "3Ne1E4qwVW", "3vdZthOYWoJ" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewers for the constructive feedback and insightful discussions to help refine the paper. \n\nThe main revisions are briefly summarized as follows:\n* add a large-scale experiment using *ImageNet-1k*;\n* add discussion and comparison experiment with *DER*;\n* add discussion and experiments for *re...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "nips_2022_bhvUOhnsgZ", "eCYFham8yj_", "2-eWeLP4tGd", "nips_2022_bhvUOhnsgZ", "3hJs9jWp6xf", "ctTMrWq8P8o", "nips_2022_bhvUOhnsgZ", "jSZN2HSKJk", "4ise5vydxUz", "3Ne1E4qwVW", "3vdZthOYWoJ", "nips_2022_bhvUOhnsgZ", "nips_2022_bhvUOhnsgZ", "nips_2022_bhvUOhnsgZ", "nips_2022_bhvUOhnsgZ" ]
nips_2022_w0O3F4cTNfG
Causal Discovery in Linear Latent Variable Models Subject to Measurement Error
We focus on causal discovery in the presence of measurement error in linear systems where the mixing matrix, i.e., the matrix indicating the independent exogenous noise terms pertaining to the observed variables, is identified up to permutation and scaling of the columns. We demonstrate a somewhat surprising connection between this problem and causal discovery in the presence of unobserved parentless causes, in the sense that there is a mapping, given by the mixing matrix, between the underlying models to be inferred in these problems. Consequently, any identifiability result based on the mixing matrix for one model translates to an identifiability result for the other model. We characterize to what extent the causal models can be identified under a two-part faithfulness assumption. Under only the first part of the assumption (corresponding to the conventional definition of faithfulness), the structure can be learned up to the causal ordering among an ordered grouping of the variables but not all the edges across the groups can be identified. We further show that if both parts of the faithfulness assumption are imposed, the structure can be learned up to a more refined ordered grouping. As a result of this refinement, for the latent variable model with unobserved parentless causes, the structure can be identified. Based on our theoretical results, we propose causal structure learning methods for both models, and evaluate their performance on synthetic data.
Accept
There is a consensus among all expert reviewers that the paper provides solid and rigorous theoretical results that contribute to the sub-field of causal inference, especially the identifiability of linear non-Gaussian models **when there is measurement noise**, although the authors could have done better at providing concrete motivation and demonstrating the real-world impact of the results. Some of the issues, however, have been addressed during the discussion phase. I do hope the authors will take reviewers' comments into consideration when preparing the camera-ready version of this paper. The main contribution of this work is to show the analogy between linear causal models in the presence of measurement error and linear causal models **with parentless latent variables**. This constitutes an interesting perspective on causal discovery without causal sufficiency and enables the authors to obtain identifiability results based on the mixing matrix in both cases. The results, therefore, bridge an important gap in the causal discovery literature.
train
[ "k5T6_DwPAM", "vU7ho5jdkv_", "klfyLHbbe6", "vRf23dSgDHc", "eKLEfXWc4vj", "fmg_pzWRw9x", "ni9VC2SRNZMS", "WSnsibrPSlO", "Nspk89cOz8aB", "Hk5qoDAhYLD", "9zBkZ2sBYI", "0Cz1CiWjPOc", "XMiE25uNXJQ", "p5Iq_Gf09g2", "dVoWyGEocTj", "X0Yb616lsH5" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the response. We will definitely implement these changes if the paper is accepted.", " Thank you very much for the patient explanation regarding the star graph. Indeed, on my initial reading, I had missed that each induced subgraph is a star graph, and not the graph as a whole.\n\nThan...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "vU7ho5jdkv_", "XMiE25uNXJQ", "vRf23dSgDHc", "Hk5qoDAhYLD", "X0Yb616lsH5", "p5Iq_Gf09g2", "WSnsibrPSlO", "0Cz1CiWjPOc", "nips_2022_w0O3F4cTNfG", "X0Yb616lsH5", "dVoWyGEocTj", "dVoWyGEocTj", "p5Iq_Gf09g2", "nips_2022_w0O3F4cTNfG", "nips_2022_w0O3F4cTNfG", "nips_2022_w0O3F4cTNfG" ]
nips_2022_jtW73TIGnd
Maximum-Likelihood Quantum State Tomography by Soft-Bayes
Quantum state tomography (QST), the task of estimating an unknown quantum state given measurement outcomes, is essential to building reliable quantum computing devices. Whereas computing the maximum-likelihood (ML) estimate corresponds to solving a finite-sum convex optimization problem, the objective function is neither smooth nor Lipschitz, so most existing convex optimization methods lack sample complexity guarantees; moreover, both the sample size and dimension grow exponentially with the number of qubits in a QST experiment, so a desired algorithm should be highly scalable with respect to the dimension and sample size, just like stochastic gradient descent. In this paper, we propose a stochastic first-order algorithm that computes an $\varepsilon$-approximate ML estimate in $O( ( D \log D ) / \varepsilon ^ 2 )$ iterations with $O( D^3 )$ per-iteration time complexity, where $D$ denotes the dimension of the unknown quantum state and $\varepsilon$ denotes the optimization error. Our algorithm is an extension of Soft-Bayes to the quantum setup.
Reject
Overall: The paper proposes a stochastic first-order algorithm that computes an $\varepsilon$-approximate ML estimate for the QST problem. Reviews: The paper received four reviews. Borderline reject (less confident), Borderline accept (confident), Borderline reject (absolutely confident), Reject (confident). Overall, from the reviews, no reviewer champions the paper for acceptance. Main issues raised are: - Clarity of presentation, notation - Scalability/applicability - Less relevant to the ML community. After rebuttal: While the authors have been active in responding to the reviewers' comments, the rebuttal discussion was relatively silent. While the AC reached out to find additional reviewers, this effort was unsuccessful. One of the reviewers was responsive, but the outcome was that the paper still lacks significance and applicability within the ML community. This suggests that it might be preferable for the paper to be submitted to a near-future conference venue (and maybe a more theoretical one), following these suggestions and corrections. Confidence of reviews: The reviewers are fairly confident in their reviews. The thorough reviews among the four definitely get more weight than the rest of the reviews. Overall, the paper feels to be in a good state, but none of the reviewers feels extremely confident championing the paper for acceptance at this venue. We highly suggest that the authors consider near-future ML conferences or more quantum-related conferences for resubmission.
train
[ "kB2A1qmFqkK", "CIBnarEDZ59", "rSzWE4AdCo8", "k0aC8hAh-Mf", "9_VbkK19ktN", "Xmthabug5g7", "hRtCgIF16BC", "GaklIOxlCMD", "fvwKH99hDE", "0C7gNxYZ0Q5", "1EITIT9HJLA", "JQd9sEk7pX" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 3n1B, \n\nWe have answered the questions you raised about the problem formulation and you did not point out any insufficiency. We wonder why you still think the significance is borderline. We would appreciate it if you can explain what is still unsatisfactory to you, so we know how to improve on thi...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 5, 4 ]
[ "CIBnarEDZ59", "Xmthabug5g7", "nips_2022_jtW73TIGnd", "nips_2022_jtW73TIGnd", "fvwKH99hDE", "0C7gNxYZ0Q5", "1EITIT9HJLA", "JQd9sEk7pX", "nips_2022_jtW73TIGnd", "nips_2022_jtW73TIGnd", "nips_2022_jtW73TIGnd", "nips_2022_jtW73TIGnd" ]
nips_2022_An5MaWw4L4I
Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits
We study the regret of Thompson sampling (TS) algorithms for exponential family bandits, where the reward distribution is from a one-dimensional exponential family, which covers many common reward distributions including Bernoulli, Gaussian, Gamma, Exponential, etc. We propose a Thompson sampling algorithm, termed ExpTS, which uses a novel sampling distribution to avoid the under-estimation of the optimal arm. We provide a tight regret analysis for ExpTS, which simultaneously yields both the finite-time regret bound as well as the asymptotic regret bound. In particular, for a $K$-armed bandit with exponential family rewards, ExpTS over a horizon $T$ is sub-UCB (a strong criterion for the finite-time regret that is problem-dependent), minimax optimal up to a factor $\sqrt{\log K}$, and asymptotically optimal, for exponential family rewards. Moreover, we propose ExpTS$^+$, by adding a greedy exploitation step in addition to the sampling distribution used in ExpTS, to avoid the over-estimation of sub-optimal arms. ExpTS$^+$ is an anytime bandit algorithm and achieves the minimax optimality and asymptotic optimality simultaneously for exponential family reward distributions. Our proof techniques are general and conceptually simple and can be easily applied to analyze standard Thompson sampling with specific reward distributions.
Accept
This paper provides a good contribution on Thompson sampling for exponential families, while several concerns about the presentation and technical points were raised. After the discussions with reviewers and my own reading of the paper, I judged that the issues with the presentation are not so serious, and considering the strong results of the paper, I decided to recommend acceptance. Still, I partly agree with the opinion that the technical novelty is not sufficient or not well presented, and I strongly encourage the authors to seriously address the raised concerns in the final version. The following are my own comments on the paper. - In the discussion phase the authors explained that the algorithm can also handle exponential families other than Bernoulli or Gaussian distributions by restricting the parameter space. Though this is supported by existing work, in my experience some problems become significantly easier when we are allowed to consider a compact parameter space. Though the boundedness of the space is not explicitly used, I believe that this limitation on the model must at least be explicitly clarified (or practical examples of models without restriction, other than Bernoullis or Gaussians, should be given). In practice we would not know the exact space, and this raises the further problem of whether we can choose the space conservatively. The response on the necessity of bounded $V$ is not convincing to me and at least not formally explained. - One of the biggest reasons for the success of TS would be its easy implementation. From this viewpoint the proposed algorithm does not seem practical, and the original motivation for using TS seems somewhat weakened. In particular, for exponential families, KL-based algorithms are easily implemented. Though this kind of algorithm sometimes requires solving the inverse of the KL divergence, the instance to be solved converges rapidly and the actual number of iterations for computing the inverse becomes very small. 
On the other hand, the proposed algorithm requires solving a randomly sampled instance, and the number of iterations at each round seems to become considerably large. This is fine if the TS-based algorithm is essentially necessary to achieve the bounds of the paper, but this does not seem to be explained well, and I expect the motivation for using the TS-based algorithm despite its computational burden to be clarified (beyond its technical interest).
train
[ "2ioga1hc0HD", "cZs7iwaFr90", "XC8RzUY9Tj2", "s0nNPX3JnJ", "ZF4HVCaAN_w", "AMAru9g3ByK", "G9es9v6e4vJ", "GKtMfZgktk4", "nkXnkpnrjvk", "UWiAcw7PEr", "G_vjvIW96q", "FxZeayCzQW", "7zBo4uEdPu", "w_5zci0d0AP", "qO5a8C3NqJL", "8tdwaCbDMdw", "1bNYNroNGXu", "tzXJwLe9-gD_", "ND8E0_BFna", ...
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "...
[ " Thank you for the clarification. \nI understood that the authors' standpoint that the bounded variance is justified through the existing work, and exponential and gamma distributions can also be covered by restriction of the parameter space. \n(I refrain from writing my own opinion here for neutrality as an AC....
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "ZF4HVCaAN_w", "XC8RzUY9Tj2", "s0nNPX3JnJ", "ZF4HVCaAN_w", "AMAru9g3ByK", "nips_2022_An5MaWw4L4I", "UWiAcw7PEr", "G_vjvIW96q", "FxZeayCzQW", "7zBo4uEdPu", "7zBo4uEdPu", "1bNYNroNGXu", "LNw3ONc3p1B", "hc6OesoOZ_QF", "8tdwaCbDMdw", "ND8E0_BFna", "Yr3KjbQUoBD", "QjoQ8PBrvjvS", "dhNw...
nips_2022_o8nYuR8ekFm
PAC-Bayes Compression Bounds So Tight That They Can Explain Generalization
While there has been progress in developing non-vacuous generalization bounds for deep neural networks, these bounds tend to be uninformative about why deep learning works. In this paper, we develop a compression approach based on quantizing neural network parameters in a linear subspace, profoundly improving on previous results to provide state-of-the-art generalization bounds on a variety of tasks, including transfer learning. We use these tight bounds to better understand the role of model size, equivariance, and the implicit biases of optimization, for generalization in deep learning. Notably, we find large models can be compressed to a much greater extent than previously known, encapsulating Occam’s razor.
Accept
After reading the submission and reviews, my understanding is that this submission combines the compression approach of Zhou et al. with PAC-Bayes theory to obtain tight generalization bounds. While the approach is a mere combination of pre-existing approaches, the insight that led to this method provides compelling results for generalization bounds, and discussion with reviewers has strengthened the submission. Therefore, I recommend this paper for acceptance.
train
[ "6WodT1K7_os", "PhJZrlJ4xQl", "5jQ5LsCU7Hv", "tBmP6reotQR", "yWZNl-YCw32", "VcSvAVJzQmo", "zMPHZ9BZcwa6", "a9gM-qbTSVX", "vxCeAJSYaZI", "bTf427vH4k", "OXHm3YNqCeB", "ZLYPH_SULv2", "S29rLdZJv9Q", "pbPO2EXDTi" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your thoughtful review. Does our response help address your questions? We would appreciate the opportunity to engage further if needed.", " Thank you again for your thoughtful review. Does our response help address your questions? We would appreciate the opportunity to engage further if need...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 2 ]
[ "zMPHZ9BZcwa6", "VcSvAVJzQmo", "yWZNl-YCw32", "a9gM-qbTSVX", "ZLYPH_SULv2", "S29rLdZJv9Q", "pbPO2EXDTi", "vxCeAJSYaZI", "OXHm3YNqCeB", "nips_2022_o8nYuR8ekFm", "nips_2022_o8nYuR8ekFm", "nips_2022_o8nYuR8ekFm", "nips_2022_o8nYuR8ekFm", "nips_2022_o8nYuR8ekFm" ]
nips_2022_LtJMqnbslJe
Fault-Aware Neural Code Rankers
Large language models (LLMs) have demonstrated an impressive ability to generate code for various programming tasks. In many instances, LLMs can generate a correct program for a task when given numerous trials. Consequently, a recent trend is to do large-scale sampling of programs using a model and then filtering/ranking the programs based on the program execution on a small number of known unit tests to select one candidate solution. However, these approaches assume that the unit tests are given and assume the ability to safely execute the generated programs (which can do arbitrary dangerous operations such as file manipulations). Both of the above assumptions are impractical in real-world software development. In this paper, we propose CodeRanker, a neural ranker that can predict the correctness of a sampled program without executing it. Our CodeRanker is fault-aware, i.e., it is trained to predict different kinds of execution information such as predicting the exact compile/runtime error type (e.g., an IndexError or a TypeError). We show that CodeRanker can significantly increase the pass@1 accuracy of various code generation models (including Codex, GPT-Neo, GPT-J) on APPS, HumanEval and MBPP datasets.
Accept
The paper was well-received. The main idea is fairly simple, but the problem is important and the writing and empirical evaluation are solid. Based on the reviewers' advice, I am recommending acceptance. Please make sure to incorporate the reviewer feedback into the final version.
test
[ "4bUjofPJlEo", "XP0jelHXTUq", "Kc61PnnGUS", "jGVsmbiY0g-", "-lJQNxep2k", "0dwV1rRqurI", "BZgXKOX8Ox", "pcM7Hyj6lBL0", "QBIFu57YqRe", "xYlc9qNydmt", "qNMK4HhlOt", "iRPneE4ZmML", "gBEI8iSSLf9" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you, we are very glad that your suggestions helped us to perform a useful experiment! ", " I appreciate these details. These are indeed important. I am increasing my score. Thank you.", " Your concerns about setting a strong baseline is valid and we did set up a run with GPT-Neo 1.3B ranker before our r...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "XP0jelHXTUq", "Kc61PnnGUS", "-lJQNxep2k", "QBIFu57YqRe", "xYlc9qNydmt", "BZgXKOX8Ox", "pcM7Hyj6lBL0", "gBEI8iSSLf9", "iRPneE4ZmML", "qNMK4HhlOt", "nips_2022_LtJMqnbslJe", "nips_2022_LtJMqnbslJe", "nips_2022_LtJMqnbslJe" ]
nips_2022_NYpU9BRODos
Accelerated Training of Physics-Informed Neural Networks (PINNs) using Meshless Discretizations
Physics-informed neural networks (PINNs) are neural networks trained by using physical laws in the form of partial differential equations (PDEs) as soft constraints. We present a new technique for the accelerated training of PINNs that combines modern scientific computing techniques with machine learning: discretely-trained PINNs (DT-PINNs). The repeated computation of the partial derivative terms in the PINN loss functions via automatic differentiation during training is known to be computationally expensive, especially for higher-order derivatives. DT-PINNs are trained by replacing these exact spatial derivatives with high-order accurate numerical discretizations computed using meshless radial basis function-finite differences (RBF-FD) and applied via sparse-matrix vector multiplication. While in principle any high-order discretization may be used, the use of RBF-FD allows for DT-PINNs to be trained even on point cloud samples placed on irregular domain geometries. Additionally, though traditional PINNs (vanilla-PINNs) are typically stored and trained in 32-bit floating-point (fp32) on the GPU, we show that for DT-PINNs, using fp64 on the GPU leads to significantly faster training times than fp32 vanilla-PINNs with comparable accuracy. We demonstrate the efficiency and accuracy of DT-PINNs via a series of experiments. First, we explore the effect of network depth on both numerical and automatic differentiation of a neural network with random weights and show that RBF-FD approximations of third-order accuracy and above are more efficient while being sufficiently accurate. We then compare the DT-PINNs to vanilla-PINNs on both linear and nonlinear Poisson equations and show that DT-PINNs achieve similar losses with 2-4x faster training times on a consumer GPU. 
Finally, we also demonstrate that similar results can be obtained for the PINN solution to the heat equation (a space-time problem) by discretizing the spatial derivatives using RBF-FD and using automatic differentiation for the temporal derivative. Our results show that fp64 DT-PINNs offer a superior cost-accuracy profile to fp32 vanilla-PINNs, opening the door to a new paradigm of leveraging scientific computing techniques to support machine learning.
Accept
All reviewers agreed that this paper has several strengths, such as a convincing motivation, a well-structured and well-formulated model, and solid theoretical grounding. While two reviewers had a very positive general impression of the paper (emphasizing, in particular, the novelty and originality of this work), one reviewer raised some concerns about the application cases being too simplistic and not well suited for demonstrating potential strengths or weaknesses of the method. In my opinion, however, these concerns (and further questions) were addressed reasonably well in the rebuttal, and therefore, I recommend accepting this paper.
train
[ "t6USY6BR_WO", "_YXQpbIo2X", "wnAlgPtErRX", "5VLYX8dCIeS", "2YeSv3uZEy-", "g8eTSZsaioR", "GPMw0AEAeh", "8NEPpUpZpr23", "-T_VXaSDoq", "COzuozm_g7P", "jBUxph7qchp", "gMnonArCSYM", "mlNyNkHcKiF", "xRiBSqPLKJn", "6r2jauq202C" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We now have some limited results on an irregular domain in the appendix.", " We have now added error bars to the results in the appendix also.", " Here, we summarize the revisions made in response to reviewer comments.\n\n1. All our plots now have error bars.\n2. A more detailed justification for the speedups...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "mlNyNkHcKiF", "COzuozm_g7P", "nips_2022_NYpU9BRODos", "2YeSv3uZEy-", "8NEPpUpZpr23", "6r2jauq202C", "6r2jauq202C", "6r2jauq202C", "xRiBSqPLKJn", "xRiBSqPLKJn", "mlNyNkHcKiF", "mlNyNkHcKiF", "nips_2022_NYpU9BRODos", "nips_2022_NYpU9BRODos", "nips_2022_NYpU9BRODos" ]
nips_2022_oR5WIUtsXmx
On the Frequency-bias of Coordinate-MLPs
We show that typical implicit regularization assumptions for deep neural networks (for regression) do not hold for coordinate-MLPs, a family of MLPs that are now ubiquitous in computer vision for representing high-frequency signals. Lack of such implicit bias disrupts smooth interpolations between training samples, and hampers generalizing across signal regions with different spectra. We investigate this behavior through a Fourier lens and uncover that as the bandwidth of a coordinate-MLP is enhanced, lower frequencies tend to get suppressed unless a suitable prior is provided explicitly. Based on these insights, we propose a simple regularization technique that can mitigate the above problem, which can be incorporated into existing networks without any architectural modifications.
Accept
The paper initially received three positive reviews and one borderline reject. After the rebuttal, all reviewers voted to accept the paper. The area chair agrees with their assessment and follows their recommendation.
train
[ "owM-C9yt0PK", "mR6cc8Yhpw7", "ntQwWAlI2B", "nYTCZY2kUlf", "weJTrW8a06", "T00PApQfd6o", "dmGAh1fwVu", "ZQ6ItF6huiY", "pwGgaCwfJlO", "Zi5-O6Tlyqo", "FJFGgVq-My", "eYcVplekfnt", "u02FjINSmpN", "-whu1d_SZG", "TN19qPCgLw", "71IfhW6B2Q9" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarifications and the response, I have no further concerns. I'll keep my score at 6.", " We thank the reviewer for the enlightening discussion and apologize for misunderstanding the comment regarding the selection of the subset. In the current work, we did not use any specific method to choos...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "FJFGgVq-My", "nYTCZY2kUlf", "ZQ6ItF6huiY", "pwGgaCwfJlO", "eYcVplekfnt", "71IfhW6B2Q9", "71IfhW6B2Q9", "71IfhW6B2Q9", "TN19qPCgLw", "TN19qPCgLw", "-whu1d_SZG", "u02FjINSmpN", "nips_2022_oR5WIUtsXmx", "nips_2022_oR5WIUtsXmx", "nips_2022_oR5WIUtsXmx", "nips_2022_oR5WIUtsXmx" ]
nips_2022_2ndfW2bw4mi
Geodesic Self-Attention for 3D Point Clouds
Due to its outstanding competence in capturing long-range relationships, the self-attention mechanism has achieved remarkable progress in point cloud tasks. Nevertheless, point cloud objects often have complex non-Euclidean spatial structures, with behavior that changes dynamically and unpredictably. Most current self-attention modules rely heavily on dot-product multiplication in Euclidean space, which cannot capture the internal non-Euclidean structures of point cloud objects, especially the long-range relationships along the curve of the implicit manifold surface represented by a point cloud object. To address this problem, in this paper, we introduce a novel metric on the Riemannian manifold to capture the long-range geometrical dependencies of point cloud objects, replacing traditional self-attention modules with the Geodesic Self-Attention (GSA) module. Our approach achieves state-of-the-art performance compared to point cloud Transformers on object classification, few-shot classification and part segmentation benchmarks.
Accept
The paper addresses an issue of existing self-attention module that is mainly designed for data on Euclidean domain; for those on non-Euclidean domains, e.g., those on Riemannian manifold, the paper proposes a Geodesic self-attention counterpart. Experiments on tasks of 3D classification and segmentation show the efficacy. All reviewers acknowledge the problem importance and contributions made in the paper, although a few concerns are raised, including additional ablation studies and comparisons with other methods using geodesic metrics. In the rebuttal, the authors clearly respond and address the reviewers’ concerns. Acceptance is recommended. Congratulations!
val
[ "nEbe5oKVgm1", "OVdNI8ZJfe9", "1cicIPxcpECh", "zTrv73kIlyC", "6bLqnyF8T3u", "-gTWHVOMD78", "wk-_2ir1ceG", "e_V-9iknZKm", "mm6zjpz148U", "WKQMRNiQQ2L", "iqd81KZcqu1" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Sorry to bother you again, we appreciate your effort in helping us make the paper stronger.\n\nYou mentioned that \"For now, I would recommend a **borderline accept rating (5)** for the paper.\" But the temporary rating we obtained is a **borderline reject rating (4)** in the above review system. Could I give you...
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "mm6zjpz148U", "mm6zjpz148U", "-gTWHVOMD78", "-gTWHVOMD78", "wk-_2ir1ceG", "iqd81KZcqu1", "WKQMRNiQQ2L", "mm6zjpz148U", "nips_2022_2ndfW2bw4mi", "nips_2022_2ndfW2bw4mi", "nips_2022_2ndfW2bw4mi" ]
nips_2022_I3mLa12s_H
Point Transformer V2: Grouped Vector Attention and Partition-based Pooling
As a pioneering work exploring transformer architecture for 3D point cloud understanding, Point Transformer achieves impressive results on multiple highly competitive benchmarks. In this work, we analyze the limitations of the Point Transformer and propose our powerful and efficient Point Transformer V2 model with novel designs that overcome the limitations of previous work. In particular, we first propose grouped vector attention, which is more effective than the previous version of vector attention. Inheriting the advantages of both learnable weight encoding and multi-head attention, we present a highly effective implementation of grouped vector attention with a novel grouped weight encoding layer. We also strengthen the position information for attention by an additional position encoding multiplier. Furthermore, we design novel and lightweight partition-based pooling methods which enable better spatial alignment and more efficient sampling. Extensive experiments show that our model achieves better performance than its predecessor and achieves state-of-the-art on several challenging 3D point cloud understanding benchmarks, including 3D point cloud segmentation on ScanNet v2 and S3DIS and 3D point cloud classification on ModelNet40. Our code will be available at https://github.com/Gofinge/PointTransformerV2.
Accept
The paper focuses on modifying specific internal modules in the PointTransformer pipeline for added benefits. It received four detailed reviews from expert reviewers. The discussion period included healthy back-and-forth between the authors and the reviewers, and many of their concerns were addressed and their questions answered. The AC does partially agree with Reviewer MnQR's assessment of the broad impact of this paper, specifically that it may not have a significant influence on the future of research in 3D recognition given the simple technical improvements it proposes. However, the empirical influence of these improvements and their justification/motivation are clear, as clearly stated by the other reviewers. As such, the AC sees that the current level of impact and contribution of the paper reaches the level expected for NeurIPS.
train
[ "VM0o0YFg3m4", "RxqhdMf-MHj", "L3nfmh5e1CY", "Bt0EunutcA", "SSe4L9pz1aP", "b7bsYOKuvD8", "SXSe7EKOywe", "Si3LomCl7fu", "HerDobIqCUL", "9esCMUn_Rl9", "XIokL4a202", "yyj9uAUCqxP", "mjKUBIIAJz" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your comment and acknowledgment. We will optimize our final paper based on our discussion. A well-designed codebase for point cloud representation learning will also be released with our paper. We believe our work will contribute to the development of our area.", " **(f) About Grouped Linear**\n> Als...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Bt0EunutcA", "SSe4L9pz1aP", "SSe4L9pz1aP", "HerDobIqCUL", "b7bsYOKuvD8", "9esCMUn_Rl9", "XIokL4a202", "yyj9uAUCqxP", "mjKUBIIAJz", "nips_2022_I3mLa12s_H", "nips_2022_I3mLa12s_H", "nips_2022_I3mLa12s_H", "nips_2022_I3mLa12s_H" ]
nips_2022_avJW5-PRzV
A Simple Approach to Automated Spectral Clustering
The performance of spectral clustering heavily relies on the quality of affinity matrix. A variety of affinity-matrix-construction (AMC) methods have been proposed but they have hyperparameters to determine beforehand, which requires strong experience and leads to difficulty in real applications, especially when the inter-cluster similarity is high and/or the dataset is large. In addition, we often need to choose different AMC methods for different datasets, which still depends on experience. To solve these two challenging problems, in this paper, we present a simple yet effective method for automated spectral clustering. First, we propose to find the most reliable affinity matrix via grid search or Bayesian optimization among a set of candidates given by different AMC methods with different hyperparameters, where the reliability is quantified by the \textit{relative-eigen-gap} of graph Laplacian introduced in this paper. Second, we propose a fast and accurate AMC method based on least squares representation and thresholding and prove its effectiveness theoretically. Finally, we provide a large-scale extension for the automated spectral clustering method, of which the time complexity is linear with the number of data points. Extensive experiments of natural image clustering show that our method is more versatile, accurate, and efficient than baseline methods.
Accept
The paper considers the problem of the choice of affinity matrix and associated hyperparameters for spectral clustering. The main contributions are a notion of relative eigen-gap to measure the quality of a clustering, the use of least squares representation with thresholding to construct the affinity matrix, and a novel method for large scale clustering. Extensive numerical results show the effectiveness of the proposed methods. In sum the paper provides an effective approach to automated hyperparameter selection for affinity-based spectral clustering.
train
[ "p7mGqUGQXgH", "U070WcIWhZn", "b5utJ9l3WMe", "gOOqMMBr3ws", "QwvkaypRJAJ", "exJG93jLv_", "4Zsc9A30vv_", "vjFB4K1EFpb", "q6ySom9PtJT", "ySzcHpmYXcS", "lwH9D9PVUPA", "HOYiGxYCZ6J", "BgrQU1eoSP", "KfDwPLgCBuJ", "-79ecSbdw2l", "qA-kRL2jXI1" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate your comments, suggestions, and increased score.", " The authors sincerely thank you for recognizing our work and increasing the score. You comments have made our paper stronger.", " I thank the authors for a solid and thorough rebuttal. I believe that this has made the contributions m...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 5 ]
[ "gOOqMMBr3ws", "b5utJ9l3WMe", "vjFB4K1EFpb", "lwH9D9PVUPA", "BgrQU1eoSP", "ySzcHpmYXcS", "q6ySom9PtJT", "HOYiGxYCZ6J", "qA-kRL2jXI1", "-79ecSbdw2l", "KfDwPLgCBuJ", "BgrQU1eoSP", "nips_2022_avJW5-PRzV", "nips_2022_avJW5-PRzV", "nips_2022_avJW5-PRzV", "nips_2022_avJW5-PRzV" ]
nips_2022_v_0F4IZJZw
Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models
Pre-trained language models (LMs) are shown to easily generate toxic language. In this work, we systematically explore domain-adaptive training to reduce the toxicity of language models. We conduct this study on three dimensions: training corpus, model size, and parameter efficiency. For the training corpus, we demonstrate that using self-generated datasets consistently outperforms the existing baselines across various model sizes on both automatic and human evaluations, even when it uses a 1/3 smaller training corpus. We then comprehensively study detoxifying LMs with parameter sizes ranging from 126M up to 530B (3× larger than GPT-3), a scale that has never been studied before. We find that i) large LMs have similar toxicity levels as smaller ones given the same pre-training corpus, and ii) large LMs require more endeavor to unlearn the toxic content seen at pretraining. We also explore parameter-efficient training methods for detoxification. We demonstrate that adding and training adapter-only layers in LMs not only saves a lot of parameters but also achieves a better trade-off between toxicity and perplexity than whole model adaptation for large-scale models. Our code will be available at: https://github.com/NVIDIA/Megatron-LM/.
Accept
This paper explores domain-adaptive training (i.e., continued pretraining on special data to remove unwanted model behavior) with respect to the influence of model size, training corpus, and parameter efficiency. All technical reviewers lean toward accepting this paper. However, there are also some ethics concerns.
train
[ "CRgNWANeZAd", "lO55eFPI0a", "dMYS7fDge-Q", "Cq1avlqosV", "7PB9gypsrP", "o28rkcQTvAB", "CO4W9nvcnw", "6uMV4ca-tp", "20nV3rqqAiv", "0ySNOJs0ddJ", "3cXPSTf8o8C", "n4Pbr8HtvXic", "TbbEEMsArYf", "0uZqNWtKqXa", "KCReUHtPWSA", "Vr0gKaElOD", "JD23a2VrLvb", "b_EH9YC-VV6", "c1Pbua6b4Ro", ...
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", ...
[ " You did it! I will definitely raise my score.", " Dear Reviewer,\n\nThis is a continued reply with the empirical evidence that supports our claim in the previous clarification response:\n\n> The nontoxic OWTC 50K dataset, if it is filtered the same way as SGEAT (i.e., both of them have the same toxicity dataset...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "lO55eFPI0a", "Cq1avlqosV", "Cq1avlqosV", "7PB9gypsrP", "o28rkcQTvAB", "6uMV4ca-tp", "20nV3rqqAiv", "c1Pbua6b4Ro", "KCReUHtPWSA", "0uZqNWtKqXa", "m3gOKoHpHL", "nips_2022_v_0F4IZJZw", "nips_2022_v_0F4IZJZw", "U7k8JzKm8hf", "U7k8JzKm8hf", "d0dW7V_bTdH", "nips_2022_v_0F4IZJZw", "d0dW7...
nips_2022_PfStAhJ2t1g
A Variational Edge Partition Model for Supervised Graph Representation Learning
Graph neural networks (GNNs), which propagate the node features through the edges and learn how to transform the aggregated features under label supervision, have achieved great success in supervised feature extraction for both node-level and graph-level classification tasks. However, GNNs typically treat the graph structure as given and ignore how the edges are formed. This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities, each of which contributes to the edges via a logical OR mechanism. Based on this generative model, we partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs. A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN-based predictor that combines community-specific GNNs for the end classification task. Extensive evaluations on real-world graph datasets have verified the effectiveness of the proposed method in learning discriminative representations for both node-level and graph-level classification tasks.
Accept
This paper has novel ideas and good experimentation, and is well written. In particular, the novelty is a highlight of this paper, compared with many existing k-hop ideas. Overall, it is a good addition to the general GNN literature. There was some initial disagreement on the evaluation of the method, which was later addressed by the authors' detailed response.
train
[ "mPRzEgnFMbH", "9_QD4pKDBp", "ObFPRzOqqST", "17gwX2IPh3f", "hc0C-2cEJtc", "D1pvQJmTe-a", "_2ByAoQdNcr", "QDnIX4jP3H", "U6ihdHnSxL", "6UN-nbOseEj", "s0kp6GUcVvc2", "-UBwjrjdKgco", "zuhcEXjWiZe", "xAtY3XS1_xb", "a-S9StNil9z", "gTEFvVZOeVw", "buCVueKitgx", "n_cq8JemsJ", "K_pM9msIRZh...
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for further engaging with me. \n\nRegarding my suggestions on 1) more explanations on results of training from scratch and 2) more analyses on community detections, I hope the authors clearly include them in their paper, in the next revision.\n\nRegarding my concern about more large and recent benchmark...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 4 ]
[ "17gwX2IPh3f", "hc0C-2cEJtc", "D1pvQJmTe-a", "_2ByAoQdNcr", "xAtY3XS1_xb", "zuhcEXjWiZe", "U6ihdHnSxL", "a-S9StNil9z", "6UN-nbOseEj", "K_pM9msIRZh", "-UBwjrjdKgco", "n_cq8JemsJ", "buCVueKitgx", "gTEFvVZOeVw", "nips_2022_PfStAhJ2t1g", "nips_2022_PfStAhJ2t1g", "nips_2022_PfStAhJ2t1g", ...
nips_2022_HZ20IYYAwah
Sparsity in Continuous-Depth Neural Networks
Neural Ordinary Differential Equations (NODEs) have proven successful in learning dynamical systems in terms of accurately recovering the observed trajectories. While different types of sparsity have been proposed to improve robustness, the generalization properties of NODEs for dynamical systems beyond the observed data are underexplored. We systematically study the influence of weight and feature sparsity on forecasting as well as on identifying the underlying dynamical laws. Besides assessing existing methods, we propose a regularization technique to sparsify ``input-output connections'' and extract relevant features during training. Moreover, we curate real-world datasets including human motion capture and human hematopoiesis single-cell RNA-seq data to realistically analyze different levels of out-of-distribution (OOD) generalization in forecasting and dynamics identification respectively. Our extensive empirical evaluation on these challenging benchmarks suggests that weight sparsity improves generalization in the presence of noise or irregular sampling. However, it does not prevent learning spurious feature dependencies in the inferred dynamics, rendering them impractical for predictions under interventions, or for inferring the true underlying dynamics. Instead, feature sparsity can indeed help with recovering sparse ground-truth dynamics compared to unregularized NODEs.
Accept
The paper proposes a new sparsity inducing regularization scheme for continuous-depth neural networks. The reviewers acknowledged the relevance of the proposed method and generally appreciated the results. The paper is nicely written and provides a range of interesting experiments demonstrating the effectiveness of the proposed method. I want to thank the authors for their detailed responses that helped in answering some of the reviewers' questions. (The reviewers have provided detailed feedback in their reviews, and we strongly encourage the authors to incorporate this feedback when preparing a revised version of the paper. I also would like to encourage the authors to carefully revise the related work section on continuous-depth neural nets to better acknowledge work that has appeared in the last 2 years.) In summary, the feedback of the reviewers is positive and thus I recommend accepting this paper.
train
[ "HJc1h64xaS", "7QCv4w28C-1", "VN4-7VTMKU8", "M9g3dtg56H", "zbFqaOtINj", "1GXgy29FAdD", "23--vOXVyCY", "8BjH5AVr5l2", "pmE7PZ5rVqK", "PrscPRZ63HO", "WWpCOKSqHCQS", "xxr1hpz3HDV", "g7u_qvQDjQl", "DLmRm-WWyZ5", "MELHWFTs6s", "m9h1BC0cSJ0", "tRfpodoZBQ", "aSsQGjEUusA", "klHaBn0psAt",...
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " We just uploaded another revision of our manuscript. In particular, we added drafts of how we will address\n\n* (2) added runtimes in Appendix A\n* (4) caption Figure 3\n* (5) caption Figure 2\n* (6) Appendix C.2\n* (9) Appendix B.3\n\nbut will add more details and polish the writing for the final revision. We ha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 2 ]
[ "23--vOXVyCY", "8BjH5AVr5l2", "pmE7PZ5rVqK", "23--vOXVyCY", "1GXgy29FAdD", "tRfpodoZBQ", "MELHWFTs6s", "g7u_qvQDjQl", "aSsQGjEUusA", "FxzZKnef0h7", "xaLxDhr2W5", "klHaBn0psAt", "DLmRm-WWyZ5", "xaLxDhr2W5", "m9h1BC0cSJ0", "MIBgVoNKmeQ", "klHaBn0psAt", "FxzZKnef0h7", "nips_2022_HZ2...
nips_2022_dSJuEcqmEIF
Invariant and Transportable Representations for Anti-Causal Domain Shifts
Real-world classification problems must contend with domain shift, the (potential) mismatch between the domain where a model is deployed and the domain(s) where the training data was gathered. Methods to handle such problems must specify what structure is held in common between the domains and what is allowed to vary. A natural assumption is that causal (structural) relationships are invariant in all domains. Then, it is tempting to learn a predictor for label $Y$ that depends only on its causal parents. However, many real-world problems are ``anti-causal'' in the sense that $Y$ is a cause of the covariates $X$---in this case, $Y$ has no causal parents and the naive causal invariance is useless. In this paper, we study representation learning under a particular notion of domain shift that both respects causal invariance and that naturally handles the ``anti-causal'' structure. We show how to leverage the shared causal structure of the domains to learn a representation that both admits an invariant predictor and that also allows fast adaptation in new domains. The key is to translate causal assumptions into learning principles that disentangle ``invariant'' and ``non-stable'' features. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm.
Accept
In this paper, the authors propose a method for learning invariant representations across multiple domains from the perspective of anti-causal learning. All the reviewers consider this paper clearly written and novel.
train
[ "RV4njSRTu9e", "OrWDMwmo549", "SmnrHgOWdsg", "aB5KJPIhNzK", "hlO2PIkkNa", "yjWzk0MMQnc", "hMR3NtMBFra", "-roHZ163Z-M", "mdrjoFgVAr", "ns3pwSUar0" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the detailed response and revising their manuscript. Authors solve most of my concerns. \n\nI agree with author's response for identifiability is beyond the scope of this paper. But still encourage authors to discuss it in future version, since identifiability is a property t...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "hMR3NtMBFra", "aB5KJPIhNzK", "hlO2PIkkNa", "yjWzk0MMQnc", "ns3pwSUar0", "mdrjoFgVAr", "-roHZ163Z-M", "nips_2022_dSJuEcqmEIF", "nips_2022_dSJuEcqmEIF", "nips_2022_dSJuEcqmEIF" ]
nips_2022_Tean8bBjlbB
Transition to Linearity of General Neural Networks with Directed Acyclic Graph Architecture
In this paper we show that feedforward neural networks corresponding to arbitrary directed acyclic graphs undergo transition to linearity as their ``width'' approaches infinity. The width of these general networks is characterized by the minimum in-degree of their neurons, except for the input and first layers. Our results identify the mathematical structure underlying transition to linearity and generalize a number of recent works aimed at characterizing transition to linearity or constancy of the Neural Tangent Kernel for standard architectures.
Accept
This is a challenging manuscript on which to make a final accept-or-reject recommendation, as there is a clear consensus amongst the reviewers that the manuscript is borderline between accept and reject. The main concern is the incremental nature of the results, which extend the prior results of [A] Liu, C., Zhu, L., & Belkin, M. (2022). Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. Applied and Computational Harmonic Analysis, 59, 85-116, from fully connected networks to the more general setting of DAGs. The authors and reviewers point out that the DAG setting requires substantial adaptation of the technique used to prove the results in [A], which is the reason I have selected Accept over Reject. That said, the architecture the reviewers are most interested in is CNNs, which, as was pointed out, do not fall within the definition of the DAG and have been removed from the manuscript. Inclusion of shared weights (CNNs) would be a great addition to the manuscript: it would make the case for acceptance clearer and would also make the manuscript more compelling for readers who are otherwise unsure of the benefit of the DAG setting.
val
[ "yN2-Uza4FoU", "bFUcC6zXSi", "9gtJjP8MBlY", "UxGCjI4EW0", "d2mkJE0jqGn", "jQT5wucqosc", "qAOmGmHg2ce", "7-fiwqhNUpKd", "Wk9u4K551rQ", "KvTjKnpAHPW", "F_FXbMVqMcb", "3fNLGp9L5Eu", "pJiipIBRbrV", "0nO0qM2hPv" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. We are glad we clarified your concerns.\n\nAs for the exponent $L^2$ on the radius $R$ dependence, we agree that it is an important question and would like to address it as follows: \n\nIn the current submission, we mainly focused on proving the transition to linearity for DAG architectu...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 2 ]
[ "bFUcC6zXSi", "d2mkJE0jqGn", "qAOmGmHg2ce", "0nO0qM2hPv", "jQT5wucqosc", "pJiipIBRbrV", "7-fiwqhNUpKd", "3fNLGp9L5Eu", "F_FXbMVqMcb", "nips_2022_Tean8bBjlbB", "nips_2022_Tean8bBjlbB", "nips_2022_Tean8bBjlbB", "nips_2022_Tean8bBjlbB", "nips_2022_Tean8bBjlbB" ]
nips_2022_xatjGRWLRO
Self-Supervised Pretraining for Large-Scale Point Clouds
Pretraining on large unlabeled datasets has been proven to improve downstream task performance on many computer vision tasks, such as 2D object detection and video classification. However, for large-scale 3D scenes, such as outdoor LiDAR point clouds, pretraining is not widely used. Due to the special data characteristics of large 3D point clouds, 2D pretraining frameworks tend not to generalize well. In this paper, we propose a new self-supervised pretraining method that targets large-scale 3D scenes. We pretrain commonly used point-based and voxel-based model architectures and show the transfer learning performance on 3D object detection and also semantic segmentation. We demonstrate the effectiveness of our approach on both dense 3D indoor point clouds and sparse outdoor LiDAR point clouds.
Accept
All three reviewers recommend acceptance after rebuttal, although they were not very confident. I also read the paper and agree that it is well written, has good results, and addresses a gap in the large-scale point cloud learning literature. However, I also see that the paper selectively builds upon ideas from other domains, e.g. vision [19,3], while ignoring the closest related work that also does self-supervised learning on images of large scenes, likewise using local or local+global contrastive learning. Highly related papers include: - Contrastive learning of global and local features for medical image segmentation with limited annotations. Chaitanya et al. - Efficient Visual Pretraining with Contrastive Detection. Henaff et al. - Point-Level Region Contrast for Object Detection Pre-Training. Bai et al. - SegContrast: 3D Point Cloud Feature Representation Learning Through Self-Supervised Segment Discrimination. Nunes et al. I strongly suggest explaining the differences from these papers. Ideally, the first one should be used as a baseline to really show whether the proposed method is a particularly good fit for point cloud data.
val
[ "VdsM8iup3cel", "8LVVGn6048i", "nORZqXUoy59", "wMzRe0BpA2i", "eqlU-Wzij3L", "gK9uUGKyXsV", "yJh9R7Wc2Iv" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The response clarified some of my questions. I have raised my initial score accordingly.", " Thank you for your insightful review! Please find below our responses to your questions and concerns.\n\n**Q1: As mentioned by the authors, the information in outdoor lidar scans is very sparse, \"important foreground o...
[ -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, 4, 3, 2 ]
[ "8LVVGn6048i", "yJh9R7Wc2Iv", "gK9uUGKyXsV", "eqlU-Wzij3L", "nips_2022_xatjGRWLRO", "nips_2022_xatjGRWLRO", "nips_2022_xatjGRWLRO" ]
nips_2022_X82LFUs6g5Z
Cooperative Distribution Alignment via JSD Upper Bound
Unsupervised distribution alignment estimates a transformation that maps two or more source distributions to a shared aligned distribution given only samples from each distribution. This task has many applications including generative modeling, unsupervised domain adaptation, and socially aware learning. Most prior works use adversarial learning (i.e., min-max optimization), which can be challenging to optimize and evaluate. A few recent works explore non-adversarial flow-based (i.e., invertible) approaches, but they lack a unified perspective and are limited in efficiently aligning multiple distributions. Therefore, we propose to unify and generalize previous flow-based approaches under a single non-adversarial framework, which we prove is equivalent to minimizing an upper bound on the Jensen-Shannon Divergence (JSD). Importantly, our problem reduces to a min-min, i.e., cooperative, problem and can provide a natural evaluation metric for unsupervised distribution alignment. We show empirical results on both simulated and real-world datasets to demonstrate the benefits of our approach. Code is available at https://github.com/inouye-lab/alignment-upper-bound.
Accept
This paper addresses the problem of the alignment of multiple distributions. The authors propose a new method for distribution alignment based on invertible flow models and a new training objective that gives an upper bound on the generalized Jensen-Shannon divergence (GJSD) between the distributions. The authors derive a variational upper bound on the GJSD and convert this upper bound to a loss function. They prove that global minima of the objective correspond to invertible mappings that align the distributions and conduct experiments on a few toy datasets. Most of the reviewers recognize the motivation and effort of the paper. During the discussion, the authors also successfully addressed some of the reviewers' questions. However, concerns about the novelty as well as the lack of in-the-wild evaluation remain. Reviewers acknowledge that the problem of domain alignment using normalizing flows is hard, and this paper does not make any significant breakthroughs in this matter. However, it seems the paper has made a small but measurable step towards solving it, which is good enough to be recommended for acceptance.
train
[ "x4SeDbTxXL", "zuTb9sdRFBf", "XAKkPo76Tgj", "G_Fajc06Vy_", "Df3skxbrUPw", "k5XJD1u-D8j", "xgFe7rKY2cy", "msBJoIeqj_G", "XWopJIuW6JW", "hBTHkDHKDlM", "M_N2ShHBKtE", "yqP7nVyJqUe", "Tx8ctr8gB2k", "MnKc3HOoLOh", "sKUM22G-g1", "a9yBnxcAmZ2", "qRouVqQNc6j", "qRAJEXw0V0t", "tbq14XPRI1q...
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " We just finish revising the manuscript with more detailed arguments on the \"Relationship to Prior Works\" section from our discussions above along with suggestions from other reviewers. We also include our additional DA experiment in the supplementary material. \n\nIn addition, we add the vanishing gradient disc...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "XAKkPo76Tgj", "M_N2ShHBKtE", "yqP7nVyJqUe", "nips_2022_X82LFUs6g5Z", "XWopJIuW6JW", "a9yBnxcAmZ2", "yqP7nVyJqUe", "M_N2ShHBKtE", "MnKc3HOoLOh", "nips_2022_X82LFUs6g5Z", "qRouVqQNc6j", "qRAJEXw0V0t", "tbq14XPRI1q", "tbq14XPRI1q", "UWIEWq5UJbI", "UWIEWq5UJbI", "nips_2022_X82LFUs6g5Z",...
nips_2022_3s9IrEsjLyk
Diffusion-LM Improves Controllable Text Generation
Controlling the behavior of language models (LMs) without re-training is a major open problem in natural language generation. While recent works have demonstrated successes on controlling simple sentence attributes (e.g., sentiment), there has been little progress on complex, fine-grained controls (e.g., syntactic structure). To address this challenge, we develop a new non-autoregressive language model based on continuous diffusions that we call Diffusion-LM. Building upon the recent successes of diffusion models in continuous domains, Diffusion-LM iteratively denoises a sequence of Gaussian vectors into word vectors, yielding a sequence of intermediate latent variables. The continuous, hierarchical nature of these intermediate variables enables a simple gradient-based algorithm to perform complex, controllable generation tasks. We demonstrate successful control of Diffusion-LM for six challenging fine-grained control tasks, significantly outperforming prior work.
Accept
This paper introduces a new non-autoregressive language model based on continuous diffusion models, which have been quite successful in the vision area. All reviewers agree that the method is novel, the paper is well-written, and the experiments are convincing. Although there are concerns about efficiency, this paper is worthy of being accepted to NeurIPS.
train
[ "uTKLFLhiQwr", "6fZJq0Wzwzp", "Wpb-rBYBQ2sH", "tNkxF454TDl", "2gO8O6X3lx2", "ldIt-X7bA4E", "sLvzcIlEAJwP", "kpVfh0DxGxz", "23EDFY0qvCD", "Ndth2hOoKO", "Sn1JT7ucCHN", "_AawjCGFUu2" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing my concerns and including the discussion on possible scaling directions. ", " Thank you for your time and effort in addressing my major concerns. I enjoy reading this paper. \n\nBest,\n\nReviewer C1jB", " We thank all the reviewers for their detailed feedback. We have incorporated su...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "2gO8O6X3lx2", "ldIt-X7bA4E", "nips_2022_3s9IrEsjLyk", "_AawjCGFUu2", "Sn1JT7ucCHN", "Ndth2hOoKO", "kpVfh0DxGxz", "23EDFY0qvCD", "nips_2022_3s9IrEsjLyk", "nips_2022_3s9IrEsjLyk", "nips_2022_3s9IrEsjLyk", "nips_2022_3s9IrEsjLyk" ]
nips_2022_JpZ5du_Kdh
The Stability-Efficiency Dilemma: Investigating Sequence Length Warmup for Training GPT Models
Recent works have demonstrated great success in pre-training large-scale autoregressive language models (e.g., GPT-3) on massive GPUs. To reduce the wall-clock training time, a common practice is to increase the batch size and learning rate. However, such practice is often brittle and leads to a so-called stability-efficiency dilemma: increasing the batch sizes and learning rates leads to better training efficiency but can also result in training instability, leading to poor generalization accuracy or failed runs. To better understand this phenomenon, we conduct an in-depth analysis on large-scale pre-training experiments replicating the GPT-2 model with a public dataset. We find that there is a strong correlation between training instability and extreme values of gradient variance. We further identify that samples with long sequence lengths contribute to these extreme gradient variance values, especially at the beginning of the training, indicating that long sequence length can be a main source of training instability. Based on the analysis, we present a simple yet effective Sequence Length Warmup method that aims to solve the training stability-efficiency dilemma by avoiding extreme gradient variance values. Moreover, we present a lightweight tuning strategy that allows us to tune our method with just a small portion of the expensive full training. Experiments replicating GPT-2 models (117M and 1.5B) show that our approach enables stable training with 8x larger batch size and 4x larger learning rate, whereas the baseline approach struggles with training instability. To achieve the same or better zero-shot evaluation results, our method reduces the required number of training tokens and wall clock time by up to 2.2x and 3.7x, respectively. Experiments replicating a GPT-3 model (125M) show that our approach enables stable training with 8x larger batch size and 40x larger learning rate, and retains 99\% of the zero-shot accuracy on 11 tasks using 10x less data and 17x less time compared to the original GPT-3 training recipe, while the baseline diverges under the same settings and only retains 95\% of the accuracy under a lower learning rate.
Accept
# Summary of the Paper This paper makes two contributions as far as I can tell: 1. The paper attempts to characterize training instability in large language models. 2. The paper presents Sequence Length Warmup (SLW), a technique that the authors claim to reduce training instability and that empirically makes training LLMs much more efficient. # Metareview The paper isn't perfect, but it's an obvious accept. Even in the most pessimistic assessment of the paper, the SLW technique is a clear win for efficient training regardless of how harshly you judge the analysis of training instability. The biggest weaknesses of the paper are (1) all of the methodological and scientific questions around trying to grapple with the phenomena around training stability and (2) claims of a connection between SLW and training instability. On the topic of instability in large model training (1), there has been plenty of griping on Twitter but very little scientific analysis of the phenomenon. This paper's analysis will hardly be the last word on that topic, but it's a reasonable first attempt at something that other researchers will doubtlessly build on and hone over the coming years. I share Reviewer c8jb's concerns that correlation with gradient norms is insufficient to make claims about training stability or SLW and that more rigorous definitions of training stability are necessary. However, even if, in the very worst-case scenario, this paper is one of the first attempts at characterizing a vexing phenomenon and inspires future researchers to tear apart the modes of analysis and findings in this paper to improve upon them, this paper will have been a worthwhile contribution to the scholarly literature. On the topic of claims of a connection between SLW and training instability (with the concerns most poignantly expressed by Reviewer S5Qt), this is really a byproduct of the scientific analysis of training instability. This connection (or lack thereof) will become clearer as time goes on and the science improves, and I simply ask that the authors acknowledge the uncertainty here. **If the above scenarios are the very worst possible outcome for this paper, it's still a major contribution to both science and practice. I therefore advocate for accepting this paper, and the reviewers seem to agree with that assessment (both the good and the bad). I urge the authors to prepare the camera-ready version of the paper by carefully incorporating the feedback of the reviewers and, in particular, Reviewer c8jb, who had some very thoughtful comments about ways to clarify the scientific aspects of the paper. To satisfy the reviewers and future readers, I highly recommend openly and frequently acknowledging the vast uncertainty we have about the scientific aspects of this paper as we strive to make sense of this strange training instability phenomenon.** The reviewers were enthusiastic and engaged, and the discussion was lively, which suggests to me that this is an exciting paper that deserves to be featured to the community via publication at NeurIPS. There were some other common comments that I urge the authors to address in order to produce the best and most influential possible version of this paper: * "The reason why instabilities lead to worse performance when they do not cause divergence was not discussed much.", "The reason why using longer sequences creates instability was not discussed much." The authors discussed this a bit during the discussion period, but it's worth emphasizing these as open questions so that other researchers know it's important to follow up on. * It's worth being crystal clear about truncation vs. just focusing on shorter sequences. It's an important experimental detail that may seem surprising or counterintuitive if it's not clearly laid out.
test
[ "ewsDpkAZ6J4", "FN_ErY8jke1", "Bg5NjHgXcSj", "H-zE9HbaJHf", "gsIcWy_-52", "vMofZOjPQMWK", "0fI0jwUzgD", "NTYiRt2Kdh9", "RR4tPCmCbn", "5vQjrElpVaP", "ZKep_KVWg-", "NdEOPgLFpvv", "lWU6G-an0pr" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for clarifying the \"pre-warmup quantities\". The learning rate warmup for GPT-2 pre-training is the first 3K steps, which is the first 1%/8% of training under batch size 512/4K. This phase does have some distinct properties, for example the gradient variance norm/max element continuously increase durin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 3, 4 ]
[ "FN_ErY8jke1", "vMofZOjPQMWK", "lWU6G-an0pr", "lWU6G-an0pr", "NdEOPgLFpvv", "NdEOPgLFpvv", "ZKep_KVWg-", "ZKep_KVWg-", "5vQjrElpVaP", "nips_2022_JpZ5du_Kdh", "nips_2022_JpZ5du_Kdh", "nips_2022_JpZ5du_Kdh", "nips_2022_JpZ5du_Kdh" ]
nips_2022_jdJo1HIVinI
Mixture-of-Experts with Expert Choice Routing
Sparsely-activated Mixture-of-experts (MoE) models allow the number of parameters to greatly increase while keeping the amount of computation for a given token or a given sample unchanged. However, a poor expert routing strategy (e.g. one resulting in load imbalance) can cause certain experts to be under-trained, leading to an expert being under or over-specialized. Prior work allocates a fixed number of experts to each token using a top-k function regardless of the relative importance of different tokens. To address this, we propose a heterogeneous mixture-of-experts employing an expert choice method. Instead of letting tokens select the top-k experts, we have experts selecting the top-k tokens. As a result, each token can be routed to a variable number of experts and each expert can have a fixed bucket size. We systematically study pre-training speedups using the same computational resources of the Switch Transformer top-1 and GShard top-2 gating of prior work and find that our method improves training convergence time by more than 2×. For the same computational cost, our method demonstrates higher performance in fine-tuning 11 selected tasks in the GLUE and SuperGLUE benchmarks. For a smaller activation cost, our method outperforms the T5 dense model in 7 out of the 11 tasks.
Accept
This work introduces a new token routing strategy for MoE models. Instead of allocating a fixed number of experts to each token using a top-k function regardless of the relative importance of different tokens, the proposed strategy adopts a heterogeneous mixture-of-experts employing an expert choice method. The proposed method is simple, training efficient, and empirically effective. The paper is well-written and easy to follow. Overall, it is a good paper.
train
[ "YezJfS_QkE", "DOpaUtvGsxf", "4WI7HiiAOVm", "lNFnEcO2Puj", "lPloDys_qh1", "rd65kv8Tswd", "e2U6FvksgOP", "s6pJdJhOMS", "DQ_YY0luPrP", "RY4rj--ksS", "iW9N73F-mzg", "Xiv47yViofk", "9vWWDFBJGF", "u1Y8wOtF_cY", "J3yXBRk9UAX" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The answers clarified my questions.\nI will keep my rating Accept for the current revised version.\n\nI have just one follow-up question which doesn't necessarily need to be updated in the paper. In some of my experiments, I observed softmax on token dimension worked better. Have you done any experiments with tha...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 4 ]
[ "iW9N73F-mzg", "DQ_YY0luPrP", "lNFnEcO2Puj", "RY4rj--ksS", "rd65kv8Tswd", "s6pJdJhOMS", "nips_2022_jdJo1HIVinI", "J3yXBRk9UAX", "u1Y8wOtF_cY", "9vWWDFBJGF", "Xiv47yViofk", "nips_2022_jdJo1HIVinI", "nips_2022_jdJo1HIVinI", "nips_2022_jdJo1HIVinI", "nips_2022_jdJo1HIVinI" ]
nips_2022_peFP9Pl-6-_
Sampling in Constrained Domains with Orthogonal-Space Variational Gradient Descent
Sampling methods, as important inference and learning techniques, are typically designed for unconstrained domains. However, constraints are ubiquitous in machine learning problems, such as those on safety, fairness, robustness, and many other properties that must be satisfied to apply sampling results in real-life applications. Enforcing these constraints often leads to implicitly-defined manifolds, making efficient sampling with constraints very challenging. In this paper, we propose a new variational framework with a designed orthogonal-space gradient flow (O-Gradient) for sampling on a manifold $\mathcal{G}_0$ defined by general equality constraints. O-Gradient decomposes the gradient into two parts: one decreases the distance to $\mathcal{G}_0$ and the other decreases the KL divergence in the orthogonal space. While most existing manifold sampling methods require initialization on $\mathcal{G}_0$, O-Gradient does not require such prior knowledge. We prove that O-Gradient converges to the target constrained distribution at rate $\widetilde{O}(1/T)$, where $T$ is the number of iterations, under mild conditions. Our proof relies on a new Stein characterization of conditional measure which could be of independent interest. We implement O-Gradient through both Langevin dynamics and Stein variational gradient descent and demonstrate its effectiveness in various experiments, including Bayesian deep neural networks.
Accept
All reviewers recommend accepting the paper, to various levels of enthusiasm. When preparing the final version, please take the following considerations into account: - Several of the reviewers pointed out that the paper was unclear/sloppy in places, and that it was not written in a way that is accessible to a general ML audience. Take some time to fix this; people are more likely to read/appreciate/cite/build-on your work if it is written in an accessible way, with clear motivation (understandable beyond people in a subfield), and the steps in the analysis are laid out clearly in an easy-to-understand way.
train
[ "WEtuB4P3Vq", "NWzci1csdRA", "w6HkqBneTO", "oJ63SWDa1km", "ob24dc1aI54", "3wia-tyAEan", "sy8KHQAGUXP", "YQsyk2mOuZz", "NprasKLXOyS", "i3MZV_wxNNL", "1vFEYrqpefz", "nkB2f92wDU" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply and for raising your score. We will be sure to include the above information in the main text of the revision. ", " We are glad that you are happy with our changes. Thank you for raising your score on us. Your further suggestions are also very helpful.\n\nLemma A.1: We have implemented ...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 4, 4 ]
[ "w6HkqBneTO", "oJ63SWDa1km", "ob24dc1aI54", "3wia-tyAEan", "nkB2f92wDU", "1vFEYrqpefz", "i3MZV_wxNNL", "NprasKLXOyS", "nips_2022_peFP9Pl-6-_", "nips_2022_peFP9Pl-6-_", "nips_2022_peFP9Pl-6-_", "nips_2022_peFP9Pl-6-_" ]
nips_2022_HvJC_KsSx8S
Precise Learning Curves and Higher-Order Scalings for Dot-product Kernel Regression
As modern machine learning models continue to advance the computational frontier, it has become increasingly important to develop precise estimates for expected performance improvements under different model and data scaling regimes. Currently, theoretical understanding of the learning curves that characterize how the prediction error depends on the number of samples is restricted to either large-sample asymptotics ($m\to\infty$) or, for certain simple data distributions, to the high-dimensional asymptotics in which the number of samples scales linearly with the dimension ($m\propto d$). There is a wide gulf between these two regimes, including all higher-order scaling relations $m\propto d^r$, which are the subject of the present paper. We focus on the problem of kernel ridge regression for dot-product kernels and present precise formulas for the mean of the test error, bias, and variance, for data drawn uniformly from the sphere with isotropic random labels in the $r$th-order asymptotic scaling regime $m\to\infty$ with $m/d^r$ held constant. We observe a peak in the learning curve whenever $m \approx d^r/r!$ for any integer $r$, leading to multiple sample-wise descent and nontrivial behavior at multiple scales.
Accept
The paper studies kernel ridge regression and characterizes its performance theoretically; the results are interesting and highly relevant to machine learning venues. The paper is technically sound and the authors have done a good job in the rebuttal period. The paper is worth publishing at NeurIPS.
train
[ "-GTLIVrDsGp", "KzuZ5oSoA8v", "_FlxNItg_5q", "mjfy5za9tUV", "MBM5eZOEJfu", "4-XD_3GAqPh", "782Ero-YJRM", "-xbMKtHjW6-", "kVEeuubW7ji", "GTBQccVvtpUW", "Lm350KeoW_-", "8RrwheWhfDM", "TDSQZeEZcLE", "69as4uBeuL6", "cOOOnCFZLH0", "k40j6A2wuig" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nThanks for the clarifications and the response.\n\n* I agree that the results can readily apply to the case of NTKs of networks with dense layers, but as the authors suggested, the kernels corresponding to networks involving more complex layers like convolution is not subject to rigorous analysis...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 2, 3, 4 ]
[ "KzuZ5oSoA8v", "_FlxNItg_5q", "kVEeuubW7ji", "MBM5eZOEJfu", "-xbMKtHjW6-", "cOOOnCFZLH0", "69as4uBeuL6", "k40j6A2wuig", "TDSQZeEZcLE", "8RrwheWhfDM", "nips_2022_HvJC_KsSx8S", "nips_2022_HvJC_KsSx8S", "nips_2022_HvJC_KsSx8S", "nips_2022_HvJC_KsSx8S", "nips_2022_HvJC_KsSx8S", "nips_2022_...
nips_2022_l7aekTjF6CO
Is Sortition Both Representative and Fair?
Sortition is a form of democracy built on random selection of representatives. Two of the key arguments in favor of sortition are that it provides representation (a random panel reflects the composition of the population) and fairness (everyone has a chance to participate). Uniformly random selection is perfectly fair, but is it representative? Towards answering this question, we introduce the notion of a representation metric on the space of individuals, and assume that the cost of an individual for a panel is determined by the $q$-th closest representative; the representation of a (random) panel is measured by the ratio between the (expected) sum of costs of the optimal panel for the individuals and that of the given panel. For $k/2 < q \le k-\Omega(k)$, where $k$ is the panel size, we show that uniform random selection is indeed representative by establishing a constant lower bound on this ratio. By contrast, for $q \leq k/2$, no random selection algorithm that is almost fair can give such a guarantee. We therefore consider relaxed fairness guarantees and develop a new random selection algorithm that sheds light on the tradeoff between representation and fairness.
Accept
Reviewers like the problem of fair sortition for its importance as well as its fit for NeurIPS. Reviewers also liked the solid theoretical results. Minor concerns were raised about the technical depth and uninformative experiments, but the overall sentiment is quite positive.
train
[ "1eflUGta1Iv", "Pkt5Zy-pKsf", "sX30gZIX5Jq", "6daZj7uDhWG", "5V76cXFeRpC", "-sRlGB6VM-1", "09vr4ECAf3p7", "MOYgn8tciTM", "D5Fnuivvxf2", "q1VmeoMfFjE" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate your attention to our rebuttal and revision.", " We greatly appreciate your attention to our rebuttal and revision.", " Thank you for the answer. While I remain somewhat unenthusiastic, I certainly appreciate the revision and I am adjusting my scores.", " I greatly appreciate the effor...
[ -1, -1, -1, -1, -1, -1, -1, 5, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "6daZj7uDhWG", "sX30gZIX5Jq", "09vr4ECAf3p7", "5V76cXFeRpC", "q1VmeoMfFjE", "D5Fnuivvxf2", "MOYgn8tciTM", "nips_2022_l7aekTjF6CO", "nips_2022_l7aekTjF6CO", "nips_2022_l7aekTjF6CO" ]
nips_2022_pIYYJflkhZ
SoftPatch: Unsupervised Anomaly Detection with Noisy Data
Although mainstream unsupervised anomaly detection (AD) algorithms perform well on academic datasets, their performance is limited in practical applications due to the ideal experimental setting of clean training data. Training with noisy data is an inevitable problem in real-world anomaly detection but is seldom discussed. This paper considers label-level noise in image sensory anomaly detection for the first time. To solve this problem, we propose a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level. Noise discriminators are utilized to generate outlier scores for patch-level noise elimination before coreset construction. The scores are then stored in the memory bank to soften the anomaly detection boundary. Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset. Comprehensive experiments in various noise scenes demonstrate that SoftPatch outperforms the state-of-the-art AD methods on the MVTecAD and BTAD benchmarks and is comparable to those methods under the setting without noise.
Accept
The paper proposes a memory-based unsupervised anomaly detection approach that efficiently denoises the data at the patch level. The paper has quite diverse evaluations. Some reviewers are concerned that the novelty of the proposed approach is low and that some of the experiments are inappropriate. On the other hand, other reviewers appreciate that the proposed approach is reasonable and solves a practical problem. Since the paper could be a pioneering work in the field, I recommend accepting the paper.
train
[ "rLK3EqzBk3S", "dn7GGlXYJkI", "kv53-0BI-tu", "rDoLR10HutE", "mFrcHRx4ta_", "MNiYkDWIKRl", "qd63eDb0J0-", "JAUhXK_dVm3", "wp0YpEvofBG", "7BUtE-Qroad", "u9MVVmxfjlt" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer MPDh,\n\nSince the period of discussion ends soon and we would like to be able to respond to any remaining concerns you may have, we would like to kindly ask you to let us know if there is anything we could further clarify.\n\nSince novelty is the main concern in your review, we want to discuss **Q2...
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "mFrcHRx4ta_", "MNiYkDWIKRl", "u9MVVmxfjlt", "7BUtE-Qroad", "wp0YpEvofBG", "JAUhXK_dVm3", "JAUhXK_dVm3", "nips_2022_pIYYJflkhZ", "nips_2022_pIYYJflkhZ", "nips_2022_pIYYJflkhZ", "nips_2022_pIYYJflkhZ" ]
nips_2022_052QkenIdSI
Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials
A recent goal in the theory of deep learning is to identify how neural networks can escape the “lazy training,” or Neural Tangent Kernel (NTK) regime, where the network is coupled with its first order Taylor expansion at initialization. While the NTK is minimax optimal for learning dense polynomials (Ghorbani et al, 2021), it cannot learn features, and hence has poor sample complexity for learning many classes of functions including sparse polynomials. Recent works have thus aimed to identify settings where gradient based algorithms provably generalize better than the NTK. One such example is the “QuadNTK” approach of Bai & Lee (2020), which analyzes the second-order term in the Taylor expansion. Bai & Lee (2020) show that the second-order term can learn sparse polynomials efficiently; however, it sacrifices the ability to learn general dense polynomials. In this paper, we analyze how gradient descent on a two-layer neural network can escape the NTK regime by utilizing a spectral characterization of the NTK (Montanari & Zhong, 2020) and building on the QuadNTK approach. We first expand upon the spectral analysis to identify “good” directions in parameter space in which we can move without harming generalization. Next, we show that a wide two-layer neural network can jointly use the NTK and QuadNTK to fit target functions consisting of a dense low-degree term and a sparse high-degree term -- something neither the NTK nor the QuadNTK can do on their own. Finally, we construct a regularizer which encourages the parameter vector to move in the “good" directions, and show that gradient descent on the regularized loss will converge to a global minimizer, which also has low test error. This yields an end to end convergence and generalization guarantee with provable sample complexity improvement over both the NTK and QuadNTK on their own.
Accept
This paper studies the learning dynamics of two-layer neural networks beyond the NTK regime for learning low-degree plus sparse polynomials. The author response and discussion have addressed most of the reviewers’ questions and concerns. While some reviewers think the polynomial regression setting is a bit limited, all the reviewers agree that the results are interesting and significant. Therefore, I recommend acceptance.
train
[ "h3AeTCrx5p", "qgY4a31hE57", "zA3GyYzU56e", "ec1MsHzGWU", "6stGdpkYcMB", "eh4zPYONs0E", "fHSuf3P5Izc", "BpCaK2Q-GZ1", "ixCwUzRyyJ", "mGJjauhtWp3", "C8Ix0FEFgP", "YwBBjUE10Oj", "Kfd6QxTzW23", "exTXLk1PEB" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " 1) Thank you very much for performing this additional experiment. This makes perfect sense to me now and nicely illustrates that the (at first sight artificial) task of studying sparse + dense polynomial regression is very interesting since QuadNTK and NTK both cannot learn it while finite NNs can. This convinces...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "ec1MsHzGWU", "nips_2022_052QkenIdSI", "6stGdpkYcMB", "fHSuf3P5Izc", "BpCaK2Q-GZ1", "C8Ix0FEFgP", "ixCwUzRyyJ", "exTXLk1PEB", "mGJjauhtWp3", "Kfd6QxTzW23", "YwBBjUE10Oj", "nips_2022_052QkenIdSI", "nips_2022_052QkenIdSI", "nips_2022_052QkenIdSI" ]
nips_2022_M34VHvEU4NZ
Weighted Distillation with Unlabeled Examples
Distillation with unlabeled examples is a popular and powerful method for training deep neural networks in settings where the amount of labeled data is limited: A large “teacher” neural network is trained on the labeled data available, and then it is used to generate labels on an unlabeled dataset (typically much larger in size). These labels are then utilized to train the smaller “student” model which will actually be deployed. Naturally, the success of the approach depends on the quality of the teacher’s labels, since the student could be confused if trained on inaccurate data. This paper proposes a principled approach for addressing this issue based on a “debiasing" reweighting of the student’s loss function tailored to the distillation training paradigm. Our method is hyper-parameter free, data-agnostic, and simple to implement. We demonstrate significant improvements on popular academic datasets and we accompany our results with a theoretical analysis which rigorously justifies the performance of our method in certain settings.
Accept
This paper proposes to learn to reweight data samples in the distillation process to deal with potentially noisy labels from the teacher. The writing is clear, and the empirical and theoretical results are satisfactory in general. Initially, the paper received a mixture of positive and negative scores. The reviewers raised a number of questions, mostly about empirical comparisons with related works. The authors did a good job in the rebuttal by providing many of the comparisons requested by the reviewers, and the rebuttal resolved most of the reviewers' concerns. In the end, the strong supporter remained strongly supportive, and the most negative reviewer agreed that his concerns were also resolved, urging the authors to carefully revise the paper by incorporating all the results from the rebuttal. I stand by the reviewers. Authors, please make sure to incorporate the extra results and clarifications into the final revision.
train
[ "qeAAqGsJFCl", "0F04nbCh6te", "F9YY0QsA2eB", "hxrjDhhLlrG", "UvkgsZKbpEYf", "njToqJPjBeX", "go1-byI24o-M", "8gI3fHtOCWD", "cCEjeaxJzlN", "5BmyG_yo0YC", "6JXeoFhQ6Ow", "8rFX-gVp5E", "zzxiqp_LHU6", "zcK13qm9Ql", "XLMv228DvG", "RXc2SE7dX1-", "QGflXliYrZ", "Z75DP7x0gGm", "5DoQSBadcgL...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nsince the end of discussion period is approaching, we would like to ask you whether our response helped in clarifying things and/or whether you have any other questions.\n\nOnce again, thank you for your time and effort.\n\nBest Regards,\n\n The authors", " We greatly appreciate your feedback ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 2 ]
[ "8gI3fHtOCWD", "F9YY0QsA2eB", "8rFX-gVp5E", "5BmyG_yo0YC", "njToqJPjBeX", "cCEjeaxJzlN", "nips_2022_M34VHvEU4NZ", "5DoQSBadcgL", "Z75DP7x0gGm", "6JXeoFhQ6Ow", "QGflXliYrZ", "zzxiqp_LHU6", "RXc2SE7dX1-", "XLMv228DvG", "nips_2022_M34VHvEU4NZ", "nips_2022_M34VHvEU4NZ", "nips_2022_M34VHv...
nips_2022_Ix37FJYDkBp
SemMAE: Semantic-Guided Masking for Learning Masked Autoencoders
Recently, significant progress has been made in masked image modeling to catch up to masked language modeling. However, unlike words in NLP, the lack of semantic decomposition of images still makes masked autoencoding (MAE) different between vision and language. In this paper, we explore a potential visual analogue of words, i.e., semantic parts, and we integrate semantic information into the training process of MAE by proposing a Semantic-Guided Masking strategy. Compared to widely adopted random masking, our masking strategy can gradually guide the network to learn various information, i.e., from intra-part patterns to inter-part relations. In particular, we achieve this in two steps. 1) Semantic part learning: we design a self-supervised part learning method to obtain semantic parts by leveraging and refining the multi-head attention of a ViT-based encoder. 2) Semantic-guided MAE (SemMAE) training: we design a masking strategy that varies from masking a portion of patches in each part to masking a portion of (whole) parts in an image. Extensive experiments on various vision tasks show that SemMAE can learn better image representation by integrating semantic information. In particular, SemMAE achieves 84.5% fine-tuning accuracy on ImageNet-1k, which outperforms the vanilla MAE by 1.4%. In the semantic segmentation and fine-grained recognition tasks, SemMAE also brings significant improvements and yields the state-of-the-art performance.
Accept
Authors present a method attempting to perform Masked Auto-Encoding (MAE) using semantic knowledge, to try to better approximate the semantic MAE seen in the language domain. To do this, they leverage an iBOT framework and add some embeddings of the class token to create "part tokens", which are then compared to patch tokens from iBOT to produce attention maps. The objective for this process is a StyleGAN-based image reconstruction. Once the part attention map training is done, the network is then used to guide semantic part masking based on the generated attention maps, for semantic MAE. SemMAE-pretrained networks are then compared against other forms of SSL pretraining on ImageNet-1k, iNa, CUB, Cars, and ADE-20K, demonstrating improvements in all domains. Pros: - [R] Idea is interesting / novel - [R] Well written - [AC/R] Results improve over baselines Cons: - [AC/R] Pipeline is complicated. - [R] What about starting from random MAE and then adapting based on parts knowledge? Authors respond that this is future work. - [AC/R] Not convinced parts are a visual analog of words. Authors provide benchmark improvements in performance, and qualitative visualization of the parts. However, there is no quantitative assessment of the parts and whether they have true semantic meaning. - [AC/R] Some improvements are marginal. Authors respond that although marginal in some cases, they are consistent. - [R] Paper does not discuss further how to deal with background. Authors respond that this is future work. Overall, all reviewers have changed their assessments to accept, including the one reject reviewer. AC recommends accept, though it would be preferable if a quantitative assessment of the quality of the semantic parts (for example, via segmentation masks) could be provided. AC Rating: Accept
train
[ "aVnwLBVL5NS", "stNJtN8T6KQ", "L487gTybEC3", "vkRS7DyiBL", "-rNNMZUlGwk", "KejWhP0gcJ5b", "X8ZyOQbY_Xf", "2L8gcfeIiTI", "Yhdmh_xC1WH", "WzS1vJHIS-s" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' reply.\n\nI would keep my original rating as accept (7)\n\n\nBest,", " Dear Reviewer, please remember to respond to the author's rebuttal. Thank you.", " Dear Reviewer KRLr,\n\nWe appreciate your time and valuable comments on our paper. Based on your comments, we did our best to answer...
[ -1, -1, -1, -1, -1, -1, -1, 7, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "2L8gcfeIiTI", "-rNNMZUlGwk", "Yhdmh_xC1WH", "WzS1vJHIS-s", "KejWhP0gcJ5b", "Yhdmh_xC1WH", "2L8gcfeIiTI", "nips_2022_Ix37FJYDkBp", "nips_2022_Ix37FJYDkBp", "nips_2022_Ix37FJYDkBp" ]
nips_2022_bQCOA4dq_T
Counterfactual Neural Temporal Point Process for Estimating Causal Influence of Misinformation on Social Media
Recent years have witnessed the rise of misinformation campaigns that spread specific narratives on social media to manipulate public opinions in different areas, such as politics and healthcare. Consequently, an effective and efficient automatic methodology to estimate the influence of misinformation on user beliefs and activities is needed. However, existing works on misinformation impact estimation either rely on small-scale psychological experiments or can only discover the correlation between user behaviour and misinformation. To address these issues, in this paper, we build a causal framework that models the causal effect of misinformation from the perspective of temporal point processes. To adapt to large-scale data, we design an efficient yet precise way to estimate the \textbf{Individual Treatment Effect} (ITE) via neural temporal point processes and Gaussian mixture models. Extensive experiments on a synthetic dataset verify the effectiveness and efficiency of our model. We further apply our model to a real-world dataset of social media posts and engagements about COVID-19 vaccines. The experimental results indicate that our model recognizes an identifiable causal effect of misinformation that hurts people's subjective emotions toward the vaccines.
Accept
This work addresses the interesting problem of measuring the causal effect of exposure to misinformation from observational data culled from social media. Reviewers agreed on the importance and timeliness of this problem. The motivation for causal analysis here is also readily apparent (SxLq), and the proposed point process model seems novel (at least in its application here) and appropriate. The main weaknesses raised concerned presentation issues (5Zb3,vdd1) and some concerns about experimental details and setup (vdd1,SxLq). The former constitute relatively minor issues which can be readily addressed in revision, and the latter issues were mostly satisfactorily addressed during the response period. Reviewer 5Zb3 raises a concern about the sentiment model used as part of the evaluation, which the authors should discuss in future iterations of the work. Overall, while the evaluation suggests only modest empirical gains over baselines (which were themselves introduced in this work, for want of alternative existing methods), the work is novel in its investigation of causal methods for understanding the potential influence of misinformation on social media.
train
[ "mXnN6FAwvOR", "sAGLERavvAL", "G4l7B7SFpgL", "3pOcBopSLl", "mn9wNqv7J8", "RZ84S0ai6PM", "ZKBJI45xyC", "cRZ-8uBVJuZ", "eayLuhqLzpb", "x_ZYZMXivVb", "G1OT5n9i9pX", "xhEM_adEubt", "3hMbm0tjStQ4", "sV6b_pDkfWC", "Jl61yiAyAWl_", "70L3qA3q11K", "QxJqHYu9niP", "5YoWHqdwens" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We value your comments and hopefully we can be more specific in discussing the key points that are confusing to you, which we believe would be definitely helpful for our paper!\n\nWe used textblob, a widely-used general-purpose sentiment analysis package in Python. Since we do not have the ground-truth of the sub...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "sAGLERavvAL", "x_ZYZMXivVb", "cRZ-8uBVJuZ", "RZ84S0ai6PM", "ZKBJI45xyC", "G1OT5n9i9pX", "5YoWHqdwens", "3hMbm0tjStQ4", "nips_2022_bQCOA4dq_T", "70L3qA3q11K", "xhEM_adEubt", "Jl61yiAyAWl_", "sV6b_pDkfWC", "QxJqHYu9niP", "5YoWHqdwens", "nips_2022_bQCOA4dq_T", "nips_2022_bQCOA4dq_T", ...
nips_2022_ZEQ5Gf8DiD
Compositional Generalization in Unsupervised Compositional Representation Learning: A Study on Disentanglement and Emergent Language
Deep learning models struggle with compositional generalization, i.e. the ability to recognize or generate novel combinations of observed elementary concepts. In hopes of enabling compositional generalization, various unsupervised learning algorithms have been proposed with inductive biases that aim to induce compositional structure in learned representations (e.g. disentangled representation and emergent language learning). In this work, we evaluate these unsupervised learning algorithms in terms of how well they enable \textit{compositional generalization}. Specifically, our evaluation protocol focuses on whether or not it is easy to train a simple model on top of the learned representation that generalizes to new combinations of compositional factors. We systematically study three unsupervised representation learning algorithms - $\beta$-VAE, $\beta$-TCVAE, and emergent language (EL) autoencoders - on two datasets that allow directly testing compositional generalization. We find that directly using the bottleneck representation with simple models and few labels may lead to worse generalization than using representations from layers before or after the learned representation itself. In addition, we find that the previously proposed metrics for evaluating the levels of compositionality are not correlated with actual compositional generalization in our framework. Surprisingly, we find that increasing pressure to produce a disentangled representation (e.g. increasing $\beta$ in the $\beta$-VAE) produces representations with worse generalization, while representations from EL models show strong compositional generalization. 
Motivated by this observation, we further investigate the advantages of using EL to induce compositional structure in unsupervised representation learning, finding that it shows consistently stronger generalization than disentanglement models, especially when using less unlabeled data for unsupervised learning and fewer labels for downstream tasks. Taken together, our results shed new light on the compositional generalization behavior of different unsupervised learning algorithms with a new setting to rigorously test this behavior, and suggest the potential benefits of developing EL learning algorithms for more generalizable representations.
Accept
This paper investigates compositional generalisation (CG) of the representations learned from unsupervised learning through the lens of disentanglement and emergent language. They argue that models that are primed to disentangle do not learn representations that do well on CG, and that models of emergent language do indeed perform well on CG. They further explore the utility of EL by finding improved performance on plain generalisation and in learning from fewer labels. The reviewers agree that the paper tackles an interesting and relevant problem, the question and derived insights are valuable, and the experiments are quite thorough. Where the reviewers raised valid concerns about evaluation, these were addressed by the authors in their rebuttal, along with clarification on separability vs CG and additional experiments. The only concerns that I believe could still be argued here are that questions about the quality of representations could extend to models such as SimCLR/BYOL/MAE, and that it would strengthen the paper a good deal if these were also discussed and compared against. To tighten the implied brief of the paper, it might be useful to rejig the title a bit---it currently comes across as exploring a spectrum between disentanglement and EL, whereas it's likely simpler to just state 'EL shows better CG than disentanglement as an objective' or something similar. Overall, I believe this is a good paper, and it should be accepted.
train
[ "5m-1WInTUn3", "kFVbHztyOEB", "e6jzWIUqG3R", "1QKgNt6ecfiL", "PD9xtr9JK1xA", "dzX81eky624", "XXzS8nwqVt", "ojPOBlmuhGJ", "x6IoH-YQFAF", "CMXxIvO-07", "HNPsC5B6Ivp", "_nbEBNqZM5J", "zaXlrc9KUCK" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi Reviewer mvHF, thanks very much for responding. We appreciate the point that other representation learning methods that do not explicitly target disentanglement or compositional generalization could perform well in our evaluation framework. Given that considering the full spectrum of representation learning te...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 4 ]
[ "e6jzWIUqG3R", "1QKgNt6ecfiL", "PD9xtr9JK1xA", "dzX81eky624", "zaXlrc9KUCK", "_nbEBNqZM5J", "HNPsC5B6Ivp", "HNPsC5B6Ivp", "CMXxIvO-07", "nips_2022_ZEQ5Gf8DiD", "nips_2022_ZEQ5Gf8DiD", "nips_2022_ZEQ5Gf8DiD", "nips_2022_ZEQ5Gf8DiD" ]
nips_2022_jwVZZzzNKkW
Redundancy-Free Message Passing for Graph Neural Networks
Graph Neural Networks (GNNs) resemble the Weisfeiler-Lehman (1-WL) test, which iteratively updates the representation of each node by aggregating information from the WL-tree. However, despite the computational superiority of the iterative aggregation scheme, it introduces redundant message flows to encode nodes. We find that this redundancy in message passing prevents conventional GNNs from propagating the information of long paths and from learning graph similarities. To address this issue, we propose the Redundancy-Free Graph Neural Network (RFGNN), in which the information of each path (of limited length) in the original graph is propagated along a single message flow. Our rigorous theoretical analysis demonstrates the following advantages of RFGNN: (1) RFGNN is strictly more powerful than 1-WL; (2) RFGNN efficiently propagates structural information in the original graphs, avoiding the over-squashing issue; and (3) RFGNN captures subgraphs at multiple levels of granularity, and is more likely to encode graphs with closer graph edit distances into more similar representations. Experimental evaluation on graph-level prediction benchmarks confirms our theoretical assertions, and RFGNN achieves the best results on most datasets.
Accept
This work focuses on a well-known (inductive) bias of message passing: the fact that information from far-away neighbors is diluted and taken less into account than that of nearby ones. The paper proposes an innovative and rational way to overcome the oversquashing bias. It also provides compelling theoretical evidence for the proposed approach's merit. The numerical results are promising, though not sufficiently thorough to establish this as being close to state-of-the-art. Nevertheless, the reviewers agree that the work is valuable to the community and should be published.
train
[ "Y_ldsNt1Y2T", "br0_-lWb2Ew", "ZDRiSQ4rE_I", "W_Eydah_PD4", "-TypFGM64Ye", "xh5zYt2VFUSr", "yLMApD2sQe", "DnLzv52_nzy", "s0jy-3nd6mH", "VJgU3PMJyjj", "Ba73zsiovfH" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After reading the authors' responses, I agree that the KDD paper focuses on another issue. If the page limit is enough, I would suggest briefly discriminating the work. Besides, given the updated results in Table 1 and the theoretical contribution of the work, I'm satisfied and have raised my rating.", " Thank ...
[ -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "xh5zYt2VFUSr", "yLMApD2sQe", "xh5zYt2VFUSr", "Ba73zsiovfH", "DnLzv52_nzy", "s0jy-3nd6mH", "VJgU3PMJyjj", "nips_2022_jwVZZzzNKkW", "nips_2022_jwVZZzzNKkW", "nips_2022_jwVZZzzNKkW", "nips_2022_jwVZZzzNKkW" ]
nips_2022_PIXGY1WgU-S
Few-Shot Audio-Visual Learning of Environment Acoustics
Room impulse response (RIR) functions capture how the surrounding physical environment transforms the sounds heard by a listener, with implications for various applications in AR, VR, and robotics. Whereas traditional methods to estimate RIRs assume dense geometry and/or sound measurements throughout the environment, we explore how to infer RIRs based on a sparse set of images and echoes observed in the space. Towards that goal, we introduce a transformer-based method that uses self-attention to build a rich acoustic context, then predicts RIRs of arbitrary query source-receiver locations through cross-attention. Additionally, we design a novel training objective that improves the match in the acoustic signature between the RIR predictions and the targets. In experiments using a state-of-the-art audio-visual simulator for 3D environments, we demonstrate that our method successfully generates arbitrary RIRs, outperforming state-of-the-art methods and---in a major departure from traditional methods---generalizing to novel environments in a few-shot manner. Project: http://vision.cs.utexas.edu/projects/fs_rir
Accept
Reviewers are in agreement that the paper should be accepted, and the authors were able to address concerns leading to an increase in score from one of the reviewers.
train
[ "VsBYq1vS5oj", "zsSy9pYkXPS", "bFCOzqZM0ih", "xfRXEGabR5R", "ADjqJnE-tS2", "MaNjCZtU2l", "ryA09R6-wx7", "yu3tBmqfNS3", "xOCzyZoINDT", "pJTIb0hY9H9" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q3. Can the authors explain the applicability of this precise method? Is there a specific setup where this way of solving the problem is optimal compared to image -> RIR, specifically where you might need to model source/emitter positions with such fine grained accuracy? A more detailed discussion of the advant...
[ -1, -1, -1, -1, -1, -1, 8, 5, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "zsSy9pYkXPS", "pJTIb0hY9H9", "xOCzyZoINDT", "yu3tBmqfNS3", "ryA09R6-wx7", "nips_2022_PIXGY1WgU-S", "nips_2022_PIXGY1WgU-S", "nips_2022_PIXGY1WgU-S", "nips_2022_PIXGY1WgU-S", "nips_2022_PIXGY1WgU-S" ]
nips_2022_Haj8_Rwqq_H
Incrementality Bidding via Reinforcement Learning under Mixed and Delayed Rewards
Incrementality, which measures the causal effect of showing an ad to a potential customer (e.g. a user in an internet platform) versus not, is a central object for advertisers in online advertising platforms. This paper investigates the problem of how an advertiser can learn to optimize the bidding sequence in an online manner \emph{without} knowing the incrementality parameters in advance. We formulate the offline version of this problem as a specially structured episodic Markov Decision Process (MDP) and then, for its online learning counterpart, propose a novel reinforcement learning (RL) algorithm with regret at most $\widetilde{O}(H^2\sqrt{T})$, which depends on the number of rounds $H$ and number of episodes $T$, but does not depend on the number of actions (i.e., possible bids). A fundamental difference between our learning problem from standard RL problems is that the realized reward feedback from conversion incrementality is \emph{mixed} and \emph{delayed}. To handle this difficulty we propose and analyze a novel pairwise moment-matching algorithm to learn the conversion incrementality, which we believe is of independent interest.
Accept
The paper studies sequential bidding problems by formulating them as MDPs (previous attempts made simplifying assumptions that yielded bandit formulations). The reviewers found the arguments about the causal effects of incrementality requiring richer modeling very convincing, and appreciated the difficulty of the problem due to delayed and mixed rewards. This necessitated a new combination of pairwise moment matching with optimism in the face of uncertainty. The authors clarified the reviewers' technical questions about the PAMM algorithm during the feedback phase, and the reviewers reached consensus that the paper's contributions are novel, interesting, and likely to lead to future research and real-world applications.
train
[ "MiYJTb-6tz", "st9jqPSeb-c", "IraQHw9PUOq", "_FCp_JMfUkg", "iOolfk0MlHl", "rO2_RVzFHdj", "cXS5nCjmidG", "sa_u7ZTzZia", "Fx_78eqZHVo", "JmMzXiyt3MO", "ao62zXSWKw5", "kUL5buDIRqp" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer ZwJW,\n\nSince today is the final day for author-reviewer discussion period and we haven't heard back from you, please let us know if our response makes sense to you and we are happy to answer your questions at the last time.", " Thanks for the detailed response! I believe this is interesting work...
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4, 3 ]
[ "cXS5nCjmidG", "iOolfk0MlHl", "rO2_RVzFHdj", "kUL5buDIRqp", "ao62zXSWKw5", "JmMzXiyt3MO", "Fx_78eqZHVo", "nips_2022_Haj8_Rwqq_H", "nips_2022_Haj8_Rwqq_H", "nips_2022_Haj8_Rwqq_H", "nips_2022_Haj8_Rwqq_H", "nips_2022_Haj8_Rwqq_H" ]
nips_2022_k4KHXS6_zOV
Implicit Bias of Gradient Descent on Reparametrized Models: On Equivalence to Mirror Descent
As part of the effort to understand implicit bias of gradient descent in overparametrized models, several results have shown how the training trajectory on the overparametrized model can be understood as mirror descent on a different objective. The main result here is a complete characterization of this phenomenon under a notion termed commuting parametrization, which encompasses all the previous results in this setting. It is shown that gradient flow with any commuting parametrization is equivalent to continuous mirror descent with a related mirror map. Conversely, continuous mirror descent with any mirror map can be viewed as gradient flow with a related commuting parametrization. The latter result relies upon Nash's embedding theorem.
Accept
The paper characterizes how reparametrized/overparametrized models can be understood as a version of mirror descent on a different objective through a notion they call commuting parameterization. The reviews are all positive and agree that the paper improves our understanding of this phenomenon.
test
[ "cuZbRbDRfxK", "yq6JxHJVarm", "v7bCZG5OQka", "xES935Ymr8", "Pw2jhzpS_wY", "hVc3slxkhCD", "oLwnD2FiMrI", "U17bThc-TQ-", "sZBd5hCHhRg", "EW2HoALRXZU" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the positive feedback! As suggested by the reviewer, we will add the discussion on the necessary condition and the restrictiveness of the injective setting to the main text in the future version.\n\nAlso, we kindly remind the reviewer that the rating seems still unchanged in the system."...
[ -1, -1, -1, -1, -1, -1, -1, 5, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "yq6JxHJVarm", "oLwnD2FiMrI", "U17bThc-TQ-", "nips_2022_k4KHXS6_zOV", "EW2HoALRXZU", "sZBd5hCHhRg", "U17bThc-TQ-", "nips_2022_k4KHXS6_zOV", "nips_2022_k4KHXS6_zOV", "nips_2022_k4KHXS6_zOV" ]
nips_2022_x_HUcWi1aF1
Provably Efficient Offline Multi-agent Reinforcement Learning via Strategy-wise Bonus
This paper considers offline multi-agent reinforcement learning. We propose the strategy-wise concentration principle which directly builds a confidence interval for the joint strategy, in contrast to the point-wise concentration principle which builds a confidence interval for each point in the joint action space. For two-player zero-sum Markov games, by exploiting the convexity of the strategy-wise bonus, we propose a computationally efficient algorithm whose sample complexity enjoys a better dependency on the number of actions than the prior methods based on the point-wise bonus. Furthermore, for offline multi-agent general-sum Markov games, based on the strategy-wise bonus and a novel surrogate function, we give the first algorithm whose sample complexity only scales with $\sum_{i=1}^m A_i$, where $A_i$ is the action size of the $i$-th player and $m$ is the number of players. In sharp contrast, the sample complexity of methods based on the point-wise bonus would scale with the size of the joint action space $\Pi_{i=1}^m A_i$ due to the curse of multiagents. Lastly, all of our algorithms can naturally take a pre-specified strategy class $\Pi$ as input and output a strategy that is close to the best strategy in $\Pi$. In this setting, the sample complexity only scales with $\log |\Pi|$ instead of $\sum_{i=1}^m A_i$.
Accept
Reviewers appreciate the paper's contribution to a novel intersection of fields: offline and multi-agent RL. While feasibility of the results is limited to cases where prior knowledge allows strategy-wise decomposition, it is nonetheless an interesting step in this field. Reviewers are concerned that the above substantial limitation of the work has not been sufficiently discussed in the paper, and the authors are asked to clarify this aspect in a subsequent revision.
train
[ "ZfZtWrHG-5O9", "mvlT4xvGI7C", "MtDGadI-81B", "hhzqtWrIzDOA", "1kG1jbnjeIk", "Bm6mmUWxQR3", "hkMpx7N42M", "X64_8GxD4i", "OgcrHRq7dCk" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your review. We hope our answers could increase your confidence. As the discussion period is close to the end and we have not yet heard back from you, we would be glad to see if our rebuttal response has addressed your concerns questions/concerns.\nWe are more than happy to discuss further if you...
[ -1, -1, -1, -1, -1, -1, 6, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "OgcrHRq7dCk", "X64_8GxD4i", "hkMpx7N42M", "OgcrHRq7dCk", "X64_8GxD4i", "hkMpx7N42M", "nips_2022_x_HUcWi1aF1", "nips_2022_x_HUcWi1aF1", "nips_2022_x_HUcWi1aF1" ]
nips_2022_xp5VOBxTxZ
Understanding the Generalization Benefit of Normalization Layers: Sharpness Reduction
Normalization layers (e.g., Batch Normalization, Layer Normalization) were introduced to help with optimization difficulties in very deep nets, but they clearly also help generalization, even in not-so-deep nets. Motivated by the long-held belief that flatter minima lead to better generalization, this paper gives mathematical analysis and supporting experiments suggesting that normalization (together with accompanying weight-decay) encourages GD to reduce the sharpness of loss surface. Here ``sharpness'' is carefully defined given that the loss is scale-invariant, a known consequence of normalization. Specifically, for a fairly broad class of neural nets with normalization, our theory explains how GD with a finite learning rate enters the so-called Edge of Stability (EoS) regime, and characterizes the trajectory of GD in this regime via a continuous sharpness-reduction flow.
Accept
This paper demonstrates how normalization with weight decay forces GD to converge to flatter solutions. Overall, the reviewers find that the paper does bring some new insights, perhaps with a limited impact but still valuable to the community. There were also a few concerns left after the discussion period, e.g. - Discussion of prior work is rather superficial. Some key works are missing or not discussed in appropriate depth - Limited empirical evaluation Given the discussion with the reviewers, the first concern seems easily fixable in the camera-ready version. Regarding the second concern, the paper does however compensate with a solid technical contribution. I therefore recommend acceptance but I strongly encourage the authors to update their manuscript to address the above concerns. The writing quality is sometimes poor and should be improved in the final version.
train
[ "C4GXI-4oQRp", "1e15x-1zKB", "oKKd3ZDDe8J", "lGJsIoiu27fL", "pp_1j0sgxfk", "dRpIXEJKWuO", "lX53HrdRvr8", "oZwY-1Wc61", "TzIdg3w4IOm", "4SzSycmKWrn", "SVkmFWDjQYL", "ws6lYnYb11U", "pN0qsQN_-Ie", "W_J8vRWR1qR", "JO_W4HEsLfo", "Pghv4wbV_f8" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the response. Below we would like to explain more about our efforts in response to the reviewer's request to add more discussion on the related works.\n\n1. We have cited all the papers the reviewer mentioned, and we promise to include a complete discussion on how people understand norma...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 1, 3 ]
[ "pp_1j0sgxfk", "lX53HrdRvr8", "TzIdg3w4IOm", "oZwY-1Wc61", "4SzSycmKWrn", "Pghv4wbV_f8", "JO_W4HEsLfo", "W_J8vRWR1qR", "pN0qsQN_-Ie", "ws6lYnYb11U", "nips_2022_xp5VOBxTxZ", "nips_2022_xp5VOBxTxZ", "nips_2022_xp5VOBxTxZ", "nips_2022_xp5VOBxTxZ", "nips_2022_xp5VOBxTxZ", "nips_2022_xp5VOB...
nips_2022_Qh89hwiP5ZR
Tree Mover's Distance: Bridging Graph Metrics and Stability of Graph Neural Networks
Understanding generalization and robustness of machine learning models fundamentally relies on assuming an appropriate metric on the data space. Identifying such a metric is particularly challenging for non-Euclidean data such as graphs. Here, we propose a pseudometric for attributed graphs, the Tree Mover's Distance (TMD), and study its relation to generalization. Via a hierarchical optimal transport problem, TMD reflects the local distribution of node attributes as well as the distribution of local computation trees, which are known to be decisive for the learning behavior of graph neural networks (GNNs). First, we show that TMD captures properties relevant for graph classification: a simple TMD-SVM can perform competitively with standard GNNs. Second, we relate TMD to generalization of GNNs under distribution shifts, and show that it correlates well with performance drop under such shifts.
Accept
This paper proposes a new similarity measure between graphs, based on computing optimal transport between distributions of trees extracted from graphs. The method benefits from the fast solvers for OT between trees, and the proposed metric has been shown to be interesting for computing a "Lipschitz" constant related to the generalization of message passing GNNs. The experiments were appreciated, but a lack of comparison with existing graph distances and GNNs was noted by the reviewers on the graph classification experiment. The authors provided a very good reply to the reviewers, which was much appreciated. For instance, the new experiments are very interesting and should be included in the paper or supplementary. The fact that the performance does not depend too much on the classifier (SVM vs. KNN) is also interesting. During discussion the consensus was that the paper deserves to be published at NeurIPS, but the authors are requested to include the new results and discussions/clarifications in the paper and supplementary.
train
[ "rOpQcySQdo", "JTsvelAr0T", "65bAwoAYFTD", "ak3cGJ6r85q", "XJ6__EaW1ZE", "IQLIhtPri4P", "qbEXY4uWTsb", "jLf2i2oS414", "yrk-GTlP8F6", "xSbDmLPbPyw", "7fdciPm0uDy", "4pJei-RM_sE", "DiVioYu8vS", "x0unAcOz-ke" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response! In the appendix, we also demonstrate the applications of TMD in graph clustering (C.1) and t-SNE visualization of graphs (C.2). Without supervision, TMD can still capture meaningful structure of graphs. In summary, we demonstrate the applicability of TMD in terms of graph classi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "65bAwoAYFTD", "XJ6__EaW1ZE", "qbEXY4uWTsb", "IQLIhtPri4P", "xSbDmLPbPyw", "jLf2i2oS414", "x0unAcOz-ke", "DiVioYu8vS", "4pJei-RM_sE", "7fdciPm0uDy", "nips_2022_Qh89hwiP5ZR", "nips_2022_Qh89hwiP5ZR", "nips_2022_Qh89hwiP5ZR", "nips_2022_Qh89hwiP5ZR" ]
nips_2022_FlrQGoHPcvo
Quantifying Statistical Significance of Neural Network-based Image Segmentation by Selective Inference
Although a vast body of literature relates to image segmentation methods that use deep neural networks (DNNs), less attention has been paid to assessing the statistical reliability of segmentation results. In this study, we interpret the segmentation results as hypotheses driven by DNN (called DNN-driven hypotheses) and propose a method to quantify the reliability of these hypotheses within a statistical hypothesis testing framework. To this end, we introduce a conditional selective inference (SI) framework---a new statistical inference framework for data-driven hypotheses that has recently received considerable attention---to compute exact (non-asymptotic) valid p-values for the segmentation results. To use the conditional SI framework for DNN-based segmentation, we develop a new SI algorithm based on the homotopy method, which enables us to derive the exact (non-asymptotic) sampling distribution of DNN-driven hypothesis. We conduct several experiments to demonstrate the performance of the proposed method.
Accept
This paper proposes a conditional selective inference based technique for quantifying the statistical significance of an image segmentation generated by a DNN. Estimating the statistical significance of a DNN-generated image segmentation is an important problem in medical image analysis, and the proposed technique extends the applicability of selective inference beyond the previously applicable domains of thresholding or graph-partitioning based segmentation methods to segmentations generated by DNNs with piecewise linear activation functions. While all reviewers found the proposed method to be of value to the community at large, they commented on the limited evaluation of the method. The method has only been applied to simple segmentation networks, and not state-of-the-art architectures; further, it has only been applied to a limited set of datasets, suggesting practical challenges with scaling this technique to realistic networks and datasets. Regardless, the reviewers unanimously recommend acceptance, and I hope this work will stimulate further practical progress in this important area.
train
[ "9KDsPfwImg9", "6I11nEc7O9i", "kT6oWfUEvMEt", "I3ndgR0BJoj", "hQ5r2NJAF_O", "2br2DvZjNaK", "5YSq0MYLpKu", "Y5kKv13nTGj" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your additional suggestion. In the revised paper, we will additionally provide the following discussion:\n\n\"At step 4 of Algorithm 2, we apply a trained DNN to the parametrized data $\\boldsymbol x(z_t)$ at any breakpoints $z_t$. Therefore, the number of forward passes of the network is equal to t...
[ -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "6I11nEc7O9i", "I3ndgR0BJoj", "Y5kKv13nTGj", "5YSq0MYLpKu", "2br2DvZjNaK", "nips_2022_FlrQGoHPcvo", "nips_2022_FlrQGoHPcvo", "nips_2022_FlrQGoHPcvo" ]
nips_2022_l5UNyaHqFdO
Adam Can Converge Without Any Modification On Update Rules
Ever since \citet{reddi2019convergence} pointed out the divergence issue of Adam, many new variants have been designed to obtain convergence. However, vanilla Adam remains exceptionally popular and it works well in practice. Why is there a gap between theory and practice? We point out there is a mismatch between the settings of theory and practice: \citet{reddi2019convergence} pick the problem after picking the hyperparameters of Adam, i.e., $(\beta_1,\beta_2)$; while practical applications often fix the problem first and then tune $(\beta_1,\beta_2)$. Due to this observation, we conjecture that the empirical convergence can be theoretically justified, only if we change the order of picking the problem and hyperparameter. In this work, we confirm this conjecture. We prove that, when the 2nd-order momentum parameter $\beta_2$ is large and 1st-order momentum parameter $\beta_1 < \sqrt{\beta_2}<1$, Adam converges to the neighborhood of critical points. The size of the neighborhood is proportional to the variance of stochastic gradients. Under an extra condition (strong growth condition), Adam converges to critical points. It is worth mentioning that our results cover a wide range of hyperparameters: as $\beta_2$ increases, our convergence result can cover any $\beta_1 \in [0,1)$ including $\beta_1=0.9$, which is the default setting in deep learning libraries. To our knowledge, this is the first result showing that Adam can converge {\it without any modification} on its update rules. Further, our analysis does not require assumptions of bounded gradients or bounded 2nd-order momentum. When $\beta_2$ is small, we further point out a large region of $(\beta_1,\beta_2)$ combinations where Adam can diverge to infinity. Our divergence result considers the same setting (fixing the optimization problem ahead) as our convergence result, indicating that there is a phase transition from divergence to convergence when increasing $\beta_2$. 
These positive and negative results provide suggestions on how to tune Adam hyperparameters: for instance, when Adam does not work well, we suggest tuning up $\beta_2$ and trying $\beta_1< \sqrt{\beta_2}$.
Accept
This paper proves that the vanilla Adam algorithm can converge to a stationary point with properly chosen $\beta_1$ and $\beta_2$ if the number of samples $n$ is fixed, in contrast to the well-known non-convergence example by Reddi et al. This result has a clear conceptual message, which justifies the use of vanilla Adam, and it's worth sharing the result with the ML community. The reviewers find the presentation and the analysis to be of high quality as well.
train
[ "-9LuFTVCNc4", "ZAtuH-NPf_2", "uhAS8RJpkAO", "PZAwFJeVid9", "LTwOsu85BZY", "9Jt5OrVmcE", "LZfy4_IGCq-", "_IbBje-VNkG", "2FRfFXxXdjc", "075DHTByVfJY", "cR1bslxjRqI", "2mwHhctYeoY", "NemvYW82BLLN", "9cGpcNBL4itT", "BsmLTHeGRTG", "g6J4d9nSOQ5", "7FePaQmNb-", "OzLbehhEgES", "AtMKyAEW...
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for the positive feedback. Thanks again for your effort and time in reviewing our paper. ", " Thanks for the authors' effort to address my confusion and questions. I am satisfied with the authors' clarification as well as the revision of the paper. I would adjust the rating accor...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "ZAtuH-NPf_2", "_IbBje-VNkG", "2FRfFXxXdjc", "LTwOsu85BZY", "9cGpcNBL4itT", "nips_2022_l5UNyaHqFdO", "g6J4d9nSOQ5", "g6J4d9nSOQ5", "AtMKyAEWYwD", "OzLbehhEgES", "OzLbehhEgES", "OzLbehhEgES", "OzLbehhEgES", "7FePaQmNb-", "7FePaQmNb-", "nips_2022_l5UNyaHqFdO", "nips_2022_l5UNyaHqFdO", ...
nips_2022_S-Vig7pTRXq
When are Offline Two-Player Zero-Sum Markov Games Solvable?
We study what dataset assumption permits solving offline two-player zero-sum Markov games. In stark contrast to the offline single-agent Markov decision process, we show that the single strategy concentration assumption is insufficient for learning the Nash equilibrium (NE) strategy in offline two-player zero-sum Markov games. On the other hand, we propose a new assumption named unilateral concentration and design a pessimism-type algorithm that is provably efficient under this assumption. In addition, we show that the unilateral concentration assumption is necessary for learning an NE strategy. Furthermore, our algorithm can achieve minimax sample complexity without any modification for two widely studied settings: dataset with uniform concentration assumption and turn-based Markov games. Our work serves as an important initial step towards understanding offline multi-agent reinforcement learning.
Accept
This paper is a clear accept, with very positive reviews overall. I trust that the reviewers will address the minor concerns raised by the reviewers in their final version.
train
[ "zbrF2Gi-rgQ", "VpCkya_Cqoi", "KkzGq90uEeG", "bMGd-oFGDL", "thUMdWSeSWf", "QYe45mMJMw", "6CEa4XSCW-Sf", "QM2yCRxv01N", "k-1Kz2v6vxC", "GJ1tUql_bTa", "V5GzH2L7Zh8", "wXntofXCrn1", "_Ilx5fWRZH1" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the reply. I have no further question at this point.", " Thanks again for your review. We hope our answers could increase your confidence. As the discussion period is close to the end and we have not yet heard back from you, we would be glad to see if our rebuttal response has addressed your concerns...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 2, 3 ]
[ "QM2yCRxv01N", "V5GzH2L7Zh8", "GJ1tUql_bTa", "thUMdWSeSWf", "_Ilx5fWRZH1", "wXntofXCrn1", "V5GzH2L7Zh8", "GJ1tUql_bTa", "nips_2022_S-Vig7pTRXq", "nips_2022_S-Vig7pTRXq", "nips_2022_S-Vig7pTRXq", "nips_2022_S-Vig7pTRXq", "nips_2022_S-Vig7pTRXq" ]
nips_2022_WljzqTo9xzw
Optimal Neural Network Approximations of Wasserstein Gradient Direction via Convex Optimization
The computation of Wasserstein gradient direction is essential for posterior sampling problems and scientific computing. The approximation of the Wasserstein gradient with finite samples requires solving a variational problem. We study the variational problem in the family of two-layer networks with squared-ReLU activations, towards which we derive a semi-definite programming (SDP) relaxation. This SDP can be viewed as an approximation of the Wasserstein gradient in a broader function family including two-layer networks. By solving the convex SDP, we obtain the optimal approximation of the Wasserstein gradient direction in this class of functions. Numerical experiments including PDE-constrained Bayesian inference and parameter estimation in COVID-19 modeling demonstrate the effectiveness of the proposed method.
Reject
This work proposes an SDP approach to computing the Wasserstein gradient direction for two-layer NNs, without the need to train the underlying NN. To compute the gradient direction, the authors construct a least-squares regression problem and add a polynomial regularization term. Then, they show that the (relaxed) dual is an SDP problem. _Pros_ - The idea of casting the Wasserstein gradient direction as an SDP is novel and interesting. It also paves the way to more general formulations - The obtained optimum is *global* _Cons_ - The exposition is lacking some motivation at some points. I think the authors could have moved some technical discussion (for instance after Thm 1) to give better insight into the motivations. Some previous work is also sometimes not put in context (for instance regarding the dimensionality reduction). - For a (mostly) theoretical paper, the statements are sometimes not precise enough, e.g., what is an "equivalent" problem: having the same minimum? the same argmin? In Prop 1, what properties are required on the function space? Don't you need hypotheses on $\psi$? etc. - From a practical point of view -- but I don't think that practicality is the core aspect of the paper -- the computational cost is totally prohibitive, as discussed by all referees. - It does not seem that there is a strong practical improvement with respect to directly training the NN after the parameterization of the Wasserstein gradient. I believe the idea of casting the Wasserstein gradient direction as an SDP problem is interesting, but with respect to the ratio of pros/cons above, and the lack of a strong positive opinion on this work, I recommend rejecting this submission in its current state. I encourage the authors to revise the manuscript in light of the comments by all reviewers and my own comments for a future submission. In particular, the revision should include the discussion with reviewer RwmS, which better highlights your work.
test
[ "Wz9OrFKqX5w", "SfTEsf5Z5H", "bP9c-OAjxXt", "8uV0ueabS_L", "WfQA-ec0vjI", "ijjLViyXJg2", "INt9LNB_o-g", "sb8csEQx5XC", "3wcAw0Fqvqp", "vmDMDdCJPQy", "0sxO_ZyxkHg", "hYZWXy735gR", "7ueaq2ZC3zC", "q6anjbZffoY", "dmGhFdfJr6e", "ucxOzDdvZ9j", "NUHwEyM94cq" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have updated my score accordingly.", " Maybe I am missing something and misunderstood your method. So, what happens if the target distribution is a uniform distribution? ", " Dear Reviewer XtC2,\n\nWe thank you for your decision to follow the consensus of other reviews, with all ratings to accept the paper....
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "bP9c-OAjxXt", "8uV0ueabS_L", "ijjLViyXJg2", "WfQA-ec0vjI", "0sxO_ZyxkHg", "3wcAw0Fqvqp", "vmDMDdCJPQy", "nips_2022_WljzqTo9xzw", "NUHwEyM94cq", "ucxOzDdvZ9j", "dmGhFdfJr6e", "q6anjbZffoY", "nips_2022_WljzqTo9xzw", "nips_2022_WljzqTo9xzw", "nips_2022_WljzqTo9xzw", "nips_2022_WljzqTo9xz...
nips_2022_mhQLcMjWw75
Federated Learning from Pre-Trained Models: A Contrastive Learning Approach
Federated Learning (FL) is a machine learning paradigm that allows decentralized clients to learn collaboratively without sharing their private data. However, excessive computation and communication demands pose challenges to current FL frameworks, especially when training large-scale models. To prevent these issues from hindering the deployment of FL systems, we propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models rather than training a large-scale model from scratch. This leads us to a more practical FL problem by considering how to capture more client-specific and class-relevant information from the pre-trained models and jointly improve each client's ability to exploit those off-the-shelf models. Here, we design a Federated Prototype-wise Contrastive Learning (FedPCL) approach which shares knowledge across clients through their class prototypes and builds client-specific representations in a prototype-wise contrastive manner. Sharing prototypes rather than learnable model parameters allows each client to fuse the representations in a personalized way while keeping the shared knowledge in a compact form for efficient communication. We perform a thorough evaluation of the proposed FedPCL in the lightweight framework, measuring and visualizing its ability to fuse various pre-trained models on popular FL datasets.
Accept
To train large-scale models in federated learning settings, this paper presents a Federated Prototype-wise Contrastive Learning (FedPCL) approach. FedPCL aims to solve federated learning problems by utilizing the representations of multiple fixed pre-trained models. Specifically, it computes the class prototypes of each client, and their average. Then it sends them to each client, and builds a contrastive loss function based on them. Experiments show the effectiveness of FedPCL under the proposed lightweight framework. However, this paper suffers from several limitations. Firstly, the algorithm violates the privacy principle of federated learning, since all the prototypes of each client are sent to all the clients. In addition, the t-test indicates that the differences between FedPCL and FedProto are not statistically significant for many cases in Table 2.
train
[ "ijQwt_qioS", "xd2Z3TXZYMI", "sRMnXIezD2", "JeyxonASJ0U", "l9QEtxegGH6", "TTKBIHbi0q4", "gBu0dAQJQzp", "jPSwP9bfmWX", "JvKRLxK0hga", "deyRuLrrNdg", "8BwySeLYYWr", "G2JOJ4kJlu", "s1W9GunBrhA", "5RED7_epHbt", "hoJanweX8JV", "LE68BouOkhj" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After reading the authors response I will keep my score unchanged.", " The author addressed most of my concerns. Thus, I tend to raise my score.", " Dear reviewers,\n\nThank you again for reading our rebuttal. We have tried our best to address most if not all concerns raised by you. Please let us know if you ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "l9QEtxegGH6", "JvKRLxK0hga", "jPSwP9bfmWX", "jPSwP9bfmWX", "TTKBIHbi0q4", "8BwySeLYYWr", "deyRuLrrNdg", "nips_2022_mhQLcMjWw75", "s1W9GunBrhA", "5RED7_epHbt", "LE68BouOkhj", "hoJanweX8JV", "nips_2022_mhQLcMjWw75", "nips_2022_mhQLcMjWw75", "nips_2022_mhQLcMjWw75", "nips_2022_mhQLcMjWw7...
nips_2022_Xm9iN3UsdpH
Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression
Recent advances in distributed optimization and learning have shown that communication compression is one of the most effective means of reducing communication. While there have been many results for convergence rates with compressed communication, a lower bound is still missing. Analyses of algorithms with communication compression have identified two abstract properties that guarantee convergence: the unbiased property or the contractive property. They can be applied either unidirectionally (compressing messages from worker to server) or bidirectionally. In the smooth and non-convex stochastic regime, this paper establishes a lower bound for distributed algorithms whether using unbiased or contractive compressors in unidirection or bidirection. To close the gap between this lower bound and the best existing upper bound, we further propose an algorithm, NEOLITHIC, that almost reaches our lower bound (except for a logarithm factor) under mild conditions. Our results also show that using contractive compressors in bidirection can yield iterative methods that converge as fast as those using unbiased compressors unidirectionally. We report experimental results that validate our findings.
Accept
The authors have addressed many if not most of the issues raised, as evidenced by the increase in scores. All reviewers agree that the paper is worthy of acceptance. The analysis is insightful. The authors are encouraged to contextualize their results with other gradient quantization schemes such as Gandikota et al., 2021, or lower bounds such as Mayekar et al., 2020, and also to motivate the "sup over compressors" measure.
train
[ "5pufEtOGKx8", "okLW8Nulc-P", "pZYBqi3lOR_", "bGv1FTK4t6M", "n4UUY_eUyAR", "ehoe4U4P-Ze", "JT7x68_xgu", "QDB9-jlaebT", "4cKQ940jhn3", "oJvUNUR6m5p", "aVzVr_yJgY_", "UgziQIp9CCP", "mq--qPbQ1Jx", "c5r5kI-raM3", "yn8lAWEJUW", "px1aKIdpw8", "bskWzb7Tg-h", "NaqfKjDC1sG", "Nv3Tgz60Yu" ...
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \\\nDear reviewer wUMh,\n\n\\\nWe really appreciate your acknowledgement of our clarifications. We will definitely include these clarification details and the additional plots in our later revision.\n\n\\\nBest, \n\nThe authors of paper 8019", " Dear Authors,\n\nMany thanks for your detailed and comprehensive a...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "okLW8Nulc-P", "mq--qPbQ1Jx", "bskWzb7Tg-h", "JT7x68_xgu", "JT7x68_xgu", "aVzVr_yJgY_", "QDB9-jlaebT", "Nv3Tgz60Yu", "nips_2022_Xm9iN3UsdpH", "NaqfKjDC1sG", "NaqfKjDC1sG", "bskWzb7Tg-h", "c5r5kI-raM3", "bskWzb7Tg-h", "Nv3Tgz60Yu", "Nv3Tgz60Yu", "nips_2022_Xm9iN3UsdpH", "nips_2022_X...
nips_2022_b6to5kfFhQh
Left Heavy Tails and the Effectiveness of the Policy and Value Networks in DNN-based best-first search for Sokoban Planning
Despite the success of practical solvers in various NP-complete domains such as SAT and CSP as well as using deep reinforcement learning to tackle two-player games such as Go, certain classes of PSPACE-hard planning problems have remained out of reach. Even carefully designed domain-specialized solvers can fail quickly due to the exponential search space on hard instances. Recent works that combine traditional search methods, such as best-first search and Monte Carlo tree search, with Deep Neural Networks' (DNN) heuristics have shown promising progress and can solve a significant number of hard planning instances beyond specialized solvers. To better understand why these approaches work, we studied the interplay of the policy and value networks of DNN-based best-first search on Sokoban and show the surprising effectiveness of the policy network, further enhanced by the value network, as a guiding heuristic for the search. To further understand the phenomena, we studied the cost distribution of the search algorithms and found that Sokoban instances can have heavy-tailed runtime distributions, with tails both on the left and right-hand sides. In particular, for the first time, we show the existence of \textit{left heavy tails} and propose an abstract tree model that can empirically explain the appearance of these tails. The experiments show the critical role of the policy network as a powerful heuristic guiding the search, which can lead to left heavy tails with polynomial scaling by avoiding exploring exponentially sized subtrees. Our results also demonstrate the importance of random restarts, as are widely used in traditional combinatorial solvers, for DNN-based search methods to avoid left and right heavy tails.
Accept
The reviewers found this empirical paper's observations and explanations regarding search guided by a DNN-based heuristic insightful and definitely worth publishing, despite the fact that the study was done on only one benchmark, Sokoban.
test
[ "MVBkVwukveb", "PFsRGewofnZ", "jkTbxN00ieQ", "NsAMJAifnxh", "nyYGwU0y-yS", "KHp5tHuW8ur", "ZXQT4z_Q54x", "-zR845f0fvi", "OHsfZHpx-Da", "2xIBQHtPNDX" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > In this paper, do you consider planning problems as finding a satisficing plan?\n\nIn this paper, yes. But finding shorter plans instead of an arbitrary one is a way more important research question. Not only because shorter plans are combinatorially harder to find, but also training on shorter plans can exploi...
[ -1, -1, -1, -1, -1, -1, 8, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "PFsRGewofnZ", "KHp5tHuW8ur", "2xIBQHtPNDX", "OHsfZHpx-Da", "-zR845f0fvi", "ZXQT4z_Q54x", "nips_2022_b6to5kfFhQh", "nips_2022_b6to5kfFhQh", "nips_2022_b6to5kfFhQh", "nips_2022_b6to5kfFhQh" ]
nips_2022_sOVNpUEgKMp
Learning Generalizable Models for Vehicle Routing Problems via Knowledge Distillation
Recent neural methods for vehicle routing problems always train and test the deep models on the same instance distribution (i.e., uniform). To tackle the consequent cross-distribution generalization concerns, we bring knowledge distillation to this field and propose an Adaptive Multi-Distribution Knowledge Distillation (AMDKD) scheme for learning more generalizable deep models. Particularly, our AMDKD leverages various knowledge from multiple teachers trained on exemplar distributions to yield a light-weight yet generalist student model. Meanwhile, we equip AMDKD with an adaptive strategy that allows the student to concentrate on difficult distributions, so as to absorb hard-to-master knowledge more effectively. Extensive experimental results show that, compared with the baseline neural methods, our AMDKD is able to achieve competitive results on both unseen in-distribution and out-of-distribution instances, which are either randomly synthesized or adopted from benchmark datasets (i.e., TSPLIB and CVRPLIB). Notably, our AMDKD is generic, and consumes fewer computational resources for inference.
Accept
All the reviewers are in agreement to accept the paper. The paper tackles vehicle routing problems via a knowledge distillation framework using student-teacher models. The ideas in the paper are appreciated by all the reviewers. There are minor criticisms, especially regarding scalability with the problem size, the selection of teachers, and the fact that knowledge distillation is a relatively well-known technique. Given the minor criticisms from the original three reviewers, I asked another expert in the field to look at it, and the reviewer agreed with the rest of the pool. Additionally, I took a good look at the paper as well, and I am happy to recommend accepting the manuscript.
train
[ "VaKuKLc4rW", "4Pb2euw_veZ", "kct7zH0hvl8", "jyqAYdzmLG", "0x4vkdnYfP1", "i-WZzoI_BZt", "ZDJt5QOed3T", "w_hREOSztF5W", "ld-CRETwPJM", "dSJW1coIRyd", "J4OmrmdfpPO", "u5Ee-kYWtEg", "kYJoECqxXb", "4CpmjA4o5KE" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The paper focuses on a class of NP-hard combinatorial optimization problems called Vehicle Routing Problems (VRP) which is of wide practical interest. In the deep learning community there is a wide interest in replacing the domain and expert dependent heuristics with data-driven methods like reinforcement learnin...
[ 7, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5 ]
[ 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 1 ]
[ "nips_2022_sOVNpUEgKMp", "ld-CRETwPJM", "0x4vkdnYfP1", "0x4vkdnYfP1", "J4OmrmdfpPO", "nips_2022_sOVNpUEgKMp", "kYJoECqxXb", "4CpmjA4o5KE", "4CpmjA4o5KE", "kYJoECqxXb", "u5Ee-kYWtEg", "nips_2022_sOVNpUEgKMp", "nips_2022_sOVNpUEgKMp", "nips_2022_sOVNpUEgKMp" ]
nips_2022_N-PiuVbkEpp
Online Algorithms for the Santa Claus Problem
The Santa Claus problem is a fundamental problem in {\em fair division}: the goal is to partition a set of {\em heterogeneous} items among {\em heterogeneous} agents so as to maximize the minimum value of items received by any agent. In this paper, we study the online version of this problem where the items are not known in advance and have to be assigned to agents as they arrive over time. If the arrival order of items is arbitrary, then no good assignment rule exists in the worst case. However, we show that, if the arrival order is random, then for $n$ agents and any $\varepsilon > 0$, we can obtain a competitive ratio of $1-\varepsilon$ when the optimal assignment gives value at least $\Omega(\log n / \varepsilon^2)$ to every agent (assuming each item has at most unit value). We also show that this result is almost tight: namely, if the optimal solution has value at most $C \ln n / \varepsilon$ for some constant $C$, then there is no $(1-\varepsilon)$-competitive algorithm even for random arrival order.
Accept
The Santa Claus problem is a well-known problem in fair division, and the online setting considered in this paper is very natural and interesting. The reviewers found the paper well-written and interesting. We encourage the authors to incorporate the reviewers' comments into their final version. Further, given that this problem is less studied in the NeurIPS community, please consider motivating the problem for a broader NeurIPS audience.
train
[ "uTWF7A1s5i", "TBD2zK4uw57", "-IrGM9WBGmY", "1KPMjk896hR", "JFsfyjmDX-r", "Jg_iNS61HOP", "lBSVNdd7JY6", "EYi1TdqI_kl", "TrQ8oY9j7l", "REr-6x36F5t", "nZgMmQdUeLV" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I would like to point out that at least part of the example papers that you list have a connection to ML via their use of machine learned predictions, which is not the case with the current paper. Still, as I wrote this is a good paper and I am not opposed to accepting it.", " I thank t...
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "JFsfyjmDX-r", "-IrGM9WBGmY", "nZgMmQdUeLV", "REr-6x36F5t", "TrQ8oY9j7l", "EYi1TdqI_kl", "nips_2022_N-PiuVbkEpp", "nips_2022_N-PiuVbkEpp", "nips_2022_N-PiuVbkEpp", "nips_2022_N-PiuVbkEpp", "nips_2022_N-PiuVbkEpp" ]
nips_2022_Y6xuQZP7t3
Invariance Learning based on Label Hierarchy
Deep Neural Networks inherit spurious correlations embedded in training data and hence may fail to predict desired labels on unseen domains (or environments), which have different distributions from the domain that provides training data. Invariance Learning (IL) has been developed recently to overcome this shortcoming; using training data in many domains, IL estimates a predictor that is invariant to a change of domain. However, the requirement of training data in multiple domains is a strong restriction on using IL, since it demands expensive annotation. We propose a novel IL framework to overcome this problem. Assuming the availability of data from multiple domains for a higher-level classification task, for which the labeling cost is lower, we estimate an invariant predictor for the target classification task with training data gathered in a single domain. Additionally, we propose two cross-validation methods for selecting hyperparameters of invariance regularization, which has not been addressed properly in existing IL methods. The effectiveness of the proposed framework, including the cross-validation, is demonstrated empirically. Theoretical analysis reveals that our framework can estimate the desirable invariant predictor with a hyperparameter fixed correctly, and that such a preferable hyperparameter is chosen by the proposed CV methods under some conditions.
Accept
This paper targets the invariant learning problem for out-of-distribution generalization. A new framework is proposed, which enables invariant predictors to be learned from single-domain data with the help of additional data from multiple domains. Both theoretical analysis and empirical evaluations are provided to verify the effectiveness of the framework. All reviewers are positive about this paper. They highlight that (a) the idea is interesting; (b) the theoretical analysis is detailed and supports the paper's claims, which can contribute to the research community; (c) the writing and organization are overall great. Major concerns raised in the review process are also addressed. The meta-reviewer is happy to recommend acceptance and suggests the authors carefully merge the rebuttals into the final version.
train
[ "_W1vr0AApDA", "YqcIyqrK97", "Rl0mwpdLYSE", "TBZsfFdWeqj", "kvR3zyXJhz9", "E6Y5AgTjJs", "pKhHKroeDki", "3Mvnw8Ff78X", "iGNRuWrqV0M", "ulJS24E8LQw", "8gzI2UyOkOT", "usVCDX55VIa", "s7Fr3kAYOpM", "p3g6AVzhgie", "8d9oWWfllk-", "aAoQjxkEFe", "R-K6Wg9857f", "o4Mt4NR5jYE", "TBtdciueNgv"...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_r...
[ " \nWe would like to thank the reviewer for reading our response and giving additional suggestions about our submission.\n\nQ. Theorems should be self-contained even without context. \\\nA. To make the statement of the theorems self-contained, we specify the assumption at each theorem. \n\nQ. The corresponding disc...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "YqcIyqrK97", "E6Y5AgTjJs", "TBZsfFdWeqj", "R-K6Wg9857f", "pKhHKroeDki", "TBtdciueNgv", "8d9oWWfllk-", "PCvgOv-apTX", "ulJS24E8LQw", "usVCDX55VIa", "nips_2022_Y6xuQZP7t3", "aAoQjxkEFe", "TBtdciueNgv", "TBtdciueNgv", "PCvgOv-apTX", "5b_1xkJ082T", "o4Mt4NR5jYE", "nips_2022_Y6xuQZP7t3...
nips_2022_tUH1Or4xblM
Segmenting Moving Objects via an Object-Centric Layered Representation
The objective of this paper is a model that is able to discover, track and segment multiple moving objects in a video. We make four contributions: First, we introduce an object-centric segmentation model with a depth-ordered layer representation. This is implemented using a variant of the transformer architecture that ingests optical flow, where each query vector specifies an object and its layer for the entire video. The model can effectively discover multiple moving objects and handle mutual occlusions; Second, we introduce a scalable pipeline for generating multi-object synthetic training data via layer compositions, that is used to train the proposed model, significantly reducing the requirements for labour-intensive annotations, and supporting Sim2Real generalisation; Third, we conduct thorough ablation studies, showing that the model is able to learn object permanence and temporal shape consistency, and is able to predict amodal segmentation masks; Fourth, we evaluate our model, trained only on synthetic data, on standard video segmentation benchmarks, DAVIS, MoCA, SegTrack, FBMS-59, and achieve state-of-the-art performance among existing methods that do not rely on any manual annotations. With test-time adaptation, we observe further performance boosts.
Accept
This paper uses synthetic data to train a CNN + transformer architecture for amodal object segmentation from optical flow input. The model architecture can be viewed as an adaptation of DETR [12] to a different task. Reviewer ratings lean positive, although there are concerns about experimental validation, as the combination of training regime (using synthetic data) and input modality (optical flow) does not match that of other methods tested on the same datasets; the proposed OCLR system outperforms self-supervised methods, but falls behind the state-of-the-art systems trained on real data, while using different training resources than either class. The author response partially alleviates this ambiguity, with an additional ablation study comparing to an optical flow based Mask R-CNN model trained on synthetic data.
train
[ "LLvGLKt3Sjh", "K-RY3hy7Ji", "Y9MeYpQr2pV", "0KeMYlyF36H", "PbNtfWIDlPM", "3lSpXeIFjA", "gbu9Ktx_DKl", "vyfSVcYj0Jf", "KSvSm7mbR9d", "e1XT1k7hHi", "NRtoGO-4-7r", "yzWUioelOY", "p7UDmTcBFpN" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the rebuttal! \nMy concern regarding the test-time adaptation is well-addressed. Also, I agree with the authros that mutual occlusion is a challenging problem, and it is not necessary to be solved in this work as the current contributions are sufficient. Therefore, I maintain my positive rating.\n", ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "3lSpXeIFjA", "Y9MeYpQr2pV", "gbu9Ktx_DKl", "p7UDmTcBFpN", "p7UDmTcBFpN", "yzWUioelOY", "NRtoGO-4-7r", "e1XT1k7hHi", "nips_2022_tUH1Or4xblM", "nips_2022_tUH1Or4xblM", "nips_2022_tUH1Or4xblM", "nips_2022_tUH1Or4xblM", "nips_2022_tUH1Or4xblM" ]
nips_2022_n4wnZAdBavx
Efficient Multi-agent Communication via Self-supervised Information Aggregation
Utilizing messages from teammates can improve coordination in cooperative Multi-agent Reinforcement Learning (MARL). To obtain meaningful information for decision-making, previous works typically combine raw messages generated by teammates with local information as inputs for the policy. However, neglecting the aggregation of multiple messages makes policy learning inefficient. Motivated by recent advances in representation learning, we argue that efficient message aggregation is essential for good coordination in MARL. In this paper, we propose Multi-Agent communication via Self-supervised Information Aggregation (MASIA), with which agents can aggregate the received messages into compact representations with high relevance to augment the local policy. Specifically, we design a permutation-invariant message encoder to generate an aggregated representation of common information from raw messages and optimize it by reconstructing and shooting future information in a self-supervised manner. Each agent utilizes the most relevant parts of the aggregated representation for decision-making through a novel message extraction mechanism. Empirical results demonstrate that our method significantly outperforms strong baselines on multiple cooperative MARL tasks across various task settings.
Accept
The reviewers agree that the main strengths are the generality of the approach, as well as the experimental results (especially after the rebuttal which answered most questions). The overall approach of choosing the attention focus is also well-motivated by specific experiments and makes sense. A weakness of the paper that has been discussed is the assumption of full broadcast despite considering a decentralized execution scenario, even though in typical applications there are constraints on communication. The high computation cost of the method has similarly been mentioned (and acknowledged by the authors). While these are limitations of the paper, the discussion still ended up in favor of acceptance for the reasons above.
train
[ "Kt_gDDUwDo", "ei57XgWjOaP", "zv49mChneuD", "NnigDHJs44m", "Vh7jf8-OQXv", "rC_-Zwn7xkr", "Itt8I0TfkT8", "u7x6walX0aX", "-690BLyD9DS", "EJm1XdCg7Fn", "Q1hfz8AcWFm", "BHJYyqqE2n", "qgZLU_aC7qK", "s7XhDpQlq2r", "41O29dNbcl", "fg6KeMUgaE_", "HwnmvITIuzB", "RDfD_DvueQG" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate that the value here is in generality and not necessarily enhanced performance. The other reviewers note the generality and do rate this quality very highly. However, I still wonder about the approach, which requires global information (broadcast), and clearly will not scale and might actually only ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "s7XhDpQlq2r", "qgZLU_aC7qK", "NnigDHJs44m", "HwnmvITIuzB", "rC_-Zwn7xkr", "Itt8I0TfkT8", "RDfD_DvueQG", "HwnmvITIuzB", "fg6KeMUgaE_", "RDfD_DvueQG", "RDfD_DvueQG", "HwnmvITIuzB", "fg6KeMUgaE_", "fg6KeMUgaE_", "nips_2022_n4wnZAdBavx", "nips_2022_n4wnZAdBavx", "nips_2022_n4wnZAdBavx",...
nips_2022_DdxNka9tMRd
Global Convergence of Federated Learning for Mixed Regression
This paper studies the problem of model training under Federated Learning when clients exhibit cluster structure. We contextualize this problem in mixed regression, where each client has limited local data generated from one of $k$ unknown regression models. We design an algorithm that achieves global convergence from any initialization, and works even when local data volume is highly unbalanced -- there could exist clients that contain $O(1)$ data points only. Our algorithm first runs moment descent on a few anchor clients (each with $\tilde{\Omega}(k)$ data points) to obtain coarse model estimates. Then each client alternately estimates its cluster labels and refines the model estimates based on FedAvg or FedProx. A key innovation in our analysis is a uniform estimate on the clustering errors, which we prove by bounding the VC dimension of general polynomial concept classes based on the theory of algebraic geometry.
Accept
This is mainly a theoretical piece of work studying the problem of clustered federated learning under a mixed regression setting. The authors establish convergence guarantees related to the statistical error incurred by the method; the results include eigengap-free bounds on subspace estimation and VC dimension analyses of certain classes of polynomials. The reviewers highlighted many positive attributes of the work, including: - The core contribution of the paper (as it is claimed in the paper) is novel and exciting: Rigorous guarantees on clustered federated learning are few and far between, and I believe that trying to attain this is a significant strength of the paper. - I think the authors do a good job of motivating this type of approach and giving a rigorous taxonomy of the related literature. - At a high level, I believe that the paper's theoretical results are significant and enough to warrant acceptance. It is clear that there is a lot of interesting facets to the analysis, and the authors mainly do a good job of blending together a variety of disparate analytical viewpoints. In particular, the strategy outlined to prove Theorems 1 & 2 seems promising and mainly correct. I have not verified every detail though, but the portions I have looked at all seem correct. In particular, I especially appreciated the application of the Milnor-Thom theorem (though there is a slight, fixable issue with that, see the suggestions below). - I think this is a theoretically strong paper that studies an important topic in federated learning. - The method and theory are both novel to me. Typically, in Phase 1, the authors utilize clients with low data volume to help reduce the sample complexity of estimating the parameters of anchor clients, which is clever. Besides, the theoretical results are highly nontrivial. - The methodology and theoretical analysis are of good quality. Especially, the development of the theoretical results is highly nontrivial. 
- The paper is in general clear. I can follow the main ideas of the paper. - Both methodologies and theories are significant. Besides these words of praise, some criticism was mentioned. The remaining criticism (after rebuttal and discussion) was not very major, however. **I therefore, and with pleasure, propose to accept this paper.** I would wish to stress that it is important to properly address all issues in the camera-ready version of the paper. Best regards, AC
test
[ "IIaSttYyv8e", "Lfx8ErQTwL", "WqDipn_ZIw", "AJWf7sSpNYn", "8ZGCAmgrWTB", "4qu559KkJ0z", "_IDGiJLcDsH", "MO39OGGj6i", "qidCnleJySm", "zsejYJZ2p1K", "Ql_n2KoIuil", "Zf42SgzYnNm" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your positive feedback! We have managed to compare the performance of our two-phase algorithm with existing FL algorithms. We presented the results in our supplementary materials as a new subsection in Appendix H. We will continue polishing up our codes. \n\nIn the added subsection, we compare the p...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "Lfx8ErQTwL", "AJWf7sSpNYn", "qidCnleJySm", "4qu559KkJ0z", "_IDGiJLcDsH", "MO39OGGj6i", "Ql_n2KoIuil", "Zf42SgzYnNm", "zsejYJZ2p1K", "nips_2022_DdxNka9tMRd", "nips_2022_DdxNka9tMRd", "nips_2022_DdxNka9tMRd" ]
nips_2022_5Ce7l5e_aGl
Decentralized, Communication- and Coordination-free Learning in Structured Matching Markets
We study the problem of online learning in competitive settings in the context of two-sided matching markets. In particular, one side of the market, the agents, must learn about their preferences over the other side, the firms, through repeated interaction while competing with other agents for successful matches. We propose a class of decentralized, communication- and coordination-free algorithms that agents can use to reach their stable match in structured matching markets. In contrast to prior works, the proposed algorithms make decisions based solely on an agent's own history of play and require no foreknowledge of the firms' preferences. Our algorithms are constructed by separating the statistical problem of learning one's preferences from noisy observations from the problem of competing for firms. We show that under realistic structural assumptions on the underlying preferences of the agents and firms, the proposed algorithms incur a regret which grows at most logarithmically in the time horizon. However, we note that in the worst case, it may grow exponentially in the size of the market.
Accept
Executive Summary: The paper studies two-sided matching markets, in the stable matching variety. On one side there are the workers and on the other there are firms. Each worker has a ranking over all firms, and each firm has a ranking over the workers. The goal is to find a stable matching where no two (worker, firm) pairs would prefer to split and re-match. The paper studies a learning variant of this where the workers initially don't know their preferences (which are encoded as non-negative real numbers) but the firms do. Workers can propose to firms. When a firm receives more than one proposal it myopically chooses the worker it likes best. Matched workers learn a noisy version of their value for the firm. The high-level question the authors ask is: Does there exist a decentralized and coordination-free algorithm that is based only on local history of interactions which provably converges to a stable matching? The main result is affirmative for alpha-reducible markets (Definition 2), which implies the existence of a unique stable matching. It also enables a partition of the market into submarkets (Remark 3). By combining statistical and adversarial learning techniques, they obtain an algorithm in which a worker in submarket M_i experiences a regret of O(C_i |W| |F| log(T)/ Delta^2 ) against the stable matching. Here W is the set of workers, F is the set of firms, T is the number of rounds, and Delta is the minimum gap in cardinal utility between any two candidates. Crucially, C_k = O(k\theta^k) where k refers to the submarket and \theta is some constant. So when k is of the order of n the "constant" C is actually exponential in n. The authors conclude (citing from the abstract) that their main result shows that "competition need not drastically affect the performance of decentralized, communication and coordination free online learning algorithms". 
--- Discussion: Despite the extremely good and unanimously positive scores, I also see very low confidence scores (going as low as "educated guess"). Personally, I am less excited about the paper than the reviewers. I think what's nice about the paper is that they identify (an established) notion of a structure matching market (alpha-reducibility), and present a decentralized/communication- and coordination-free learning algorithm for it whose regret is parametrized by the number K of submarkets. A major weakness in my eyes is that the regret bound depends exponentially on the number of submarkets. If I am not mistaken then this means that even for simple markets (such as serial dictatorship markets) the dependence of the regret on the number of agents might be exponential. There is no discussion of why this form of dependence is necessary. I still think that this is a nice and non-trivial result, but I feel that the authors slightly overstate the implications (see the last sentence of the abstract), and should better discuss these aspects. Weak accept. --- Comments: ** Please specify everywhere (including the abstract and introduction) that the worse case dependency is exponential in the number of AGENTS (and not only the number of sub-markets). ** Please include examples of alpha-reducible markets.
train
[ "i7Wi37yk9YU", "hWViXMmhmkH", "SqYT87Ez6W2", "r6OTgQpEMAT", "pbHQ3PBY202", "2E0z3Ao1WVL", "lQR0Dv2Y25w", "tfiniwh6BaZ", "uAqzPUv5PI", "uKUWJp8_2Rw" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank reviewer for their reply and increasing the score. We shall definitely incorporate the suggested changes in the final submission. \n\n", " Thanks for the explanation about the relationship between the \\alpha-reducible and other uniqueness conditions. I think it would be better if you could add this di...
[ -1, -1, -1, -1, -1, -1, 7, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, 1, 2, 4, 4 ]
[ "hWViXMmhmkH", "r6OTgQpEMAT", "lQR0Dv2Y25w", "uKUWJp8_2Rw", "uAqzPUv5PI", "tfiniwh6BaZ", "nips_2022_5Ce7l5e_aGl", "nips_2022_5Ce7l5e_aGl", "nips_2022_5Ce7l5e_aGl", "nips_2022_5Ce7l5e_aGl" ]
nips_2022_eK8Z4Ydt2_b
Robust On-Policy Sampling for Data-Efficient Policy Evaluation in Reinforcement Learning
Reinforcement learning (RL) algorithms are often categorized as either on-policy or off-policy depending on whether they use data from a target policy of interest or from a different behavior policy. In this paper, we study a subtle distinction between on-policy data and on-policy sampling in the context of the RL sub-problem of policy evaluation. We observe that on-policy sampling may fail to match the expected distribution of on-policy data after observing only a finite number of trajectories and this failure hinders data-efficient policy evaluation. Towards improved data-efficiency, we show how non-i.i.d., off-policy sampling can produce data that more closely matches the expected on-policy data distribution and consequently increases the accuracy of the Monte Carlo estimator for policy evaluation. We introduce a method called Robust On-Policy Sampling and demonstrate theoretically and empirically that it produces data that converges faster to the expected on-policy distribution compared to on-policy sampling. Empirically, we show that this faster convergence leads to lower mean squared error policy value estimates.
Accept
The paper seeks to improve the efficiency of policy evaluation in reinforcement learning settings. Monte-Carlo sampling is commonly used for on-policy evaluation, but the paper eloquently shows that non-iid off-policy sampling can be a better strategy. The reviewers agreed that the technique was novel, interesting and promising to share with the community. Some reviewers questioned the theoretical analysis that the authors clarified during the feedback phase. To strengthen the paper, the authors should experiment with ROS in richer problem domains (e.g., with function approximators) to identify realistic regimes where ROS helps empirically. For instance, if ROS is made easy enough to use with DeepRL policies in complex domains, it could be a very significant contribution to on-policy evaluations that are currently plagued by high variance from Monte-Carlo sampling.
train
[ "jdcuJ2grZRc", "DQkjRey7Jd", "aRUvlDUf1kK", "Fiq-Q9zRkaQ", "fSYZgQ_xoZ5", "xZSl2kCuYE", "aJ1OjBc0cH", "2dQwowrDcxQ", "dclYYmvVwsn", "7F99C-aazkJ", "P9j-kHiBKBm", "MAhVDoVyXE5", "8Ifj9HhnNG", "lhxWQ5g7u-k", "rtdjn_jVBS4" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for taking the time to read our response and seek clarification.\n\n> It would be helpful if the steps/episode for each domain were mentioned here, rather than in the supplemental material.\n\nWe agree and will update the paper with these details.\n\n> Could you clarify what you mean by Theorem 2 charac...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "DQkjRey7Jd", "aJ1OjBc0cH", "Fiq-Q9zRkaQ", "fSYZgQ_xoZ5", "xZSl2kCuYE", "2dQwowrDcxQ", "rtdjn_jVBS4", "lhxWQ5g7u-k", "8Ifj9HhnNG", "MAhVDoVyXE5", "nips_2022_eK8Z4Ydt2_b", "nips_2022_eK8Z4Ydt2_b", "nips_2022_eK8Z4Ydt2_b", "nips_2022_eK8Z4Ydt2_b", "nips_2022_eK8Z4Ydt2_b" ]
nips_2022_dsxuTEf01d5
Dual-Curriculum Contrastive Multi-Instance Learning for Cancer Prognosis Analysis with Whole Slide Images
Multi-instance learning (MIL) has advanced cancer prognosis analysis with whole slide images (WSIs). However, current MIL methods for WSI analysis still confront unique challenges. Previous methods typically generate instance representations via a pre-trained model or a model trained on instances with bag-level annotations, which, however, may not generalize well to the downstream task due to the introduction of excessive label noise and the lack of fine-grained information across multi-magnification WSIs. Additionally, existing methods generally aggregate instance representations into bag representations for prognosis prediction without considering intra-bag redundancy and inter-bag discrimination. To address these issues, we propose a dual-curriculum contrastive MIL method for cancer prognosis analysis with WSIs. The proposed method consists of two curriculums, i.e., saliency-guided weakly-supervised instance encoding with cross-scale tiles and contrastive-enhanced soft-bag prognosis inference. Extensive experiments on three public datasets demonstrate that our method outperforms state-of-the-art methods in this field. The code is available at https://github.com/YuZhang-SMU/Cancer-Prognosis-Analysis/tree/main/DC_MIL%20Code.
Accept
This is a clear accept. Congratulations!
val
[ "L7p8ILmyjf", "voDEdo2SNzl", "e7hZBLeobM9", "mVkaFdpKPX5", "VzOCkkPhc9u", "oBP1y8f3NKY", "P5FV9_u8mf_", "40DOkVEJ9I", "CiMndjaRvxW", "jhit0ZMfWhU", "1iJzr7IRMqV", "7XHUAYIhiqG", "a3yUjirZ_A", "3r8quKT6IO", "zLeV1zwq5M", "TgeDjxWIuEp", "XD2xxCFl8sz", "enaIGDtSht" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for spending a huge amount of time on our manuscript. And we greatly appreciate your insightful comments to help improve the manuscript.", " We greatly appreciate your insightful comments to help improve the manuscript. And thanks again for spending a huge amount of time on our manuscript.",...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "TgeDjxWIuEp", "e7hZBLeobM9", "1iJzr7IRMqV", "XD2xxCFl8sz", "enaIGDtSht", "enaIGDtSht", "enaIGDtSht", "enaIGDtSht", "enaIGDtSht", "enaIGDtSht", "XD2xxCFl8sz", "XD2xxCFl8sz", "XD2xxCFl8sz", "TgeDjxWIuEp", "TgeDjxWIuEp", "nips_2022_dsxuTEf01d5", "nips_2022_dsxuTEf01d5", "nips_2022_ds...
nips_2022_l1WlfNaRkKw
A Theory of PAC Learnability under Transformation Invariances
Transformation invariances are present in many real-world problems. For example, image classification is usually invariant to rotation and color transformation: a rotated car in a different color is still identified as a car. Data augmentation, which adds the transformed data to the training set and trains a model on the augmented data, is one commonly used technique to build these invariances into the learning process. However, it is unclear how data augmentation performs theoretically and what the optimal algorithm is in the presence of transformation invariances. In this paper, we study PAC learnability under transformation invariances in three settings according to different levels of realizability: (i) A hypothesis fits the augmented data; (ii) A hypothesis fits only the original data and the transformed data lying in the support of the data distribution; (iii) Agnostic case. One interesting observation is that distinguishing between the original data and the transformed data is necessary to achieve optimal accuracy in settings (ii) and (iii), which implies that any algorithm not differentiating between the original and transformed data (including data augmentation) is not optimal. Furthermore, algorithms of this type can even ``harm'' the accuracy. In setting (i), although it is unnecessary to distinguish between the two data sets, data augmentation still does not perform optimally. Due to such a difference, we propose two combinatorial measures characterizing the optimal sample complexity in setting (i) and in settings (ii) and (iii), and provide the optimal algorithms.
Accept
All reviewers are positive. Clear accept.
train
[ "qHxAOaSzN1", "IZ8dEOzYOFB", "tusoi_4fh9s", "wZbNb7tArcg", "VfllXxWfpbF", "j1NRNMgqX9n", "YpXpsLlFUz", "cxbZ2R7z2YE" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " My concerns are addressed. I have increased my score by one. Looking forward to seeing the revised version.", " Oh right, PAC learning from augmented data may fail without transformation invariance assumption.\n\nI am satisfied with your answers to my questions. I will update my score.\n\nThank you for the deta...
[ -1, -1, -1, -1, -1, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "VfllXxWfpbF", "tusoi_4fh9s", "cxbZ2R7z2YE", "YpXpsLlFUz", "j1NRNMgqX9n", "nips_2022_l1WlfNaRkKw", "nips_2022_l1WlfNaRkKw", "nips_2022_l1WlfNaRkKw" ]
nips_2022_R2XFXfK0SVe
Private Graph All-Pairwise-Shortest-Path Distance Release with Improved Error Rate
Releasing all pairwise shortest path (APSP) distances between vertices of a general graph under weight Differential Privacy (DP) is known to be a challenging task. In previous work, to achieve DP with some fixed budget, with high probability the maximal absolute error among all published pairwise distances is roughly O(n), where n is the number of nodes. It was shown that this error can be reduced for some special graphs, which, however, is hard to achieve for general graphs. Therefore, whether the approximation error can be reduced to sublinear was posed as an interesting open problem. In this paper, we break the linear barrier on the distance approximation error of previous results by proposing an algorithm that privately releases a constructed synthetic graph. Computing all pairwise distances on the constructed graph introduces only O(n^{1/2}) error in answering all pairwise shortest path distances for a fixed privacy parameter. Our method is based on a novel graph diameter (link length) augmentation via constructing ``shortcuts'' for the paths. By adding a set of shortcut edges to the original graph, we show that any node pair has a shortest path with link length O(n^{1/2}). Then, by adding noise with some positive mean to the edge weights, the new graph is differentially private and can be published to answer all pairwise shortest path distances with O(n^{1/2}) approximation error using standard APSP computation. Numerical examples are also provided. Additionally, we consider graphs with small feedback vertex set number. A feedback vertex set (FVS) of a graph is a set of vertices whose removal leaves a graph without cycles, and the feedback vertex set number of a graph, k, is the size of a smallest feedback vertex set. We propose a DP algorithm with error rate O(k), which improves on the error for general graphs provided k=o(n^{1/2}).
Accept
The paper makes an important contribution to the DP graph optimization literature. Most reviewers found the paper well written, with no serious doubts regarding correctness. We hope the authors incorporate the comments from the reviewers in their final revision to improve the presentation.
train
[ "wA2gQ7qFiov", "Vzv6q1uc0Sy", "T-1vYIKQPDa", "KxyMZyh2CjE", "8j1fXq2Cggy", "ZfdFtCtBsDK", "Ebg4sevH-Fe", "Ffei-zH2daW", "b0tX4Yo1AxJ", "Zx68A39A_U2", "XxZ8fegmvMW", "A79r-Wt8Sak", "B0qtiJZd09G", "ALho227tbkR" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThank you for the suggestion. Very glad to hear that our rebuttal is helpful. Certainly, we will be happy to move the content from Appendix to the main paper as you kindly suggested. \n\nRegards,\n\nAuthors", " Thank you very much for your response, and for describing some examples for practi...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "Vzv6q1uc0Sy", "b0tX4Yo1AxJ", "KxyMZyh2CjE", "Zx68A39A_U2", "ZfdFtCtBsDK", "Ebg4sevH-Fe", "ALho227tbkR", "B0qtiJZd09G", "A79r-Wt8Sak", "XxZ8fegmvMW", "nips_2022_R2XFXfK0SVe", "nips_2022_R2XFXfK0SVe", "nips_2022_R2XFXfK0SVe", "nips_2022_R2XFXfK0SVe" ]
nips_2022_z64kN1h1-rR
Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters
Motivated by the success of ensembles for uncertainty estimation in supervised learning, we take a renewed look at how ensembles of $Q$-functions can be leveraged as the primary source of pessimism for offline reinforcement learning (RL). We begin by identifying a critical flaw in a popular algorithmic choice used by many ensemble-based RL algorithms, namely the use of shared pessimistic target values when computing each ensemble member's Bellman error. Through theoretical analyses and construction of examples in toy MDPs, we demonstrate that shared pessimistic targets can paradoxically lead to value estimates that are effectively optimistic. Given this result, we propose MSG, a practical offline RL algorithm that trains an ensemble of $Q$-functions with independently computed targets based on completely separate networks, and optimizes a policy with respect to the lower confidence bound of predicted action values. Our experiments on the popular D4RL and RL Unplugged offline RL benchmarks demonstrate that on challenging domains such as antmazes, MSG with deep ensembles surpasses highly well-tuned state-of-the-art methods by a wide margin. Additionally, through ablations on benchmarks domains, we verify the critical significance of using independently trained $Q$-functions, and study the role of ensemble size. Finally, as using separate networks per ensemble member can become computationally costly with larger neural network architectures, we investigate whether efficient ensemble approximations developed for supervised learning can be similarly effective, and demonstrate that they do not match the performance and robustness of MSG with separate networks, highlighting the need for new efforts into efficient uncertainty estimation directed at RL.
Accept
The paper identifies a common flaw in pessimistic algorithms related to the use of shared targets, and proposes an alternative based on independent targets that mitigates the overly-optimistic estimates. The rebuttal has addressed a number of concerns raised by the reviewers, and in particular, the negative reviewer qbbN acknowledged that > ... the proposed idea here would make an existing algorithm that uses e.g. double Q networks (which is quite common) and also other main pessimism (like value penalty or closeness to behavior policy) to perform better. Thus, the insight here can be quite useful in practice. That said, the reviewer is still concerned about the framing of the work > the paper does not provide sufficient evidence (theoretically or empirically) that the proposed pessimistic estimate based on Independent Training "alone" is sufficient to design a SoTA offline RL algorithm [which the paper claims to]... I think that the paper needs to provide stronger evidences or changes the framing. Given the strong support from other reviewers, the AC is leaning towards acceptance, but strongly recommends that the authors change the framing of the paper to honestly reflect the contributions of the work.
train
[ "kkahwGPNO6", "PfUnMXEC7Ln", "xe8aFmsVfOZ", "fFZkgyogUzw", "JMWcL9rVtjb", "-7iuZwdKgr", "gYaq--j1wn", "QaQMAWdtB29i", "lCx0WRhq4Q", "rmW7MALlIn1", "2HnlZclgCOq", "wFmK_g_zBM7", "lTUpc7B3Hgf", "UlP1Hzi4r60", "AMNsJ3ddHe7", "IGHaWAKM0An", "oYIO1OIzbb", "hH7DWQUSSTQ" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " - Q1) We completely agree we your statement. What we wanted point at is that in the NTK setting of Theorem 3.1, the outputs of the value functions are normally distributed. Since we did not make additional assumptions, such as limiting the bounds of the function class, an $\\inf$ formulation would have resulted i...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "xe8aFmsVfOZ", "nips_2022_z64kN1h1-rR", "fFZkgyogUzw", "JMWcL9rVtjb", "-7iuZwdKgr", "hH7DWQUSSTQ", "QaQMAWdtB29i", "oYIO1OIzbb", "rmW7MALlIn1", "2HnlZclgCOq", "IGHaWAKM0An", "lTUpc7B3Hgf", "UlP1Hzi4r60", "AMNsJ3ddHe7", "nips_2022_z64kN1h1-rR", "nips_2022_z64kN1h1-rR", "nips_2022_z64k...
nips_2022_kgT6D7Z4Xv9
Path Independent Equilibrium Models Can Better Exploit Test-Time Computation
Designing networks capable of attaining better performance with an increased inference budget is important to facilitate generalization to harder problem instances. Recent efforts have shown promising results in this direction by making use of depth-wise recurrent networks. In this work, we reproduce the performance of the prior art using a broader class of architectures called equilibrium models, and find that stronger generalization performance on harder examples (which require more iterations of inference to get correct) strongly correlates with the path independence of the system—its ability to converge to the same attractor (or limit cycle) regardless of initialization, given enough computation. Experimental interventions made to promote path independence result in improved generalization on harder (and thus more compute-hungry) problem instances, while those that penalize it degrade this ability. Path independence analyses are also useful on a per-example basis: for equilibrium models that have good in-distribution performance, path independence on out-of-distribution samples strongly correlates with accuracy. Thus, considering equilibrium models and path independence jointly leads to a valuable new viewpoint under which we can study the generalization performance of these networks on hard problem instances.
Accept
This paper finds that there is a strong correlation between generalization to harder examples and path independence (PI), for equilibrium models. The paper proposes a simple-to-compute metric called the Fixed Point Alignment (FPA) score, to measure the level of PI. During the rebuttal/revision phase, the paper was significantly improved, including clarifying Algorithm 1 and more thorough experimentation. Although some skepticism remains regarding the FPA score and the experiments, the reviewers agree that this paper has made clear and interesting contributions. Therefore it's worth having this paper presented at the conference.
train
[ "UnrKLo1FtBJ", "YXcIAwunTMH", "D4biWFPoRKJ", "OFZBSNCa3MK", "3juR_Fq2yN", "48Xh2cSHYVO", "GvIbNmJhnKP", "NUj3dDp5sHq", "AqSb6kBcyY-", "oi0La8AdINfO", "K5YGxvrO_cL", "Acp6KG9vPK2", "0P2_kQQXw-a", "SPFY3jqp2r", "BXV0dqdrnn-", "XHusBbc688Kv", "O0jJ9mo37s7", "er9namd6D8W", "VqG__tPmO...
[ "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_...
[ " We thank the reviewers for following up. \n\nWe’ve further improved our submission by adding new experiments, doing sanity checks and improving the writing. Most notable additions are 1) **new results on the Matrix Inversion task that confirm our findings** (this task is proposed by Du et. al. [1] which is releas...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3, 3 ]
[ "oi0La8AdINfO", "XHusBbc688Kv", "OFZBSNCa3MK", "Acp6KG9vPK2", "GvIbNmJhnKP", "NUj3dDp5sHq", "er9namd6D8W", "O0jJ9mo37s7", "SPFY3jqp2r", "nips_2022_kgT6D7Z4Xv9", "acoQhVPJODk", "acoQhVPJODk", "cYf9xCaEnSD", "cYf9xCaEnSD", "VqG__tPmOW5", "VqG__tPmOW5", "AE565YlhMdj", "AE565YlhMdj", ...
nips_2022_Qi4vSM7sqZq
Surprising Instabilities in Training Deep Networks and a Theoretical Analysis
We empirically demonstrate numerical instabilities in training standard deep networks with SGD. Specifically, we show numerical error (on the order of the smallest floating point bit) induced by floating point arithmetic in training deep nets can be amplified significantly and result in significant test accuracy variance, comparable to the test accuracy variance due to stochasticity in SGD. We show how this can likely be traced to instabilities of the optimization dynamics that are localized over iterations and regions of the weight tensor space. We do this by presenting a theoretical framework using numerical analysis of partial differential equations (PDE), and analyzing the gradient descent PDE of a one-layer convolutional neural network, which is sufficient to illustrate these instabilities. We show that it is stable only under certain conditions on the learning rate and weight decay. We reproduce the localized instabilities in the PDE for the one-layer network, which arise when the conditions are violated.
Accept
The paper studies the instabilities of neural networks. Most of the reviewers recommend acceptance. I also think the paper seems interesting.
val
[ "TQER2RSkOU", "NW49MdVxFQi", "6F6htLojflG", "FoXA5BlovY", "WeolBGdwDL", "HtfjbXibcLk", "WV42qOUib7X", "5fwxASlfdUm", "IRw9oKPg_ID", "hysmqGssCz6", "tsYwKqrCNS", "gS6WUWJYn4" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Two of my most concerning issues are well responded to. Hence, I flip my attitude to positive. \n\nHowever, for point 3, I recommend more DNN-related results rather than the numerical simulation of PDE. I do not quite capture the relation between the response and my suggestion.", " We continued to reflect on th...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "WV42qOUib7X", "WV42qOUib7X", "FoXA5BlovY", "gS6WUWJYn4", "tsYwKqrCNS", "hysmqGssCz6", "5fwxASlfdUm", "IRw9oKPg_ID", "nips_2022_Qi4vSM7sqZq", "nips_2022_Qi4vSM7sqZq", "nips_2022_Qi4vSM7sqZq", "nips_2022_Qi4vSM7sqZq" ]
nips_2022_eNlaFpjpZf
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions
Off-policy evaluation often refers to two related tasks: estimating the expected return of a policy and estimating its value function (or other functions of interest, such as density ratios). While recent works on marginalized importance sampling (MIS) show that the former can enjoy provable guarantees under realizable function approximation, the latter is only known to be feasible under much stronger assumptions such as prohibitively expressive discriminators. In this work, we provide guarantees for off-policy function estimation under only realizability, by imposing proper regularization on the MIS objectives. Compared to commonly used regularization in MIS, our regularizer is much more flexible and can account for an arbitrary user-specified distribution, under which the learned function will be close to the groundtruth. We provide exact characterization of the optimal dual solution that needs to be realized by the discriminator class, which determines the data-coverage assumption in the case of value-function learning. As another surprising observation, the regularizer can be altered to relax the data-coverage requirement, and completely eliminate it in the ideal case with strong side information.
Accept
The authors provide slow rates for Q-function estimation based on minimax objectives. The contribution is technically solid but seems somewhat incremental, and even though the authors provided responses to all major reviewer concerns, reviewers remain concerned about the applicability of the result and its incrementality. Despite this, it seems a solid contribution to the RL literature.
train
[ "__ksA6bFIAB", "28cl0cvUPwk", "XqxmnMe9hez", "fr1OKhYkfon", "Xre8tlDLQT", "BZiGvN2b_WY", "WR81SnR89AN", "zZDlzWuJMIh", "gi7MbxQYkeo", "WjRQ5aTTz3tZ", "AtypjWAcwjD", "lyHGsZyq8F2F", "E_kKjFKClKs", "P_o2h70kvqO", "zWa-FChRXfe", "rdjm0IlQuAi" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Understood. Thanks again for carefully considering our responses. Have a good night (or morning/whatever time in your time zone)!\n\nbest,\n\nAuthors", " You're very welcome! I agree, open dialogue is good, and I am frequently frustrated with poor review processes with opaque judgements and no engagement. Here'...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "28cl0cvUPwk", "XqxmnMe9hez", "fr1OKhYkfon", "gi7MbxQYkeo", "BZiGvN2b_WY", "rdjm0IlQuAi", "zZDlzWuJMIh", "P_o2h70kvqO", "WjRQ5aTTz3tZ", "zWa-FChRXfe", "E_kKjFKClKs", "E_kKjFKClKs", "nips_2022_eNlaFpjpZf", "nips_2022_eNlaFpjpZf", "nips_2022_eNlaFpjpZf", "nips_2022_eNlaFpjpZf" ]
nips_2022_LSKlp_aceOC
Merging Models with Fisher-Weighted Averaging
Averaging the parameters of models that have the same architecture and initialization can provide a means of combining their respective capabilities. In this paper, we take the perspective that this "merging" operation can be seen as choosing parameters that approximately maximize the joint likelihood of the posteriors of the models' parameters. Computing a simple average of the models' parameters therefore corresponds to making an isotropic Gaussian approximation to their posteriors. We develop an alternative merging procedure based on the Laplace approximation where we approximate each model's posterior as a Gaussian distribution whose precision matrix corresponds to its Fisher information. We first show that our "Fisher merging" technique provides a performance boost in settings where simple parameter averaging is currently used -- specifically, robust fine-tuning and model ensembling. Then, we compare merging to standard gradient-based transfer learning and demonstrate that merging enables a fundamentally different method for transferring capabilities across models. Specifically, we show that Fisher merging is competitive with gradient-based transfer learning approaches (while being significantly cheaper) in intermediate-task training and domain-adaptive pre-training. We also show that our merging procedure makes it possible to combine models in previously unexplored ways. We release our code to facilitate future research into methods for merging models.
Accept
Most reviewers agree that the paper proposes some interesting novel ideas and that its strengths overcome some of its weaknesses (e.g., lack of theoretical guarantees). As such, we think that the paper is worth publishing and expect the authors to improve the manuscript in accordance with some of the reviewers' comments.
train
[ "ieQE9BgzbDU", "PL1sAbQ79SV", "DnrZ2nwc3f", "GHwF4EdDtJ7", "UUab4cdNv22", "n8ESEb08h85", "0nisTl0EfP", "Ha-f4ELrxB2", "DDrJJbBGLLr", "fNQbWIuhnE", "6u6xrhW0hJ", "EarQILFwDze", "xH74EfETOf3" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for such a good response to my questions. To be honest, I liked very much the paper while reading it, but considering the points indicated now in the rebuttal, I perceive that it is even better. Particularly, I remark that if the computation of the Fisher matrix is that cheap, no data is rev...
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4, 5 ]
[ "n8ESEb08h85", "GHwF4EdDtJ7", "nips_2022_LSKlp_aceOC", "xH74EfETOf3", "EarQILFwDze", "6u6xrhW0hJ", "fNQbWIuhnE", "DDrJJbBGLLr", "nips_2022_LSKlp_aceOC", "nips_2022_LSKlp_aceOC", "nips_2022_LSKlp_aceOC", "nips_2022_LSKlp_aceOC", "nips_2022_LSKlp_aceOC" ]
nips_2022_ttC9p-CtYT
Continuous Deep Q-Learning in Optimal Control Problems: Normalized Advantage Functions Analysis
One of the most effective continuous deep reinforcement learning algorithms is normalized advantage functions (NAF). The main idea of NAF is to approximate the Q-function by functions that are quadratic with respect to the action variable. This idea makes it possible to apply the algorithm to continuous reinforcement learning problems, but on the other hand, it raises the question of which classes of problems admit this approximation. The presented paper describes one such class. We consider reinforcement learning problems obtained by the discretization of certain optimal control problems. Based on the idea of NAF, we present a new family of quadratic functions and prove its suitable approximation properties. Taking these properties into account, we provide several ways to improve NAF. The experimental results confirm the efficiency of our improvements.
Accept
I am happy to recommend accepting this paper. I would argue that the stated contribution of this paper - an analysis of when NAF is a good approximation - is somewhat minor. But the analysis in the paper has a nice additional benefit in that it becomes possible to include domain-specific knowledge (e.g., on the reward function, which I agree is often known) in a straightforward and effective way into the algorithm. Furthermore, the paper is fairly well executed. I would encourage the authors to have another careful look at all the comments by all the reviewers, which could make the paper better. I would particularly agree that an analysis (empirical or theoretical or both) of what happens when the assumptions are violated would make the paper better.
train
[ "vSMIOBPm0mY", "HUyD5K9glPD", "XcgVCYslLeY", "NGso8mffcmu", "mE9e-D_No9y", "nViftNiKGPs", "bcL7lfV1MLW", "YJN9u-OMXck", "I7GV-Lue19O", "GGxi-uW1uOQ", "K6NAHW6koGA", "LD6W-OdcmFu" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much. We are glad that we could clarify these details for you and make the paper better.", " Thank you for your clarification with respect to my concerns on the boundedness of the controls. It is clear that this assumption is necessary to prove the theorems in the paper. Moreover, you also consid...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "HUyD5K9glPD", "YJN9u-OMXck", "mE9e-D_No9y", "nViftNiKGPs", "bcL7lfV1MLW", "I7GV-Lue19O", "LD6W-OdcmFu", "K6NAHW6koGA", "GGxi-uW1uOQ", "nips_2022_ttC9p-CtYT", "nips_2022_ttC9p-CtYT", "nips_2022_ttC9p-CtYT" ]
nips_2022_AUJT3rj2F5U
Distributed Distributionally Robust Optimization with Non-Convex Objectives
Distributionally Robust Optimization (DRO), which aims to find an optimal decision that minimizes the worst case cost over the ambiguity set of probability distributions, has been applied in diverse applications, e.g., network behavior analysis, risk management, etc. However, existing DRO techniques face three key challenges: 1) how to deal with asynchronous updating in a distributed environment; 2) how to leverage the prior distribution effectively; 3) how to properly adjust the degree of robustness according to different scenarios. To this end, we propose an asynchronous distributed algorithm, named Asynchronous Single-looP alternatIve gRadient projEction (ASPIRE) algorithm with the itErative Active SEt method (EASE) to tackle the distributed distributionally robust optimization (DDRO) problem. Furthermore, a new uncertainty set, i.e., constrained $D$-norm uncertainty set, is developed to effectively leverage the prior distribution and flexibly control the degree of robustness. Finally, our theoretical analysis elucidates that the proposed algorithm is guaranteed to converge, and the iteration complexity is also analyzed. Extensive empirical studies on real-world datasets demonstrate that the proposed method can not only achieve fast convergence and remain robust against data heterogeneity and malicious attacks, but also trade off robustness against performance.
Accept
Reviewers generally recommend (weak) acceptance; however, there are still quite a few issues that should be addressed. I concur.
train
[ "m2CPZdzeqsZ", "32zOR0X9PCi", "oU5DZNd0XYz", "j49Eg0W4TQX", "J7ZFnXoMrrd", "3UdW40uLrKZ", "lwAOE093qkp", "CPSEN3mTjQB", "4bHaVFmWctj", "RaIL4xTM4o", "NarFrea4qJ", "E0oPbZ0OZe-", "d0qrDCCOQ60", "ZfBa06hCWAK", "fL-qjfk8Ecp", "AY4LdSoOQoj" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the constructive suggestions from all reviewers. Since the author-reviewer discussion will end very soon, we summarize the main concerns from reviewers and our corresponding replies as follows to facilitate further discussions.\n\n$\\textbf{(Reviewer X3TP)}$\n\n$\\textbf{(Q)}$ Impact of ${\\bf{1}^ \\to...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "nips_2022_AUJT3rj2F5U", "fL-qjfk8Ecp", "ZfBa06hCWAK", "J7ZFnXoMrrd", "3UdW40uLrKZ", "lwAOE093qkp", "CPSEN3mTjQB", "AY4LdSoOQoj", "AY4LdSoOQoj", "AY4LdSoOQoj", "fL-qjfk8Ecp", "ZfBa06hCWAK", "ZfBa06hCWAK", "nips_2022_AUJT3rj2F5U", "nips_2022_AUJT3rj2F5U", "nips_2022_AUJT3rj2F5U" ]
nips_2022_GwwC16ECrM5
Energy-Based Contrastive Learning of Visual Representations
Contrastive learning is a method of learning visual representations by training Deep Neural Networks (DNNs) to increase the similarity between representations of positive pairs (transformations of the same image) and reduce the similarity between representations of negative pairs (transformations of different images). Here we explore Energy-Based Contrastive Learning (EBCLR) that leverages the power of generative learning by combining contrastive learning with Energy-Based Models (EBMs). EBCLR can be theoretically interpreted as learning the joint distribution of positive pairs, and it shows promising results on small and medium-scale datasets such as MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100. Specifically, we find EBCLR demonstrates from $\times 4$ up to $\times 20$ acceleration compared to SimCLR and MoCo v2 in terms of training epochs. Furthermore, in contrast to SimCLR, we observe EBCLR achieves nearly the same performance with $254$ negative pairs (batch size $128$) and $30$ negative pairs (batch size $16$) per positive pair, demonstrating the robustness of EBCLR to small numbers of negative pairs. Hence, EBCLR provides a novel avenue for improving contrastive learning methods that usually require large datasets with a significant number of negative pairs per iteration to achieve reasonable performance on downstream tasks. Code: https://github.com/1202kbs/EBCLR
Accept
The paper connects contrastive learning and energy-based models and proposes a new variant of contrastive learning based on SGLD. All of the reviewers believe the paper is a good fit for NeurIPS, and I recommend acceptance. That said, as reviewers point out, results on ImageNet are also expected.
train
[ "d56U2EjxXX5", "s7jwMq0GVZh", "dkhDsW2yWR", "Dm7fsH48G-6", "fZlpohQJpYM", "E4lVvSTf_iS", "-no5IO6IJcb", "rphtL7FE42W", "E7a0fJSB6Io", "nSOZYcSIvOi", "w7QxoDTXgCu", "tjzqa8oYb9", "9ROh2wF-N-6", "l5Y5VAaUZFm", "XX_71j6PEq1", "4jQYSDriSTG", "5qpQRFR16S", "Ha3n_bIXJDI", "0--umxdz6ti"...
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer",...
[ " Thank you for the reconsideration of our paper and raising the score! Your comments have helped us improve the presentation of our paper and better relate our work to previous studies. Regarding the minor concerns, we have provided additional feedback in the reply to your comment “The connection between contrasti...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "dkhDsW2yWR", "fZlpohQJpYM", "E7a0fJSB6Io", "4jQYSDriSTG", "XX_71j6PEq1", "-no5IO6IJcb", "l5Y5VAaUZFm", "5KIsqJEfeBv", "UoZVuUdOYt", "Wj9mt30oWCd", "5KIsqJEfeBv", "5KIsqJEfeBv", "5KIsqJEfeBv", "5KIsqJEfeBv", "UoZVuUdOYt", "UoZVuUdOYt", "UoZVuUdOYt", "Wj9mt30oWCd", "Wj9mt30oWCd", ...
nips_2022_65eqtvEShR8
On the Statistical Efficiency of Reward-Free Exploration in Non-Linear RL
We study reward-free reinforcement learning (RL) under general non-linear function approximation, and establish sample efficiency and hardness results under various standard structural assumptions. On the positive side, we propose the RFOLIVE (Reward-Free OLIVE) algorithm for sample-efficient reward-free exploration under minimal structural assumptions, which covers the previously studied settings of linear MDPs (Jin et al., 2020b), linear completeness (Zanette et al., 2020b) and low-rank MDPs with unknown representation (Modi et al., 2021). Our analyses indicate that the explorability or reachability assumptions, previously made for the latter two settings, are not necessary statistically for reward-free exploration. On the negative side, we provide a statistical hardness result for both reward-free and reward-aware exploration under linear completeness assumptions when the underlying features are unknown, showing an exponential separation between low-rank and linear completeness settings.
Accept
Despite a few concerns about the novelty of the paper (mostly from an algorithmic perspective), I think the investigation of the minimal structural assumptions for reward-free RL is interesting for the community. This work provides a first step in the direction of obtaining a better understanding about reward-free RL outside the tabular setting.
train
[ "YkWAozQbKNa", "bRCUbKtC55a", "MM0xfeCGqV9", "cnJoBcIEu9a", "7SXG4Ls7iVk", "p53LZb-N5y6", "UZSyredHJMB", "QeHCQVthsMp1", "5_nFkgSeugR", "CwiIkk3s7Y7", "T9KTLWrIMd", "h0oS0ljRwJc", "sdnO5EzMWXY", "tWx3_HmHpKc" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the detailed response. It clarified all my doubts. I will keep my initial score.", " Dear Area Chair Tbh5 and all reviewers,\n\nWe would like to thank the AC for the reminder and thank you all again for your time and suggestions!\n\nAs suggested by the AC, we have prepared a rebuttal rev...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "p53LZb-N5y6", "cnJoBcIEu9a", "QeHCQVthsMp1", "nips_2022_65eqtvEShR8", "tWx3_HmHpKc", "sdnO5EzMWXY", "h0oS0ljRwJc", "T9KTLWrIMd", "CwiIkk3s7Y7", "nips_2022_65eqtvEShR8", "nips_2022_65eqtvEShR8", "nips_2022_65eqtvEShR8", "nips_2022_65eqtvEShR8", "nips_2022_65eqtvEShR8" ]
nips_2022_2ge7_pORL_n
BiMLP: Compact Binary Architectures for Vision Multi-Layer Perceptrons
This paper studies the problem of designing compact binary architectures for vision multi-layer perceptrons (MLPs). We provide extensive analysis on the difficulty of binarizing vision MLPs and find that previous binarization methods perform poorly due to the limited capacity of binary MLPs. In contrast with traditional CNNs that utilize convolutional operations with large kernel sizes, fully-connected (FC) layers in MLPs can be treated as convolutional layers with kernel size $1\times1$. Thus, the representation ability of the FC layers will be limited when binarized, which places restrictions on the capability of spatial mixing and channel mixing on the intermediate features. To this end, we propose to improve the performance of the binary MLP (BiMLP) model by enriching the representation ability of binary FC layers. We design a novel binary block that contains multiple branches to merge a series of outputs from the same stage, and also a universal shortcut connection that encourages the information flow from the previous stage. The downsampling layers are also carefully designed to reduce the computational complexity while maintaining the classification performance. Experimental results on the benchmark dataset ImageNet-1k demonstrate the effectiveness of the proposed BiMLP models, which achieve state-of-the-art accuracy compared to prior binary CNNs. The MindSpore code is available at \url{https://gitee.com/mindspore/models/tree/master/research/cv/BiMLP}.
Accept
Four reviewers provided feedback on this paper. The authors provided a response to the reviews and I appreciate the authors' detailed comments and clarifications, specifically addressing each reviewer's comments/questions. The authors did not upload a revised version of the paper. After the two discussion periods, three of the four reviewers suggest to accept the paper (with varying scores) while one reviewer (orUW) rated the paper as "borderline reject", so not strongly opposing acceptance. (Also, reviewer orUW chose to not engage in the discussion, nor did they acknowledge the authors' response, so their opinion should carry slightly less weight in the overall decision.) After considering the reviewers' and authors' comments, I believe that the paper should be accepted to NeurIPS. Weaknesses include: * The approach is only validated on one dataset. * Concerns regarding missing related work. (Partially addressed in the response.) * It would be great to see also concrete runtime or throughput numbers, not only OPs counts. Strengths include: * Novel approach to binary networks for image classification. Interesting discussion of binary networks in the context of CNN/MLP approaches. * New SOTA on ImageNet for binary networks (using fewer OPs). * Experiments support claims. I expect and hope that the authors will thoroughly address the reviewers' comments in the camera ready version of the paper. Minor points (not affecting this decision, but potentially useful to authors when preparing the final revision): * "sign function is non-differentiable almost everywhere" - it seems to me that the sign-function is actually differentiable almost everywhere, but has a zero gradient, maybe that is what is meant? * typo: "Dowmsampling" (291)
train
[ "j0xSlomDUSe", "KBmpimZpkQD", "O4eCZ0ts34E", "7YIm8EUxdoe", "o-GXsbaz4g7V", "On4nehmBRL9", "a8fm-KsTj4Y", "b1XM6KmTbqd", "PDLVUcRKcB3", "lMqMcmUp-1O", "Smfma6uKX7F", "PqJqC5LZPj0", "-oEEWQ_UcZ5", "6pUfagLgEs9" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response. I don't have more questions and will maintain my rating.", " Thanks authors for the detailed response. It addressed all my concerns. I will maintain my original score.", " Dear Reviewer, \n\ncould you please indicate that you have considered the authors' rebuttal? (E.g. by replying t...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "lMqMcmUp-1O", "PDLVUcRKcB3", "Smfma6uKX7F", "b1XM6KmTbqd", "Smfma6uKX7F", "Smfma6uKX7F", "Smfma6uKX7F", "6pUfagLgEs9", "-oEEWQ_UcZ5", "PqJqC5LZPj0", "nips_2022_2ge7_pORL_n", "nips_2022_2ge7_pORL_n", "nips_2022_2ge7_pORL_n", "nips_2022_2ge7_pORL_n" ]
nips_2022_71ICQGB92Yz
Multi-block Min-max Bilevel Optimization with Applications in Multi-task Deep AUC Maximization
In this paper, we study multi-block min-max bilevel optimization problems, where the upper level is a non-convex strongly-concave minimax objective and the lower level is a strongly convex objective, and there are multiple blocks of dual variables and lower level problems. Due to the intertwined multi-block min-max bilevel structure, the computational cost at each iteration could be prohibitively high, especially with a large number of blocks. To tackle this challenge, we present two single-loop randomized stochastic algorithms, which require updates for only a constant number of blocks at each iteration. Under some mild assumptions on the problem, we establish their sample complexity of $\mathcal{O}(1/\epsilon^4)$ for finding an $\epsilon$-stationary point. This matches the optimal complexity order for solving stochastic nonconvex optimization under a general unbiased stochastic oracle model. Moreover, we provide two applications of the proposed method in multi-task deep AUC (area under ROC curve) maximization. Experimental results validate our theory and demonstrate the effectiveness of our method.
Accept
This paper proposes a novel algorithm for a class of minimax problems. The iteration complexity is established. The proposed algorithm is applied to AUC maximization -- a very important problem in machine learning. Considering the contributions in both theory and practice, this is a solid contribution to the machine learning community.
train
[ "SohkJIBDJPV", "CUNpajw-QUi", "CbLS2UMUJPv", "4VWsZcj2EED", "AwCxmmknBVF", "mx8dQktx_C", "RdWq9iC3Da", "IlnjlkzW7ND", "7zptasszJIs", "fJWEahtPTym", "gL_wlIO6yfFu", "R0MoMUePSXT", "2Z15_gdKz_x", "p9TjLrWAtw", "AhMKuxi13nu", "51zXsw8COEb", "lfuGSuSzN2a", "qaSKO2hGBCp", "yVKfsWG-AYr...
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " The Lipschitz continuous assumption mentioned in our response earlier for deriving a faster rate refers to the Lipschitz continuous condition of the stochastic gradient, stochastic Jacobian and stochastic Hessian. We give an example of Yang et al, e.g., $|\\nabla F(x, y, \\xi) - \\nabla F(x', y', \\xi)|\\leq L\\|...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 5 ]
[ "4VWsZcj2EED", "AwCxmmknBVF", "mx8dQktx_C", "2Z15_gdKz_x", "p9TjLrWAtw", "R0MoMUePSXT", "IlnjlkzW7ND", "7zptasszJIs", "fJWEahtPTym", "lfuGSuSzN2a", "yVKfsWG-AYr", "yVKfsWG-AYr", "yVKfsWG-AYr", "yVKfsWG-AYr", "PzQEYwU3Ywb", "z7V3YQxlkcw", "z7V3YQxlkcw", "AifE_ENayp9", "nips_2022_7...
nips_2022_hPfJut2PeLa
Rethinking the Reverse-engineering of Trojan Triggers
Deep Neural Networks are vulnerable to Trojan (or backdoor) attacks. Reverse-engineering methods can reconstruct the trigger and thus identify affected models. Existing reverse-engineering methods only consider input space constraints, e.g., trigger size in the input space. Specifically, they assume the triggers are static patterns in the input space and fail to detect models with feature space triggers such as image style transformations. We observe that both input-space and feature-space Trojans are associated with feature space hyperplanes. Based on this observation, we design a novel reverse-engineering method that exploits the feature space constraint to reverse-engineer Trojan triggers. Results on four datasets and seven different attacks demonstrate that our solution effectively defends both input-space and feature-space Trojans. It outperforms state-of-the-art reverse-engineering methods and other types of defenses in both Trojaned model detection and mitigation tasks. On average, the detection accuracy of our method is 93%. For Trojan mitigation, our method can reduce the ASR (attack success rate) to only 0.26% with the BA (benign accuracy) remaining nearly unchanged. Our code can be found at https://github.com/RU-System-Software-and-Security/FeatureRE.
Accept
This paper proposes a new reverse-engineering method for trojan attack detection. The idea is to focus on feature representation space so that the detection is more robust to dynamic / input-dependent attacks and other feature-based attacks. The reviewers consider the idea generally novel and effective, and the experiments thorough. Some reviewers hope to see more visual analysis that can provide better insights into the effectiveness of the method.
train
[ "iBlmltaezT", "6FlfWAlkCB", "flUaLHRJoeO", "j68PzJarJLL", "odsILFxFxM1", "FtSYnkAYAH", "bvrjgFA8F_Q", "57JrsuqTIVK", "N0Zev82lbhu", "GKAwISHD31fR", "03kC3hu7AWH", "vgk579XcenC", "ble8O2sG1YF", "RwvtKGZSSiB", "r-GPgTErGP", "CfDKuyOU_VR", "5YYmV228HVQ" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer VKAc,\n\nThanks again for your valuable comments.\nWe genuinely hope you could have a look at the new results and clarifications and kindly let us know if they have addressed your concerns. \nWe would appreciate the opportunity to engage further if needed.\n\n", " Dear Reviewer BzAx,\n\nThank you ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "5YYmV228HVQ", "flUaLHRJoeO", "03kC3hu7AWH", "odsILFxFxM1", "N0Zev82lbhu", "nips_2022_hPfJut2PeLa", "57JrsuqTIVK", "5YYmV228HVQ", "CfDKuyOU_VR", "r-GPgTErGP", "vgk579XcenC", "RwvtKGZSSiB", "nips_2022_hPfJut2PeLa", "nips_2022_hPfJut2PeLa", "nips_2022_hPfJut2PeLa", "nips_2022_hPfJut2PeLa...
nips_2022_EZZsnke1kt
Agreement-on-the-line: Predicting the Performance of Neural Networks under Distribution Shift
Recently, Miller et al. showed that a model's in-distribution (ID) accuracy has a strong linear correlation with its out-of-distribution (OOD) accuracy, on several OOD benchmarks, a phenomenon they dubbed ``accuracy-on-the-line''. While a useful tool for model selection (i.e., the model most likely to perform the best OOD is the one with highest ID accuracy), this fact does not help to estimate the actual OOD performance of models without access to a labeled OOD validation set. In this paper, we show a similar surprising phenomenon also holds for the agreement between pairs of neural network classifiers: whenever accuracy-on-the-line holds, we observe that the OOD agreement between the predictions of any pair of neural networks (with potentially different architectures) also exhibits a strong linear correlation with their ID agreement. Furthermore, we observe that the slope and bias of OOD vs ID agreement closely match those of OOD vs ID accuracy. This phenomenon, which we call agreement-on-the-line, has important practical applications: without any labeled data, we can predict the OOD accuracy of classifiers, since OOD agreement can be estimated with just unlabeled data. Our prediction algorithm outperforms previous methods both in shifts where agreement-on-the-line holds and, surprisingly, when accuracy is not on the line. This phenomenon also provides new insights into neural networks: unlike accuracy-on-the-line, agreement-on-the-line only appears to hold for neural network classifiers.
Accept
This work extensively analyzes the “agreement-on-the-line” phenomenon, relating the agreement and accuracy of models on in-distribution and out-of-distribution data. In particular, one of the findings is that when there is a linear correlation between in-distribution test accuracy and out-of-distribution test accuracy across a set of distinct models trained on this data, there is also a linear correlation between the agreements of pairs of models trained on that data. Agreement-on-the-line can be estimated solely from unlabelled data and can be used to predict potential OOD performance. The main advantages of their method over previous works are not having to rely on assumptions about data shift magnitude, and the ability to aggregate information from many pairs of models. The paper proposes both a pair-model-based assessment and a multi-model-based assessment, which allows for noise and potential bias reductions. Empirical results based on CIFAR-10, ImageNet, and WILDS OOD data show good OOD performance predictions. The paper convinces in all four categories (originality, quality, clarity, and significance), and the reviewers all agree on accepting this work for publication. For the camera-ready version, it would be great if the authors could include a short description of the baseline methods and briefly discuss the reasoning behind choosing the R2 threshold.
train
[ "Tlks8h7KLh", "vg3vCInoIrZ", "nQSAyCH12HwH", "UdpsTpdLFsy", "MQIW-B0HeIIg", "oPDV4JPqSY", "ufyKa5jU_L2k", "TuHaeI-kZlt", "RBA98P9DdRO", "FcBXQlIF-1M", "K8dX4EP1FFTJ", "mRpz9nQ7Sg7", "H2oYFQxScD", "53bpi5m3wI", "23SIP-BTD_9", "KOU0CB_74vi", "5jEoRIC3Gf", "W9fSQIUTuME" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for the suggestions for improving the paper. Since today is the last day of the discussion period, we were wondering if all of your concerns have been addressed. If not, we would be happy to continue the discussion and/or revise the paper.", " Hello,\n\nThank you for your patience. We have updat...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "5jEoRIC3Gf", "MQIW-B0HeIIg", "RBA98P9DdRO", "oPDV4JPqSY", "mRpz9nQ7Sg7", "53bpi5m3wI", "5jEoRIC3Gf", "nips_2022_EZZsnke1kt", "H2oYFQxScD", "5jEoRIC3Gf", "5jEoRIC3Gf", "W9fSQIUTuME", "KOU0CB_74vi", "23SIP-BTD_9", "nips_2022_EZZsnke1kt", "nips_2022_EZZsnke1kt", "nips_2022_EZZsnke1kt",...
nips_2022_boItpVtQ14K
A Statistical Online Inference Approach in Averaged Stochastic Approximation
In this paper we propose a general framework to perform statistical online inference in a class of constant step size stochastic approximation (SA) problems, including the well-known stochastic gradient descent (SGD) and Q-learning. Regarding a constant step size SA procedure as a time-homogeneous Markov chain, we establish a functional central limit theorem (FCLT) for it under weaker conditions, and then construct confidence intervals for parameters via random scaling. To leverage the FCLT results in the Markov chain setting, an alternative condition that is more applicable for SA problems is established. We conduct experiments to perform inference with both random scaling and other traditional inference methods, and find that the former has more accurate and robust performance.
Accept
This paper provides an online framework for stochastic approximation algorithms with fixed step size. Several reviewers commended the work, and I agree that the contributions are sound and relevant to the NeurIPS community. The more negative review provides very constructive criticism, and the knowledgeable reviewer points out flaws that should be addressed in the final version. The authors have done a good job responding to reviewer comments, and have added additional experimental details to bolster the results. Some discussion on the limitations of restricting to constant step size regimes (including some of the points laid out in the discussion/responses to reviewers) will also help readers contextualize and clarify the scope.
train
[ "kfJ8ad7p95", "9rblQFyduAA", "ULOMlx3TQV9", "ik6XdisK_sM", "u0kw4R_SSR", "E3neLYIYGGA", "9iVJzU_Lq6T", "hcNeX4a9R3h", "KAxwmkh1zB5", "9FV8GRl7wbq", "5Q20Yn9Tlis", "qguOQOHYnd", "9z8yFqfjK_" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " OK. Thank you.", " Yes. All replication code for experiments will be uploaded in the camera ready version (in case there may be additional experiments before that) if accepted. ", " I appreciate the responses from the authors. I have no further concern and would maintain my current rating. ", " I appreciate...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "9rblQFyduAA", "ik6XdisK_sM", "9iVJzU_Lq6T", "9FV8GRl7wbq", "hcNeX4a9R3h", "KAxwmkh1zB5", "9z8yFqfjK_", "KAxwmkh1zB5", "qguOQOHYnd", "5Q20Yn9Tlis", "nips_2022_boItpVtQ14K", "nips_2022_boItpVtQ14K", "nips_2022_boItpVtQ14K" ]
nips_2022_I59qJ0sJ2nh
A Ranking Game for Imitation Learning
We propose a new framework for imitation learning---treating imitation as a two-player ranking-based game between a policy and a reward. In this game, the reward agent learns to satisfy pairwise performance rankings between behaviors, while the policy agent learns to maximize this reward. In imitation learning, near-optimal expert data can be difficult to obtain, and even in the limit of infinite data cannot imply a total ordering over trajectories as preferences can. On the other hand, learning from preferences alone is challenging as a large number of preferences are required to infer a high-dimensional reward function, though preference data is typically much easier to collect than expert demonstrations. The classical inverse reinforcement learning (IRL) formulation learns from expert demonstrations but provides no mechanism to incorporate learning from offline preferences and vice versa. We instantiate the proposed ranking-game framework with a novel ranking loss giving an algorithm that can simultaneously learn from expert demonstrations and preferences, gaining the advantages of both modalities. Our experiments show that the proposed method achieves state-of-the-art sample efficiency and can solve previously unsolvable tasks in the Learning from Observation (LfO) setting.
Reject
I went through the paper, reviews and responses. This is a borderline paper. The negative to neutral reviews are more detailed and convincing.
train
[ "x3yGLzfG8sT", "HKR7b7VTc3F", "8pj8WONky4O", "bINuik13NU30", "Bm48pcyGpX2l", "wVVO3MI5cjL", "zW8TiqXwzZe", "IZuqmM1Mjh8", "Jc52WPgq7m8", "NMxsn0uZnrh", "97u2qh469fz5", "DM8EAUvwEnF", "QntWLnLZul1", "PUPh_wvCh-n", "NSqjUFrRrBi", "SgK1L2cdV2Z", "Kws7IQp6vjl", "O0hdJSbCt7Q" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We hope the responses below answer the questions and concerns raised in the reviews. Since the author-reviewer discussion period is about to end soon, please let us know if there are any follow-up questions for the paper.\n\nWe hope that our explanation resolves your concerns, and would appreciate it if you would...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 2, 4 ]
[ "nips_2022_I59qJ0sJ2nh", "NMxsn0uZnrh", "zW8TiqXwzZe", "Bm48pcyGpX2l", "wVVO3MI5cjL", "O0hdJSbCt7Q", "IZuqmM1Mjh8", "Kws7IQp6vjl", "SgK1L2cdV2Z", "NSqjUFrRrBi", "PUPh_wvCh-n", "QntWLnLZul1", "nips_2022_I59qJ0sJ2nh", "nips_2022_I59qJ0sJ2nh", "nips_2022_I59qJ0sJ2nh", "nips_2022_I59qJ0sJ2...
nips_2022_diV1PpaP33
Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer
By learning a sequence of tasks continually, an agent in continual learning (CL) can improve the learning performance of both a new task and `old' tasks by leveraging the forward knowledge transfer and the backward knowledge transfer, respectively. However, most existing CL methods focus on addressing catastrophic forgetting in neural networks by minimizing the modification of the learnt model for old tasks. This inevitably limits the backward knowledge transfer from the new task to the old tasks, because judicious model updates could possibly improve the learning performance of the old tasks as well. To tackle this problem, we first theoretically analyze the conditions under which updating the learnt model of old tasks could be beneficial for CL and also lead to backward knowledge transfer, based on the gradient projection onto the input subspaces of old tasks. Building on the theoretical analysis, we next develop a ContinUal learning method with Backward knowlEdge tRansfer (CUBER), for a fixed capacity neural network without data replay. In particular, CUBER first characterizes the task correlation to identify the positively correlated old tasks in a layer-wise manner, and then selectively modifies the learnt model of the old tasks when learning the new task. Experimental studies show that CUBER can even achieve positive backward knowledge transfer on several existing CL benchmarks for the first time without data replay, where the related baselines still suffer from catastrophic forgetting (negative backward knowledge transfer). The superior performance of CUBER on the backward knowledge transfer also leads to higher accuracy accordingly.
Accept
This paper develops a method to improve backward transfer in continual learning without replay. In particular, upon learning a new task the proposed method also updates parts of the model responsible for the performance of the old tasks. The initial reviews for this paper were quite mixed. All reviewers agreed that parts of the method were novel, but reviewers also found important limitations of the initial manuscript. In the discussion, the authors were able to clarify many questions/criticisms, notably with respect to related work as well as to the exact experimental setup. The updated version of the manuscript now provides much needed details (which should help with reproducibility). After the discussions, some questions regarding the significance of the results and the differences with respect to GEM remain. Regarding significance of the results, reviewer nsyh outlines that while improving backward transfer is significant, the level of improvement shown in the paper is modest and could be a result of overfitting. In particular, the method seems to outperform baselines, but mostly does not achieve positive backward transfer. The problem of backward transfer seems difficult and it is unclear if positive backward transfer is even achievable in most cases (e.g., it seems to rely on having sufficiently high similarity between tasks). In that sense, I find the results to be reasonable. Regarding GEM, there is some disagreement about the level of similarity of this work compared to GEM and whether a simple extension of GEM could compare favourably to this method. Studying this extension of GEM would improve this paper, but I don't see it as a requirement. This perspective is shared by two reviewers. Overall, this remains a very borderline paper, one that studies an important problem, but for which reviewers disagree on significance and even to a certain extent novelty given existing work.
Nonetheless, methods for improving backward transfer have not been studied much in continual learning and so this paper might spark additional work in the area. Further, the paper has no major weaknesses. Based on this I am happy to recommend this paper be accepted. I strongly encourage the authors to take elements of the discussion with the reviewers into account when preparing the camera-ready version of their work. In particular, it would be useful to provide more intuitions regarding the method, especially in light of related works (Saha et al.'21, Lin et al.'22, Kao et al.'21, and GEM Lopez-Paz et al.'17 + A-GEM Chaudhry et al.'19). Considering the extension of GEM would also provide an interesting baseline.
train
[ "044B45kDnUK", "b4uhtNWgCNJ", "H8efuIaUWhz", "FCO0fwsqC79", "-iOV1KtBCCk", "5k7vjQ_KNrC", "cx2hVegmDO", "MI8c0-p-sWB", "5phDoAbZQAO", "GWk2hf3yMzV", "MFRdyyZOSb", "ugn8Kmd4j1c", "AfBh3PjbNM2", "HbehjI3vAsv", "Skp4gxYV7gAJ", "aOqgizd49jlw", "7BwhCX5LO_1", "Et4OR9oTks-", "alYqQWSH0...
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_...
[ " Thank you so much for your reply. We have the following clarifications:\n\n- As the reviewer mentioned earlier, one can store and use the gradient of the previous tasks w.r.t. the previous models to estimate the gradient of previous tasks w.r.t. the current model, so as to evaluate the constraint $\\langle \\frac...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 2, 3 ]
[ "b4uhtNWgCNJ", "cx2hVegmDO", "FCO0fwsqC79", "-iOV1KtBCCk", "MI8c0-p-sWB", "5phDoAbZQAO", "GWk2hf3yMzV", "Skp4gxYV7gAJ", "Et4OR9oTks-", "m_CJSS5-yhV", "ugn8Kmd4j1c", "Et4OR9oTks-", "-NFF8DZNKx5", "-NFF8DZNKx5", "gW-v6RQddAs", "yxFe0mtN1NL", "yxFe0mtN1NL", "yxFe0mtN1NL", "nPndzLes-...
nips_2022_prQkA_NjuuB
Neural Conservation Laws: A Divergence-Free Perspective
We investigate the parameterization of deep neural networks that by design satisfy the continuity equation, a fundamental conservation law. This is enabled by the observation that any solution of the continuity equation can be represented as a divergence-free vector field. We hence propose building divergence-free neural networks through the concept of differential forms, and with the aid of automatic differentiation, realize two practical constructions. As a result, we can parameterize pairs of densities and vector fields that always satisfy the continuity equation by construction, foregoing the need for extra penalty methods or expensive numerical simulation. Furthermore, we prove these models are universal and so can be used to represent any divergence-free vector field. Finally, we experimentally validate our approaches by computing neural network-based solutions to fluid equations, solving for the Hodge decomposition, and learning dynamical optimal transport maps.
Accept
This paper presents two parameterizations for divergence-free vector fields as outputs of neural networks. These parameterizations allow for modeling of compressible fluids and Maxwell's equations, as they automatically enforce the continuity equations. In contrast, existing approaches are based on extra penalty terms and or expensive numerical simulation. While the approach studied does not beat state of the art finite element approaches, it does provide a significant enhancement for physics inspired neural networks, which will provide a platform for further work to continue advancing neural network approaches for physics problems.
train
[ "k85oKTdwZgX", "ifySuf6XjUg", "etREjSd4HOw", "Tc5SZLt2YGK", "IVVjwYyq7yB", "cIkA5d4TKau", "JN1tKzLQjzaI", "ZOJ_3_J55t6B", "x-96ybVpRZE", "0P6j6RkeN4R", "Ok8sOCmOJeQ" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > I appreciate the compute time comparison in Figure 2, but I would really rather see a comparison of modeling accuracy or a learning curve. Both approaches seem to have roughly similar computational scaling.\n\nJust to clarify our perspective: the matrix field construction is faster to compute than the vector fi...
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 3 ]
[ "ifySuf6XjUg", "JN1tKzLQjzaI", "cIkA5d4TKau", "ZOJ_3_J55t6B", "nips_2022_prQkA_NjuuB", "Ok8sOCmOJeQ", "0P6j6RkeN4R", "x-96ybVpRZE", "nips_2022_prQkA_NjuuB", "nips_2022_prQkA_NjuuB", "nips_2022_prQkA_NjuuB" ]
nips_2022_MjaROj4BOwk
Sparse Hypergraph Community Detection Thresholds in Stochastic Block Model
Community detection in random graphs or hypergraphs is an interesting fundamental problem in statistics, machine learning and computer vision. When the hypergraphs are generated by a {\em stochastic block model}, the existence of a sharp threshold on the model parameters for community detection was conjectured by Angelini et al. 2015. In this paper, we confirm the positive part of the conjecture, the possibility of non-trivial reconstruction above the threshold, for the case of two blocks. We do so by comparing the hypergraph stochastic block model with its Erd{\"o}s-R{\'e}nyi counterpart. Furthermore, we show the negative part of the conjecture by relating the model with the so-called {\em multi-type Galton-Watson hypertrees} and considering the broadcasting problem on these hypertrees. The methods developed in this paper are generalised from the study of sparse random graphs by Mossel et al. 2015.
Accept
This paper confirms the positive part of the conjecture of Angelini et al. (2015) about the existence of a sharp threshold on the model parameters for community detection for sparse hypergraphs. Here the authors' contribution is a simpler proof than the existing one and consistent estimation of the model parameters. The authors also prove the negative part of the conjecture. One of the main negative comments of the reviewers is that the presentation can be improved by careful proofreading and editing. The authors should incorporate the changes suggested by the reviewers to make the paper more accessible to the general audience, and point out what the new contributions are in their proof, which seems to closely follow the proof in the graph setting by Mossel et al. [23].
train
[ "c8Md1nvBDt", "JzVtXd_LjzB", "kuCwafntQW1", "1mY-v6Kf2m8", "Olq55nE8y50", "hSOQ04wNim1", "M1p7GwAfWFa", "iVMLc9Zxab6", "zxrARex-4U2", "QQTTc376O3P" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your comments! These would be valuable points to include in an updated version.", " In our initial revision we made only two of the three required changes to fix the caption: all consequential on the same inverted English expression in the original manuscript.\nSpecifically we should have changed ...
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "M1p7GwAfWFa", "kuCwafntQW1", "1mY-v6Kf2m8", "hSOQ04wNim1", "zxrARex-4U2", "QQTTc376O3P", "iVMLc9Zxab6", "nips_2022_MjaROj4BOwk", "nips_2022_MjaROj4BOwk", "nips_2022_MjaROj4BOwk" ]
nips_2022_4NpoSrT8uU-
Deep Bidirectional Language-Knowledge Graph Pretraining
Pretraining a language model (LM) on text has been shown to help various downstream NLP tasks. Recent works show that a knowledge graph (KG) can complement text data, offering structured background knowledge that provides a useful scaffold for reasoning. However, these works are not pretrained to learn a deep fusion of the two modalities at scale, limiting the potential to acquire fully joint representations of text and KG. Here we propose DRAGON (Deep Bidirectional Language-Knowledge Graph Pretraining), a self-supervised approach to pretraining a deeply joint language-knowledge foundation model from text and KG at scale. Specifically, our model takes pairs of text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities. We pretrain this model by unifying two self-supervised reasoning tasks, masked language modeling and KG link prediction. DRAGON outperforms existing LM and LM+KG models on diverse downstream tasks including question answering across general and biomedical domains, with +5% absolute gain on average. In particular, DRAGON achieves notable performance on complex reasoning about language and knowledge (+10% on questions involving long contexts or multi-step reasoning) and low-resource QA (+8% on OBQA and RiddleSense), and new state-of-the-art results on various BioNLP tasks.
Accept
This paper describes a pretraining approach that can leverage both text and knowledge graphs. The model has a cross-modal component that fuses text and KG bidirectionally, and a bidirectional self-supervised objective that learns joint reasoning over text and KG. For pretraining, the model uses traditional masked language modeling (MLM) as in BERT and link prediction, which drops and predicts edges in the input KG. Experiments on two domains demonstrate the effectiveness of the pretraining method. Reviewers agree that the paper will make a good addition to NeurIPS, but disagree in their level of enthusiasm. The concerns expressed are that the approach is not particularly novel and that it requires entity linking, which adds more complexity to pretraining. Nevertheless, the reviewers agree that adding knowledge graph information to pretrained models is important, the experimental results are convincing, and the paper is clear and easy to read. There was a productive discussion between the reviewers and the authors. As a result, the paper was improved by adding results and clarifications. The improved version will make a good contribution to the conference program.
train
[ "CA2IB0qwNFu", "1CEmPtaee7b", "0lQjJIV3g2c", "V4lAVb5InN", "SSk3vTzmamk", "UUBpjURyxl", "B7aHYwncHcA", "bY4dntrDDJY", "df3wn56Hfg1", "XgleNZb6J-3", "UxNiPfRgEcH", "XJUAxpspU4z", "4zfZOebIwd6" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nEntity linker: whilst its true that the entity linker is not complex, it doesnt change the fundamental limitation that it is still required! but thank you for highlighting this anyway, it is certainly a benefit that a complex system is not needed for the domains you study.\n\nThanks for explainin...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "SSk3vTzmamk", "0lQjJIV3g2c", "B7aHYwncHcA", "df3wn56Hfg1", "bY4dntrDDJY", "B7aHYwncHcA", "XJUAxpspU4z", "4zfZOebIwd6", "UxNiPfRgEcH", "nips_2022_4NpoSrT8uU-", "nips_2022_4NpoSrT8uU-", "nips_2022_4NpoSrT8uU-", "nips_2022_4NpoSrT8uU-" ]
nips_2022_toR64fsPir
Structure-Preserving Embedding of Multi-layer Networks
This paper investigates structure-preserving embedding for multi-layer networks with community structure. We propose a novel generative tensor-based latent space model (TLSM) that allows heterogeneity among vertices. It embeds vertices into a low-dimensional latent space so that vertices within the same community are close to each other in the ambient space, and captures layer heterogeneity through a layer-effect factor matrix. With a general and flexible tensor decomposition on the expected network adjacency tensor, TLSM is dedicated to preserving the original vertex relations and layer-specific effects in the network embedding. An efficient alternating updating scheme is developed to estimate the model parameters and conduct community detection simultaneously. Theoretically, we establish the asymptotic consistencies of TLSM in terms of both multi-layer network estimation and community detection. The theoretical results are supported by extensive numerical experiments on both synthetic and real-life multi-layer networks.
Reject
This paper applies tensor decomposition to study the structure-preserving embedding of multi-layer networks, for tasks such as community detection and link prediction. While the reviewers appreciate several technical novelties in the paper, they still have a number of concerns, such as the novelty of tensor decomposition in this context, the theoretical convergence, the practicality of the consistency analysis, and the empirical performance of the proposed algorithm on real-world data. Overall, the paper looks promising, but falls a little below the high bar of NeurIPS. The authors are encouraged to revise the paper based on the reviewer comments and submit it to the next venue.
train
[ "eE48ezrrBXY", "opVCF_m7-OO", "ytXv0kyxVi2", "O5aB5_8cPJf", "4DVT8dPwPlh", "lRjlMXl6_eo", "dDiS4dr6d4p", "kzs9OHRj33P", "6iPUuTV1yOq" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your insightful comments, and we had try our best efforts to address them. Further discussions or elaborations are welcome.", " We really appreciate for your valuable comments as well as positive feedback, which indeed motivates us to improve the quality of the paper. We are also open for fu...
[ -1, -1, -1, -1, -1, -1, 4, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "O5aB5_8cPJf", "4DVT8dPwPlh", "lRjlMXl6_eo", "6iPUuTV1yOq", "kzs9OHRj33P", "dDiS4dr6d4p", "nips_2022_toR64fsPir", "nips_2022_toR64fsPir", "nips_2022_toR64fsPir" ]
nips_2022_u6GIDyHitzF
An Asymptotically Optimal Batched Algorithm for the Dueling Bandit Problem
We study the $K$-armed dueling bandit problem, a variation of the traditional multi-armed bandit problem in which feedback is obtained in the form of pairwise comparisons. Previous learning algorithms have focused on the fully adaptive setting, where the algorithm can make updates after every comparison. The "batched" dueling bandit problem is motivated by large-scale applications like web search ranking and recommendation systems, where performing sequential updates may be infeasible. In this work, we ask: is there a solution using only a few adaptive rounds that matches the asymptotic regret bounds of the best sequential algorithms for $K$-armed dueling bandits? We answer this in the affirmative under the Condorcet condition, a standard setting of the $K$-armed dueling bandit problem. We obtain asymptotic regret of $O(K^2\log^2(K)) + O(K\log(T))$ in $O(\log(T))$ rounds, where $T$ is the time horizon. Our regret bounds nearly match the best regret bounds known in the fully sequential setting under the Condorcet condition. Finally, in computational experiments over a variety of real-world datasets, we observe that our algorithm using $O(\log(T))$ rounds achieves almost the same performance as fully sequential algorithms (that use $T$ rounds).
Accept
This paper makes nice progress in the "dueling bandits" framework by giving a near-optimal regret bound under (exponentially) fewer rounds of adaptivity. This makes theoretical progress on this problem and may have real-world applications in settings where adaptivity is costly or difficult to implement with each round.
train
[ "PZ8iVT_EObh", "i0tCzcqhvnQ", "B1x4JlU9Tz", "d9JEdD4z8pV", "mFq-7RzeXU", "JCnjcRXYGuP", "asvxnifX6X", "rEgPyKqtS_", "QnScr5ckDmR" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks the authors for clarification. I increased the rating.", " Dear authors,\n\nmany thanks for your response!\n\n### Re: KL-divergence proof. \nGreat! I agree that it is straightforward to use a KL-based elimination criteria and show theoretical results. Therefore, I have no doubts that you have done there ...
[ -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "mFq-7RzeXU", "d9JEdD4z8pV", "nips_2022_u6GIDyHitzF", "QnScr5ckDmR", "rEgPyKqtS_", "asvxnifX6X", "nips_2022_u6GIDyHitzF", "nips_2022_u6GIDyHitzF", "nips_2022_u6GIDyHitzF" ]
nips_2022_Cl9dcH6Xkcj
Faster Deep Reinforcement Learning with Slower Online Network
Deep reinforcement learning algorithms often use two networks for value function optimization: an online network, and a target network that tracks the online network with some delay. Using two separate networks enables the agent to hedge against issues that arise when performing bootstrapping. In this paper we endow two popular deep reinforcement learning algorithms, namely DQN and Rainbow, with updates that incentivize the online network to remain in the proximity of the target network. This improves the robustness of deep reinforcement learning in presence of noisy updates. The resultant agents, called DQN Pro and Rainbow Pro, exhibit significant performance improvements over their original counterparts on the Atari benchmark demonstrating the effectiveness of this simple idea in deep reinforcement learning. The code for our paper is available here: Github.com/amazon-research/fast-rl-with-slow-updates.
Accept
This paper proposes a simple method to improve the robustness and sample-efficiency of value-based deep RL methods. The idea is to regularize parameters not to deviate much from target-network parameters. The paper also provides theoretical results showing its convergence property and justifying why the proposed method can accelerate convergence. The results across all Atari games are strong. All of the reviewers appreciated the simplicity and the effectiveness of the method. Some of the minor concerns were addressed during the rebuttal period. Thus, I recommend accepting this paper.
train
[ "QWJWOSXeCT-", "qtrK_nTWDWM_", "RIz1qF1re8k", "knfOsvBT-CC", "VMV2D8G7sk7", "iE0n8NY473", "uEBXVy0QF9h", "fCrr-YgjPm4", "AnU4kxxK9GY" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for your response and for providing additional results. New results look good and you have made a number of fair comments. Based on what I saw so far I recommend acceptance of this work. ", " Thanks for the reply. My concerns have been resolved, and thus I recommend an acceptance.", " We a...
[ -1, -1, -1, -1, -1, -1, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "knfOsvBT-CC", "VMV2D8G7sk7", "nips_2022_Cl9dcH6Xkcj", "fCrr-YgjPm4", "uEBXVy0QF9h", "AnU4kxxK9GY", "nips_2022_Cl9dcH6Xkcj", "nips_2022_Cl9dcH6Xkcj", "nips_2022_Cl9dcH6Xkcj" ]
nips_2022_mhp4wLwiAI-
Old can be Gold: Better Gradient Flow can Make Vanilla-GCNs Great Again
Despite the enormous success of Graph Convolutional Networks (GCNs) in modeling graph-structured data, most of the current GCNs are shallow due to the notoriously challenging problems of over-smoothening and information squashing along with conventional difficulty caused by vanishing gradients and over-fitting. Previous works have been primarily focused on the study of over-smoothening and over-squashing phenomena in training deep GCNs. Surprisingly, in comparison with CNNs/RNNs, very limited attention has been given to understanding how healthy gradient flow can benefit the trainability of deep GCNs. In this paper, firstly, we provide a new perspective of gradient flow to understand the substandard performance of deep GCNs and hypothesize that by facilitating healthy gradient flow, we can significantly improve their trainability, as well as achieve state-of-the-art (SOTA) level performance from vanilla-GCNs. Next, we argue that blindly adopting the Glorot initialization for GCNs is not optimal, and derive a topology-aware isometric initialization scheme for vanilla-GCNs based on the principles of isometry. Additionally, contrary to ad-hoc addition of skip-connections, we propose to use gradient-guided dynamic rewiring of vanilla-GCNs with skip connections. Our dynamic rewiring method uses the gradient flow within each layer during training to introduce on-demand skip-connections adaptively. We provide extensive empirical evidence across multiple datasets that our methods improve gradient flow in deep vanilla-GCNs and significantly boost their performance to comfortably compete and outperform many fancy state-of-the-art methods. Codes are available at: https://github.com/VITA-Group/GradientGCN.
Accept
The paper proposes to promote the training of deep GCNs from the gradient flow perspective by introducing topology-aware isometric initialization and Dirichlet-energy-guided rewiring. The intuition of looking into healthy gradient flow is innovative and the observations are insightful. The experiments are generally convincing and demonstrate the effectiveness of the proposed method. The reviewers raised several concerns about technical and experimental details, such as the use of Dirichlet energy and the discussion of the trainability issue. The authors provide a nice rebuttal, and the discussions should be included in the revision to better present the research work.
val
[ "sUuEyiP4iHw", "EN37g6bWnUk", "OFuEgSsjXh4", "gkkQeqLfMQ", "vR6nEAxNU4Z", "0VevBpw95gu", "0TPy9kWmWU5", "MuDs5qf_pIf", "tGfJjvrbtqi", "1WpXL1JaWc2", "2BnU0482RfX", "vV8wQM_uU8K", "vUxTpVJOfsQ", "rt74E_jxcsk", "3mWdfjGSeu-", "djHzBvkz6c5", "7gp4ZLYAdo", "LUyxEMhScf", "1u2NuXg8iNe"...
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and some of my questions are addressed, but I still have some concerns left:\n\n1. Comparison with the results in [1]: You should compare with the results in table 2 of [1], not table 1. I don't see significant improvement.\n\n2. Dirichlet Energy and the expressiveness: From my point of v...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2, 4 ]
[ "rt74E_jxcsk", "OFuEgSsjXh4", "7gp4ZLYAdo", "LUyxEMhScf", "7gp4ZLYAdo", "djHzBvkz6c5", "LUyxEMhScf", "7gp4ZLYAdo", "djHzBvkz6c5", "LUyxEMhScf", "1u2NuXg8iNe", "vUxTpVJOfsQ", "7gp4ZLYAdo", "3mWdfjGSeu-", "djHzBvkz6c5", "nips_2022_mhp4wLwiAI-", "nips_2022_mhp4wLwiAI-", "nips_2022_mhp...
nips_2022__h29VprPHD
Offline Goal-Conditioned Reinforcement Learning via $f$-Advantage Regression
Offline goal-conditioned reinforcement learning (GCRL) promises general-purpose skill learning in the form of reaching diverse goals from purely offline datasets. We propose $\textbf{Go}$al-conditioned $f$-$\textbf{A}$dvantage $\textbf{R}$egression (GoFAR), a novel regression-based offline GCRL algorithm derived from a state-occupancy matching perspective; the key intuition is that the goal-reaching task can be formulated as a state-occupancy matching problem between a dynamics-abiding imitator agent and an expert agent that directly teleports to the goal. In contrast to prior approaches, GoFAR does not require any hindsight relabeling and enjoys uninterleaved optimization for its value and policy networks. These distinct features confer GoFAR with much better offline performance and stability as well as statistical performance guarantee that is unattainable for prior methods. Furthermore, we demonstrate that GoFAR's training objectives can be re-purposed to learn an agent-independent goal-conditioned planner from purely offline source-domain data, which enables zero-shot transfer to new target domains. Through extensive experiments, we validate GoFAR's effectiveness in various problem settings and tasks, significantly outperforming prior state-of-art. Notably, on a real robotic dexterous manipulation task, while no other method makes meaningful progress, GoFAR acquires complex manipulation behavior that successfully accomplishes diverse goals.
Accept
The paper presents a novel method for offline goal-conditioned RL that is based on reformulating offline goal-conditioned RL as a state-occupancy matching problem. From this observation, the authors are able to leverage and adapt previous work (in particular SMODICE) to their setting. Reviewers were in agreement that this was a strong paper, with excellent writing, good mathematical rigor, and thorough experimental work. There were some concerns regarding similarity to SMODICE, which the authors addressed in their rebuttal.
test
[ "n02aJMXxF5", "eDsJqptkh4j", "BhPDjw5DJsy", "flDzPdK15H_", "mv22HrPh3p5P", "SuITz6_ZQL3", "ZEfwcrNjnrZ", "ZaVbWrjhC4R3", "GHz3XwmcfKH", "oWzlVric5EX", "PZc5Zk_FXGW", "pJ1mSpMC4kj", "Qx-BXxB5Ht1", "lwg_cWz-JMto", "qy_GyYO4s9U", "_VsLdm5mAx0", "QcYCQXw3YRK", "0QapFmzvfz6", "Ao84CqY...
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", ...
[ " Dear reviewer PYXz,\n\nThank you for your time and thoughtful feedback. We will include the posted references and connections to the mentioned prior works/problem formulations in our next revision. Here, we provide some brief clarifications to the new questions.\n\n**Question 1: How does sampling work in Line 5,6...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "eDsJqptkh4j", "PZc5Zk_FXGW", "SuITz6_ZQL3", "GHz3XwmcfKH", "oWzlVric5EX", "ZaVbWrjhC4R3", "0QapFmzvfz6", "qy_GyYO4s9U", "QcYCQXw3YRK", "nips_2022__h29VprPHD", "pJ1mSpMC4kj", "Qx-BXxB5Ht1", "lwg_cWz-JMto", "Ao84CqYgQGP", "_VsLdm5mAx0", "3xuCF7FNXGY", "AD3zw0crhM", "ybVV7mDZTzB", ...
nips_2022_3yO3MiSOkH4
Graph Few-shot Learning with Task-specific Structures
Graph few-shot learning is of great importance among various graph learning tasks. Under the few-shot scenario, models are often required to conduct classification given limited labeled samples. Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs) and perform classification across a series of meta-tasks. Nevertheless, these methods generally rely on the original graph (i.e., the graph that the meta-task is sampled from) to learn node representations. Consequently, the learned representations for the same nodes are identical in all meta-tasks. Since the class sets are different across meta-tasks, node representations should be task-specific to promote classification performance. Therefore, to adaptively learn node representations across meta-tasks, we propose a novel framework that learns a task-specific structure for each meta-task. To handle the variety of nodes across meta-tasks, we extract relevant nodes and learn task-specific structures based on node influence and mutual information. In this way, we can learn node representations with the task-specific structure tailored for each meta-task. We further conduct extensive experiments on five node classification datasets under both single- and multiple-graph settings to validate the superiority of our framework over the state-of-the-art baselines.
Accept
The paper attempts to improve few-shot learning over graphs. In this regard, the authors propose a multi-stage approach where first relevant nodes are identified and desirable edge weights are learned for each few-shot node classification task, so that the input graph structure is tailored for each task individually. The proposed method is based on insights from theoretical analysis. Experiments are carried out to show the proposed method is more effective than several baseline methods. We thank the authors and reviewers for actively engaging in discussion and taking steps towards improving the paper, including providing additional experiments. Some concerns remain regarding the theoretical analysis of the interaction of influence between nodes and layers; it would be nice to discuss these in the final version. Other minor fixes: - Line 23 in appendix typo: fix partial latex symbol - Eq 6 below line 38: First equality should be inequality
val
[ "vCRU2Dn7HB0", "RLTgD8TICQN", "UVx0lM7GmV8", "VEPSJqbCPN7", "1xr9MB49os", "utl88etnj4K", "He7-60QW8cEa", "CvRDDy3ie23", "RzFlQt2Ujj5", "teziGXrXLDf", "wyVgl6Vl8v", "UQ6x5danr7DG", "D00dupngDw", "NhASbsaruhw", "JamJgbx3C2t", "-NVEB9WF4dD", "BxqWdmS65XK", "hw-wD4Hj0o-", "6mB6KYwKxn...
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_r...
[ " Dear Reviewer,\n\nThank you so much! We really appreciate your suggestions and would like to include the analysis in the paper. We will keep up the good work!\n\nThank you!", " Dear Reviewer,\n\nThank you so much! We really appreciate your cognition of our paper. We will keep up the good work!\n\nThank you!", ...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 5 ]
[ "utl88etnj4K", "UVx0lM7GmV8", "VEPSJqbCPN7", "BxqWdmS65XK", "He7-60QW8cEa", "JamJgbx3C2t", "-NVEB9WF4dD", "6mB6KYwKxnl", "hw-wD4Hj0o-", "BxqWdmS65XK", "UQ6x5danr7DG", "D00dupngDw", "P0C--Yl561j", "BxqWdmS65XK", "6mB6KYwKxnl", "hw-wD4Hj0o-", "nips_2022_3yO3MiSOkH4", "nips_2022_3yO3M...